
UNIT 1 INTRODUCTION TO LAYER FUNCTIONALITY AND DESIGN ISSUES
Structure

1.0  Introduction
1.1  Objectives
1.2  Connection-oriented vs. Connection-less Services
     1.2.1  Connection-oriented Services
     1.2.2  Connection-less Services
1.3  Implementation of the Network Layer Services
     1.3.1  Packet Switching
     1.3.2  Implementation of Connection-oriented Services
     1.3.3  Implementation of Connection-less Services
1.4  Comparison between Virtual Circuit and Datagram Subnet
1.5  Addressing
     1.5.1  Hierarchical vs. Flat Address
     1.5.2  Static vs. Dynamic Address
     1.5.3  IP Address
1.6  Concept of Congestion
1.7  Routing Concept
     1.7.1  Main Issues in Routing
     1.7.2  Classification of Routing Algorithms
1.8  Summary
1.9  Solutions/Answers
1.10 Further Readings

1.0 INTRODUCTION
In the previous blocks of this course, we have learned the basic functions of the physical
layer and the data link layer in networking. Now, in this unit, we will go through the
functions of the network layer.
The network layer is at level three in the OSI model. It responds to service requests from
the transport layer and issues service requests to the data link layer. It is responsible
for end-to-end (source to destination) packet delivery, whereas the data link layer is
responsible for node-to-node (hop-to-hop) packet delivery. Three important functions
of the network layer are:

• Path Determination: It determines the route taken by the packets from the
source to the destination.
• Forwarding: It forwards packets from the router’s input to the appropriate
router output.
• Call Setup: Some network architectures require a router call setup along the path
before the data flows.

To perform these functions, the network layer must be aware of the topology of the
communication subnet (i.e., the set of routers and communication lines). For
end-to-end delivery, the network layer provides two types of services, i.e.,
connection-oriented service and connection-less service, to the transport layer. The
network layer services meet the following criteria [Ref. 1]:
• The transport layer should not be aware of the topology of the network (subnet).
• Services should be independent of the router technology.


In this unit, we will first go through the basic concepts of these services and will then
differentiate between these two. Then, we will introduce some other concepts like
routing and congestion.

1.1 OBJECTIVES

After going through this unit, you should be able to:

• define basic functions of the network layer;


• differentiate between connection oriented and connection less services;
• define the concept of addressing in networking;
• define congestion in the network layer;
• explain the concept of routing;
• explain the concept of packet switching, and
• define packet switching network.

1.2 CONNECTION-ORIENTED vs. CONNECTION-LESS SERVICES

In computer networks, delivery between a source and a destination can be accomplished
in either of two ways:
• Connection-oriented services
• Connection-less services.

1.2.1 Connection-oriented Services


Connection-oriented services define a way of transmitting data between a sender and
a receiver, in which an end-to-end connection is established before sending any data.
After establishing a connection, a sequence of packets (from the source to the
destination) can be sent one after another. All the packets belonging to a message are
sent over the same connection. When all the packets of a message have been delivered,
the connection is terminated.

In connection-oriented services, the devices at both the endpoints use a protocol


to establish an end-to-end connection before sending any data.

Connection-oriented service usually has the following characteristics:

i) The network guarantees that all packets will be delivered in order without loss
or duplication of data.

ii) Only a single path is established for the call, and all the data follows that path.

iii) The network guarantees a minimal amount of bandwidth and this bandwidth is
reserved for the duration of the call.

iv) If the network is over utilised, future call requests are refused.

Connection-oriented service is sometimes called a “reliable” network service


because:
• It guarantees that data will arrive in the proper sequence.
• A single connection for the entire message facilitates the acknowledgement process
and the retransmission of damaged or lost frames.

Connection-oriented transmission has three stages. These are:

(i) Connection establishment: In connection-oriented services, before transmitting
data, the sending device must first determine the availability of the other device to
exchange data, and a connection must be established over which data can be sent.
Connection establishment requires three steps:
(a) First, the sender computer requests the connection by sending a connection
request packet to the intended receiver.
(b) Then, the receiver computer returns a confirmation packet to the requesting
computer.
(c) Finally, the sender computer returns a packet acknowledging the
confirmation.
(ii) Data transfer: After the connection gets established, the sender starts sending
data packets to the receiver.
(iii) Connection termination: After all the data gets transferred, the connection has
to be terminated. Connection termination also requires a three-way handshake
i.e.,
(a) First, the sender computer requests disconnection by sending a
disconnection request packet.
(b) Then, the receiver computer confirms the disconnection request.
(c) Finally, the sender computer returns a packet acknowledging the
confirmation.
Transmission Control Protocol (TCP) is a connection-oriented protocol.
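To make the three stages concrete, here is a minimal sketch using Python's standard socket library; the address, port and message contents are illustrative assumptions, and the sketch presumes a TCP server is already listening at that address.

    import socket

    HOST, PORT = "127.0.0.1", 9000   # illustrative address; assumes a TCP server is listening here

    # Connection establishment: connect() triggers TCP's three-way handshake.
    with socket.create_connection((HOST, PORT)) as conn:
        # Data transfer: all packets of the message flow over the single established
        # connection, and TCP delivers them reliably and in order.
        conn.sendall(b"part 1 of the message")
        conn.sendall(b"part 2 of the message")
        reply = conn.recv(1024)
        print("received:", reply)
    # Connection termination: leaving the 'with' block closes the socket, which
    # triggers TCP's connection-release handshake.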

1.2.2 Connection-less Services


Connection-less services define a way of communication between two network end
points in which, a message can be sent from one end point to another without prior
arrangement. The sender simply starts sending packets, addressed to the intended
recipient.

Connectionless service is a service that allows the transfer of information


among subscribers without the need for end-to-end connection establishment
procedures.

Connection-less service is sometimes known as “unreliable” network service.


Connection-less protocols are usually described as stateless because the endpoints
have no protocol-defined way of remembering where they are in a “conversation” of
message exchange.

The Internet Protocol (IP) and the User Datagram Protocol (UDP) are connection-less
protocols, whereas TCP (the protocol most commonly used over IP) is connection-oriented.
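By way of contrast, the following sketch (again with an illustrative address) sends datagrams with UDP: there is no connection establishment, each datagram is addressed and sent independently, and nothing guarantees delivery or ordering.

    import socket

    DEST = ("127.0.0.1", 9001)   # illustrative address; no prior arrangement with the receiver

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)   # UDP socket
    sock.sendto(b"datagram 1", DEST)   # each datagram is sent without any handshake
    sock.sendto(b"datagram 2", DEST)
    sock.close()                       # no connection to release; no state was kept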

1.3 IMPLEMENTATION OF THE NETWORK LAYER SERVICES

In this section, we will examine how the network layer services are implemented.
Two different schemes are used, depending on the type of service being offered. These
are known as the virtual circuit subnet (VC subnet) for connection-oriented service and
the datagram subnet for connection-less service. A VC subnet may be compared to the
physical circuit required in a telephone setup: in a connection-oriented service, a route
from the source to the destination must be established in advance. In a datagram subnet,
no advance setup is needed; packets are routed independently. But, before we take up
the implementation issues, let us

revisit the packet switching concept once again. The services are implemented through
a packet-switched network.

1.3.1 Packet Switching

In the fourth unit of Block 1, we introduced the concept of packet switching. We
elaborate on it further in this section, in the context of the network layer. The network
layer operates in a packet-switched network (or subnet) environment, which comprises
several routers linked by transmission lines (leased or dial-up), besides the users'
machines, as shown in Figure 1.

Figure 1: A packet-switched network (source and destination machines on LANs, connected through a subnet of routers R1–R7)

This subnet works in the following manner. Whenever a user wants to send a packet to
another user, s/he transmits the packet to the nearest router, either on its own LAN or
over a point-to-point link to the carrier. The packet is stored there for verification and
then transmitted further to the next router along the way, until it reaches the final
destination machine. This mechanism is called packet switching.

But, why packet switching? Why not circuit switching? Now, let us discuss these
issues.

Circuit switching was not designed for packet transmission. It was primarily designed
for voice communication. It creates temporary (dialed) or permanent (leased)
dedicated links that are well suited to that type of communication [Ref. 2]. Circuit
switching is less suitable for data transmission for the following reasons:

(i) Bursty traffic: Data transmission tends to be bursty, which means that packets
arrive in spurts with gaps in between. In such cases, transmission lines will be mostly
idle, leading to wastage of resources if we use circuit switching for data transmission.

(ii) Single data rate: In the circuit switching mechanism there is a single data rate for
the two end devices, which limits the flexibility and usefulness of a circuit-switched
connection for interconnecting a variety of digital devices.

(iii) No priority to transmission: Circuit switching treats all transmissions as equal.
But often, with data transmission, we may be required to send a certain packet on a
high-priority basis, which may not be possible with the circuit switching approach.
1.3.2 Implementation of Connection-oriented Services

To implement connection-oriented services, we need to form a virtual-circuit (VC)
subnet. The idea behind the creation of a VC is to avoid having to choose a new route
for every packet sent.
In virtual circuits:
• First, a connection needs to be established.
• After establishing a connection, a route from the source machine to the
destination machine is chosen as part of the connection setup and stored in
tables inside the routers. This route is used for all traffic flowing over the
connection.
• After transmitting all the data packets, the connection is released. When the
connection is released, the virtual circuit is also terminated.
In a connection-oriented service, each packet carries an identifier that identifies the
virtual circuit it belongs to.
Now, let us take an example, consider the situation of a subnet in Figure 2. In this
figure, H1,H2 and H3 represent host machines and R1, R2, R3, R4, R5 and R6
represent routers. Processes are running on different hosts.
Here, host H1 has established connection 1 with host H2. It is remembered as the first
entry in each of the routing tables, as shown in Table 1. The first line of R1's table
says that, if a packet bearing connection identifier 1 comes in from H1, it is to be sent
to router R3 and given connection identifier 1. Similarly, the first entry at R3 routes
the packet to R5, also with connection identifier 1.
Now, let us consider a situation in which H3 also wants to establish a connection
with H2. It chooses connection identifier 1 (because it is initiating the connection and
this is its only connection) and asks the subnet to set up the virtual circuit. This
leads to the second row in the table. Note that we have a conflict here: although R1
can easily distinguish connection 1 packets from H1 and connection 1 packets from
H3, R3 cannot do this. For this reason, R1 assigns a different connection identifier to
the outgoing traffic for the second connection (No. 2). In order to avoid conflicts of
this nature, it is important that routers have the ability to replace connection
identifiers in outgoing packets. In some contexts, this is called label switching.
Figure 2: Routing in a virtual-circuit subnet (host machines H1, H2 and H3; routers R1–R6)


Table 1: Routing tables for the VC subnet

R1's table:
    in: H1, 1    out: R3, 1
    in: H3, 1    out: R3, 2

R3's table:
    in: R1, 1    out: R5, 1
    in: R1, 2    out: R5, 2

R5's table:
    in: R3, 1    out: R6, 1
    in: R3, 2    out: R6, 2
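The routing tables of Table 1 can be modelled directly as lookup tables keyed on the pair (incoming line, incoming VC number). The sketch below, which uses the host and router names of Figure 2 but is otherwise only illustrative, shows how a router forwards a packet and rewrites its connection identifier, i.e., performs label switching.

    # Per-router VC table: (incoming line, incoming VC id) -> (outgoing line, outgoing VC id)
    vc_tables = {
        "R1": {("H1", 1): ("R3", 1), ("H3", 1): ("R3", 2)},
        "R3": {("R1", 1): ("R5", 1), ("R1", 2): ("R5", 2)},
        "R5": {("R3", 1): ("R6", 1), ("R3", 2): ("R6", 2)},
    }

    def forward(router, in_line, in_vc):
        """Return the outgoing line and the (possibly rewritten) VC identifier."""
        return vc_tables[router][(in_line, in_vc)]

    # A packet from H3 on connection 1 leaves R1 towards R3 carrying VC number 2,
    # so that R3 can tell it apart from H1's connection 1.
    print(forward("R1", "H3", 1))    # -> ('R3', 2)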
1.3.3 Implementation of Connection-less Services
In this section, we shall discuss how connection-less services are implemented in real
networks. To implement connection-less services, we need a datagram subnet.

In these services, packets are injected into the subnet individually, and their routing
decisions are made independently of each other. Therefore, in connection-less
services, no advance setup is needed. In this context, the packets are frequently
called datagrams and the subnet is called a datagram subnet.

Now, let us take an example to learn how a datagram subnet works. Consider the
situation of Figure 3. In this figure, H1 and H2 represent host machines and R1, R2,
R3, R4, R5 and R6 represent routers. Suppose that the process running at host H1
has a long message to be transmitted to a process running at host H2. To do so, it
transfers the message to the transport layer, with appropriate instructions to deliver it
to the process running at H2. Where is the transport layer process running? It may
also be running on H1, but within the operating system. The transport layer process
adds a transport header to the front of the message and transfers the message (also
called a TPDU) to the network layer. The network layer, too, might be running as
another procedure within the operating system.

Let us assume that the message is five times longer than the maximum packet size;
therefore, the network layer has to break it into five packets, 1, 2, 3, 4 and 5, and send
each of them in turn to router R1 (because H1 is linked to R1) using some point-to-point
protocol. After this, the carrier (supported by an ISP) takes over. Every router has
an internal table telling it where to send packets for each possible destination. Each
table entry is a pair consisting of a destination and the outgoing line to use for that
destination. Only directly-connected lines can be used. For example, in Figure 3, R1
has only two outgoing lines, to R2 and R3, so every incoming packet must be sent to
one of these routers.

As the packets arrive at R1 from H1, packets 1, 2 and 3 are stored briefly (to verify
their checksums). Then, each packet is forwarded to R3 according to R1's table
(not shown here). Packet 1 is then forwarded to R5 and from R5 to R6. When
it gets to R6, it is encapsulated in a data link layer frame and sent to H2. Packets 2
and 3 follow the same route.

However, something different happens to packets 4 and 5. When they get to R1, they
are sent to router R2, even though they have the same destination. For some reason
(for example, congestion), R1 decides to send packets 4 and 5 via a different route than
that of the first three.

The algorithm that manages the tables and makes the routing decisions is known as
the routing algorithm. In the next unit, we shall study routing algorithms. Students are
requested to refer to [Ref. 1] for further study on the implementation of connection-oriented
and connection-less services. You should focus on how the routing tables are constructed.
Figure 3: Routing in a datagram subnet (host machines H1 and H2; routers R1–R6; packets 1–5 from the source process to the destination process)
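A datagram router's table, by contrast, is keyed on the destination alone. The following minimal sketch uses the router names of Figure 3 with invented table entries; changing R1's entry for H2 is all it takes to send later packets (such as packets 4 and 5) over a different route.

    # R1's forwarding table: destination -> outgoing line (only directly connected lines).
    r1_table = {"H1": "H1", "H2": "R3", "R2": "R2", "R3": "R3"}    # illustrative entries

    def route(table, destination):
        """Look up the outgoing line for a datagram addressed to `destination`."""
        return table[destination]

    print(route(r1_table, "H2"))     # packets 1-3 are sent out on the line to R3

    # Later, the routing algorithm may update the entry (for example, because of congestion):
    r1_table["H2"] = "R2"
    print(route(r1_table, "H2"))     # packets 4 and 5 are sent out on the line to R2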

1.4 COMPARISON BETWEEN VIRTUAL CIRCUIT AND DATAGRAM SUBNET

Both virtual circuits and datagrams have their pros and cons. We shall compare them
on the basis of different parameters. These various parameters are:

• Router memory space and bandwidth

Virtual circuits allow packets to contain circuit numbers instead of full destination
addresses. A full destination address in every packet may represent a significant
amount of overhead, and hence waste bandwidth.

• Setup time vs. address parsing time

Using virtual circuits requires a setup phase, which takes time and consumes memory
resources. However, figuring out what to do with a data packet in a virtual-circuit
subnet is easy: the router simply uses the circuit number to index into a table to find
out where the packet goes. In a datagram subnet, a more complicated lookup
procedure is required to locate the entry for the destination.

• Amount of table space required in router memory

A datagram subnet needs to have an entry for every possible destination, whereas a
virtual-circuit subnet just needs an entry for each virtual circuit.

• Quality of service

Virtual circuits have some advantages in guaranteeing quality of service and avoiding
congestion within the subnet because resources (e.g., buffers, bandwidth, and CPU
cycles) can be reserved in advance, when the connection is established. Once the
packets start arriving, the necessary bandwidth and router capacity will be there. With
a datagram subnet, congestion avoidance is more difficult.

• Vulnerability


Virtual circuits also have a vulnerability problem. If, a router crashes and loses its
memory, even if it comes back a second later, all the virtual circuits passing through
it will have to be aborted. In contrast, if a datagram router goes down, only those
users whose packets were queued in the router at the time will suffer, and maybe not
even all those, depending upon whether they have already been acknowledged. The
loss of a communication line is fatal to virtual circuits using it but can be easily
compensated for if datagrams are used.

• Traffic balance
Datagrams also allow the routers to balance the traffic throughout the subnet, since
routes can be changed partway through a long sequence of packet transmissions.
A brief comparison between a virtual circuit subnet and a datagram subnet is given
in Table 2. Students should refer to Reference 1 for further discussion.

Table 2: Comparison between Virtual Circuit and Datagram Subnets (Source: Ref. [1])

Issue                     | Datagram subnet                                          | Virtual-circuit subnet
Addressing                | Each packet contains the full source and destination address | Each packet contains a small VC number
Circuit setup             | Not needed                                               | Required
State information         | Routers do not hold state information about connections  | Each VC requires router table space per connection
Routing procedure         | Each packet is routed independently                      | Route is selected when the VC is set up; all packets follow that route
Effect of router failures | None, except for packets lost during the crash           | All VCs that passed through the failed router are terminated and new virtual circuits have to be established
Quality of service        | Difficult                                                | Easy
Congestion control        | Difficult                                                | Easy

Check Your Progress 1

Give the right choice for the following:

1) Connection-oriented service is sometimes called a …………network service.


(a) Reliable
(b) Unreliable.

2) ………………is a connection-oriented protocol.


(a) UDP
(b) TCP
(c) IP

3) Why are connection-oriented services known as reliable services? Explain briefly.

…………………………………………………………………………………
…………………………………………………………………………………
…………………………………………………………………………………

1.5 ADDRESSING

Network addresses identify devices separately or as members of a group. Addressing


is performed on various layers of the OSI model. Thus, schemes used for addressing
vary on the basis of the protocol used and the OSI layer. On this basis, internetwork
addresses can be categorised into three types. These are:

(a) Data link layer addresses

(b) Media Access Control (MAC) addresses

(c) Network layer addresses.

a) Data Link Layer Addresses

Data-link layer addresses, sometimes referred to as physical or hardware addresses,
uniquely identify each physical network connection of a network device. Usually,
data-link addresses have a pre-established and fixed relationship to a specific device.

End systems generally have only one physical network connection and thus, have
only one data-link address. Routers and other internetworking devices typically have
multiple physical network connections and therefore, have multiple data-link
addresses.

b) Media Access Control (MAC) Addresses

Media Access Control (MAC) addresses are used to identify network entities in
LANs that implement the IEEE MAC addresses of the data link layer. These
addresses are 48 bits in length and are expressed as 12 hexadecimal digits.

MAC addresses are unique for each LAN interface. These addresses consist of a
subset of data link layer addresses. Figure 4 illustrates the relationship between MAC
addresses, data-link addresses, and the IEEE sub-layers of the data link layer.

Figure 4: MAC addresses, data-link addresses, and the IEEE sub-layers of the data link layer are all related

c) Network Layer Addresses

Network addresses are sometimes called virtual or logical addresses. These addresses
are used to identify an entity at the network layer of the OSI model. Network
addresses are usually hierarchical addresses.

1.5.1 Hierarchical vs. Flat Address


Usually Internetwork addresses are of two types:

(i) Hierarchical address

Hierarchical addresses are organised into a number of subgroups, each successively
narrowing the address until it points to a single device, just as a house address does.

(ii) Flat address

A flat address space is organised into a single group, such as your enrolment number.

Hierarchical addressing offers certain advantages over flat addressing schemes. In
hierarchical addressing, address sorting and recall are simplified using comparison
operations. For example, “India” in a street address eliminates any other country as a
possible location. Figure 5 illustrates the difference between hierarchical and flat
address spaces.

Figure 5: Hierarchical and flat address spaces differ in comparison operations

1.5.2 Static vs. Dynamic Address

In networking, the address to a device can be assigned in either of these two ways:

(i) Static address assignment: Static addresses are assigned by a network


administrator according to a preconceived internetwork addressing plan. A static
address does not change until the network administrator changes it manually.

(ii) Dynamic addresses: Dynamic addresses are obtained by devices when they are
attached to a network, by means of some protocol-specific process. A device
using a dynamic address often has a different address each time it connects to the
network.

1.5.3 IP Address

An IP address is a unique address, i.e., no two machines on the Internet can have the
same IP address. It encodes the machine's network number and host number. Every
host and router in an internetwork has an IP address.

The format of an IP address is a 32-bit numeric address written as four numbers
separated by periods. Each number can be from zero to 255. For example, 1.160.10.240
could be an IP address. These numbers define three fields:

(i) Class type: Indicates the IP class to which the address belongs.

(ii) Network identifier (netid): Indicates the network (a group of computers).


Networks with different identifiers can be interconnected with routers.

(iii) Host identifier (hostid): Indicates a specific computer on the network.

Figure 6: IP address format (Class type | Netid | Hostid)

You will read more details on IP address in unit 4.
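As a quick illustration of how the three fields are carved out of the 32-bit address, the toy parser below applies the classful rules (class A: first byte 0-127 with an 8-bit netid; class B: 128-191 with a 16-bit netid; class C: 192-223 with a 24-bit netid). It is only a sketch for this unit's discussion; IP addressing is treated properly in Unit 4.

    def split_classful(ip):
        """Split a dotted-decimal IPv4 address into (class, netid octets, hostid octets)."""
        octets = [int(x) for x in ip.split(".")]
        first = octets[0]
        if first < 128:          # Class A: 8-bit netid, 24-bit hostid
            return "A", octets[:1], octets[1:]
        elif first < 192:        # Class B: 16-bit netid, 16-bit hostid
            return "B", octets[:2], octets[2:]
        elif first < 224:        # Class C: 24-bit netid, 8-bit hostid
            return "C", octets[:3], octets[3:]
        return "D/E", octets, []  # multicast / reserved ranges

    print(split_classful("1.160.10.240"))    # ('A', [1], [160, 10, 240])
    print(split_classful("192.168.20.5"))    # ('C', [192, 168, 20], [5])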

Check Your Progress 2


1) What are three types of internetwork addresses? Explain in brief.
…………………………………………………………………………………
…………………………………………………………………………………
…………………………………………………………………………………
2) Differentiate between the following:

(i) Hierarchical address and Flat address


(ii) Static and Dynamic address.

…………………………………………………………………………………
…………………………………………………………………………………
…………………………………………………………………………………

1.6 CONCEPT OF CONGESTION

In the network layer, when the number of packets sent to the network is greater than
the number of packets the network can handle (the capacity of the network), a problem
known as congestion occurs. This is just like congestion on a road due to heavy traffic.
In networking, congestion occurs on shared networks when multiple users contend for
access to the same resources (bandwidth, buffers, and queues).

When the number of packets sent into the network is within its limits, almost all
packets are delivered. However, when the traffic load increases beyond the network
capacity, the system starts discarding packets.

Figure 7 shows congestion in a network due to too much traffic.


Figure 7: Congestion (number of packets delivered to the destination vs. offered traffic: up to the maximum capacity of the network, packets are delivered; beyond it, traffic is discarded due to congestion)

Because routers receive packets faster than they can forward them, one of two things
may happen in case of congestion:

• The subnet may prevent additional packets from entering the congested region
until those already present can be processed, or

• The congested routers can discard queued packets to make room for those that
are currently arriving (as the sketch below illustrates).
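The second behaviour can be pictured with a very small sketch of a router buffer of fixed size; the numbers are arbitrary and the model ignores timing, but it shows how traffic offered beyond the router's capacity turns into discarded packets.

    from collections import deque

    QUEUE_CAPACITY = 4                 # illustrative buffer size
    queue, discarded = deque(), 0

    def packet_arrives(pkt):
        """Enqueue an arriving packet, or discard it if the router's buffer is already full."""
        global discarded
        if len(queue) < QUEUE_CAPACITY:
            queue.append(pkt)
        else:
            discarded += 1             # congestion: arrivals outpaced forwarding

    for p in range(10):                # a burst of 10 packets arrives before any are forwarded
        packet_arrives(p)

    print(len(queue), "queued,", discarded, "discarded")    # 4 queued, 6 discarded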

Congestion Control

Congestion control refers to the network mechanisms and techniques used to control
congestion and keep the load below the network's capacity.

Congestion handling can be divided into the following:


• Congestion recovery: Restore the operating state of the network when demand
exceeds capacity.
• Congestion avoidance: Anticipate congestion and avoid it so that congestion
never occurs.

Storing content closer to users, i.e., caching, can be a very effective congestion control
scheme. In this manner, the majority of the traffic can be obtained locally rather than
from distant servers along routed paths that may experience congestion.
Some basic techniques to manage congestion are:

1) End-system flow control: This is not a congestion control scheme. It is a way


of preventing the sender from overrunning the buffers of the receiver.

2) Network congestion control: In this scheme, end systems throttle back in


order to avoid congesting the network. The mechanism is similar to end-to-end
flow controls, but the intention is to reduce congestion in the network, not at
the receiver’s end.

3) Network-based congestion avoidance: In this scheme, a router detects that


congestion may occur and attempts to slow down senders before queues
become full.

4) Resource allocation: This technique involves scheduling the use of physical
circuits or other resources, perhaps for a specific period of time. A virtual
circuit built across a series of switches with guaranteed bandwidth is a form
of resource allocation. This technique is difficult, but it can eliminate network
congestion by blocking traffic that is in excess of the network capacity.

1.7 ROUTING CONCEPT

Suppose you need to go from one location (A) to another location (B) in your city, and
more than one route is available for going from A to B. In this case, you first decide the
best route for going from A to B. This decision may be based on a number of factors
such as distance (the route with the minimum distance), time (the route with the least
traffic jams), cost, etc. After deciding the best route, you start moving along that route.
The same principle is at work in computer networks: while transferring data packets in
a packet-switched network, the same kind of decision is made, and this is known as
routing. We can now say that routing is the act of moving data packets in a
packet-switched network from a source to a destination. Along the way, several
intermediate nodes are typically encountered.

Routing occurs at Layer 3 (the network layer) of the OSI reference model. It involves two
basic activities:

• Determining optimal routing paths.
• Forwarding packets through a subnet.

In this section, we will look at several issues related to routing.

1.7.1 Main Issues in Routing

Routing in a network typically involves a rather complex collection of algorithms that
work more or less independently, mainly due to the environment in which they work,
and yet support each other by exchanging services or information. The complexity is
due to a number of reasons. First, routing requires coordination between all the nodes
of the subnet rather than just a pair of modules as, for example, in the data link and
transport layer protocols. Second, the routing system must cope with transmission
link and router failures, requiring redirection of traffic, the setting up of new VCs, and
an update of the routing table databases maintained by the system. Third, to achieve
high performance, the routing algorithm may need to modify its routes when some
areas within the network become congested.

There are two main performance measures that are substantially affected by the
routing algorithm: throughput (quantity of service) and latency, i.e., average packet
delay (quality of service). The parameter throughput refers to the number of packets
delivered in the subnet. Routing interacts with flow control in determining these
performance measures by means of a feedback mechanism, shown in Figure 8. When
the traffic load offered by the external sources to the subnet is within the limits of the
carrying capacity of the subnet, it will be fully accepted into the network, that is,

Figure 8: Interaction of routing and flow control (the offered load passes through flow control; accepted traffic enters the routing function, which determines throughput and delay; excess load is rejected)

Network throughput = offered packets


But, when the offered load exceeds the limit, some packets will be rejected by the flow
control algorithm and

Network Throughput = offered packets – rejected packets

The traffic accepted into the network will experience an average delay per packet that
will depend on the routes chosen by the routing algorithm.

However, throughput will also be greatly affected (if only indirectly) by the routing
algorithm, because typical flow control schemes operate on the basis of striking a
balance between throughput and delay. Therefore, the more successful the routing
algorithm is in keeping delay low, the more traffic the flow control algorithm allows
into the network. While the precise balance between delay and throughput is
determined by flow control, the effect of good routing under high offered load
conditions is to realise a more favourable delay-throughput curve along which flow
control operates, as shown in Figure 9.

Figure 9: Delay vs. throughput curves for good and poor routing (Source: Ref. [2])

Let us take an example to understand the intricacy. In the network of Figure 10, all
links have a capacity of 10 units. There is a single destination (R6) and two origins
(R1 and R2). The offered load from each of R1 and R2 to R6 is 5 units.
Here, the offered load is light and can easily be accommodated, with a short delay, by
routing along the leftmost and rightmost paths, R1-R3-R6 and R2-R5-R6, respectively.
If, instead, the routes R1-R4-R6 and R2-R4-R6 are used, the flow on link (R4, R6)
will equal its capacity, resulting in very large delays.
Figure 10: An example subnetwork (origins R1 and R2 each offer 5 units of traffic to destination R6; all links have a capacity of 10 units) (Source: Ref. [2])
Observe Figure 10 once again. All links have a capacity of 10 units. If all traffic is
routed through the middle link (R4, R6), congestion occurs. If, instead, the paths
R1-R3-R6 and R2-R5-R6 are used, the average delay is much shorter.
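The effect can be checked with a few lines of arithmetic. The sketch below assumes the setup of Figure 10 (every link has capacity 10, and R1 and R2 each offer 5 units of traffic to R6) and adds up the load on every link for the two routing choices: the middle link (R4, R6) is driven to its full capacity, whereas the outer paths leave every link half loaded.

    from collections import Counter

    CAPACITY = 10
    offered = {"R1": 5, "R2": 5}       # each origin offers 5 units of traffic to R6 (Figure 10)

    def link_loads(paths):
        """Sum each origin's traffic over every link of the path chosen for it."""
        loads = Counter()
        for origin, path in paths.items():
            for link in zip(path, path[1:]):
                loads[link] += offered[origin]
        return loads

    middle = {"R1": ["R1", "R4", "R6"], "R2": ["R2", "R4", "R6"]}
    outer  = {"R1": ["R1", "R3", "R6"], "R2": ["R2", "R5", "R6"]}

    print(link_loads(middle))   # link ('R4', 'R6') carries 10 units -> at capacity, very large delays
    print(link_loads(outer))    # every link carries only 5 units   -> short delays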

In conclusion, the effect of good routing is to increase throughput for the same value
of average delay per packet under high offered load conditions, and to decrease the
average delay per packet under low and moderate offered load conditions. Furthermore,
it is evident that the routing algorithm should keep the delay as low as possible for any
given level of offered load, although this is easier said than done analytically. Students
are requested to refer to [Ref. 2] to further enhance their knowledge of this topic.

1.7.2 Classification of Routing Algorithms

Routing can be classified into the following types:

(i) Adaptive Routing

In adaptive routing, routing decisions are taken for each packet separately, i.e., even
for packets belonging to the same destination, the router may select a new route for
each packet. Routing decisions are based on the current condition or topology of the
network.

(ii) Non-adaptive Routing

In non-adaptive routing, routing decisions are not taken repeatedly, i.e., once the
router decides a route for a destination, it sends all packets for that destination along
that same route. Routing decisions are not based on the current condition or topology
of the network.

Check Your Progress 3

1) What is congestion in a network? Explain in brief.
…………………………………………………………………………………
…………………………………………………………………………………
…………………………………………………………………………………
2) Explain various congestion control schemes.
…………………………………………………………………………………
…………………………………………………………………………………
…………………………………………………………………………………

3) What is routing? What are various activities performed by a router?


…………………………………………………………………………………
…………………………………………………………………………………
…………………………………………………………………………………
4) Differentiate between adaptive and non-adaptive routing.
…………………………………………………………………………………
…………………………………………………………………………………
…………………………………………………………………………………


1.8 SUMMARY

In this unit, we looked at the two types of end-to-end delivery services in computer
networks, i.e., connection-oriented service and connection-less service. Connection-oriented
service is a reliable network service, while connection-less service is an unreliable
network service. Then we studied the concept of addressing. A network address
identifies a device separately or as a member of a group. Internetwork addresses can be
categorised into three types, i.e., data link layer addresses, media access control (MAC)
addresses and network layer addresses. After this, we studied a problem that occurs at
the network layer, i.e., congestion, which occurs due to overload on the network. Then,
we discussed routing, which is the act of moving data packets in a packet-switched
network from a source to a destination. We also examined the relationship between
routing and flow control through an example and diagrams.

1.9 SOLUTIONS/ANSWERS
Check Your Progress 1
1) a
2) b
3) Connection-oriented service is sometimes called a “reliable” network
service because:
• It guarantees that data will arrive in the proper sequence.
• Single connection for entire message facilitates acknowledgement process
and retransmission of damaged and lost frames.

Check Your Progress 2


1) Three types of internetwork addresses are:
(i) Data link layer addresses:
Data-link layer addresses are sometimes referred to as physical or hardware
addresses, because they uniquely identify each physical network connection
of a network device.
(ii) Media Access Control (MAC) addresses:
Media Access Control (MAC) addresses are used to identify network entities in
LANs that implement the IEEE MAC addresses of the data link layer.
(iii) Network layer addresses:
Network addresses are sometimes called virtual or logical addresses. These
addresses are used to identify an entity at the network layer of the OSI model.

2) (a) Hierarchical addresses are organised into a number of subgroups, each
successively narrowing the address until it points to a single device, just as a house
address does.

A flat address space is organised into a single group, such as your enrolment number.

(b) Static addresses are assigned by a network administrator according to a


preconceived internetwork addressing plan. A static address does not change
until the network administrator manually changes it.

Dynamic addresses are obtained by devices when they are attached to a


network, by means of some protocol-specific process. A device using dynamic
address often has a different address each time that it connects to the network.

Check Your Progress 3

1) In the network layer, when the number of packets sent to the network is greater
than the number of packets the network can handle (the capacity of the network),
a problem known as congestion occurs.

2) Congestion handling can be broadly divided into the following:

Congestion recovery: Restores the operating state of the network when


demand exceeds capacity.

Congestion avoidance: Anticipates congestion and avoids it so that congestion


never occurs.

Some basic techniques to manage congestion are:

(a) End-system flow control: This is not a congestion control scheme. It is a


way of preventing the sender from overrunning the buffers of the receiver.
(b) Network congestion control: In this scheme, end systems throttle back in
order to avoid congesting the network. The mechanism is similar to end-to-
end flow controls, but the intention is to reduce congestion in the network,
not at the receiver’s end.
(c) Network-based congestion avoidance: In this scheme, a router detects that
congestion may occur and attempts to slow down senders before queues
become full.
(d) Resource allocation: This technique involves scheduling the use of
physical circuits or other resources, perhaps for a specific period of time.
A virtual circuit, built across a series of switches with a guaranteed
bandwidth is a form of resource allocation. This technique is difficult, but
can eliminate network congestion by blocking traffic that is in excess of the
network capacity.

3) Routing is the act of moving data packets in packet-switched network, from a


source to a destination.
A Router is a device used to handle the job of routing. It identifies the
addresses on data passing through it, determines which route the transmission
should take and collects data in packets which are then sent to their
destinations.

4) In adaptive routing, routing decisions are taken on each packet separately i.e.,
for the packets belonging to the same destination, the router may select a new
route for each packet.

In non-adaptive routing, routing decisions are not taken again and again i.e.,
once the router decides a route to the destination, it sends all packets for that
destination on the same route.

1.10 FURTHER READINGS

1) Computer Networks, A. S. Tanenbaum, 4th edition, Prentice Hall of India,
New Delhi, 2002.

2) Data Networks, Dimitri Bertsekas and Robert Gallager, 2nd edition,
Prentice Hall of India, New Delhi, 1997.

UNIT 2 ROUTING ALGORITHMS
Structure

2.0  Introduction
2.1  Objectives
2.2  Flooding
2.3  Shortest Path Routing Algorithm
2.4  Distance Vector Routing
     2.4.1  Comparison
     2.4.2  The Count-to-Infinity Problem
2.5  Link State Routing
2.6  Hierarchical Routing
2.7  Broadcast Routing
2.8  Multicast Routing
2.9  Summary
2.10 Solutions/Answers
2.11 Further Readings

2.0 INTRODUCTION

As you have studied earlier, the main function of the network layer is to find the best
route from a source to a destination. In routing, the route with the minimum cost is
considered to be the best route. For this purpose, a router plays an important role. On
the basis of the cost of each link, a router tries to find the optimal route with the help of
a good routing algorithm. There are a large number of routing algorithms. These
algorithms are a part of the network layer and are responsible for deciding on which
output line an incoming packet should be transmitted. Some of these routing
algorithms are discussed in this unit.

2.1 OBJECTIVES

After going through this unit, you should be able to:

• understand how the shortest path routing algorithm works;


• draw a spanning tree;
• understand the functioning of distance vector routing and link state routing, and
• understand and implement multicast routing.

2.2 FLOODING

Consider an example of a network topology (VC subnet) in which there are some
link failures, or in which a few routers are not operational. These failures will cause
changes in the network topology, which have to be communicated to all the nodes in
the network. This is called broadcasting. There can be many such examples of
broadcasting, in which a message has to be passed on to the entire network.

A widely used broadcasting method, known as flooding, operates in the following


manner. The source node sends its information in the form of a packet to those nodes
in a subnet to which it is directly connected by a link. These nodes relay it further to
their neighbours who are also directly connected, and so on, until the packet reaches

all nodes in the network. For example, in Figure 1(a), R1 will send its packets to
R2 and R3. R2 will send the packet to R5 and R4. Two additional rules are also
applied in order to limit the number of packets to be transmitted. First, a node will not
relay the packet back to the node from which the packet was obtained. For example,
R2 will not send the packet back to R1 if, it has received it from R1. Second, a node
will transmit the packet to its neighbours at most once; this can be ensured by
including on the packet the ID number of the origin node and a sequence number,
which is incremented with each new packet issued by the origin node. By storing the
highest sequence number received for each node, and by not relaying packets with
sequence numbers that are less than or equal to the one stored, a node can avoid
transmitting the same packet more than once, on each of its incident links. On
observing these rules, you will notice that, links need not preserve the order of packet
transmissions; the sequence numbers can be used to recognise the correct order. The
following figure gives an example of flooding and illustrates how, the total number of
packet transmission per packet broadcast lies between L and 2L, where L is the
number of bi-directional links of the network. In this Figure 1(a) Packet broadcasting
from router R1 to all other nodes by using flooding [as in Figure 1(a)] or a spanning
tree [as in Figure 1(b)]. Arrows indicate packet transmissions at the time shown. Each
packet transmission time is assumed to be one unit. Flooding requires at least as many
packet transmissions as the spanning tree method and usually many more. In this
example, the time required for the broadcast packet to reach all nodes is the same for
the two methods. In general, however, depending on the choice of the spanning tree,
the time required for flooding may be less than for the spanning tree method. The
spanning tree is used to avoid the looping of packets in the subnet.

Figure 1: Packet broadcasting from router R1 to all other nodes: (a) by flooding; (b) by using a spanning tree (Source: Ref. [2])

Students should refer to [Ref. 2] for further reading on this topic.
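A minimal sketch of the two flooding rules is given below; the topology is written as an adjacency list (the links R1-R2, R1-R3, R2-R4 and R2-R5 follow the text, while the remaining links are illustrative assumptions), and each node records the highest sequence number it has seen from each origin so that it relays a given packet at most once, and never back on the link it arrived on.

    # Subnet as an adjacency list; only some of the links are given in the text, the rest are illustrative.
    topology = {
        "R1": ["R2", "R3"], "R2": ["R1", "R4", "R5"], "R3": ["R1", "R6"],
        "R4": ["R2"], "R5": ["R2", "R7"], "R6": ["R3"], "R7": ["R5"],
    }
    highest_seq = {node: {} for node in topology}    # per node: origin -> highest sequence number seen

    def flood(node, origin, seq, came_from=None):
        """Accept the packet at `node` and relay it on every incident link except the incoming one."""
        if highest_seq[node].get(origin, -1) >= seq:
            return                                   # already relayed: suppress the duplicate
        highest_seq[node][origin] = seq
        for neighbour in topology[node]:
            if neighbour != came_from:               # rule 1: never send the packet back where it came from
                flood(neighbour, origin, seq, came_from=node)

    flood("R1", origin="R1", seq=1)
    print(sorted(node for node, seen in highest_seq.items() if "R1" in seen))   # every router was reached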

Check Your Progress 1

1) State whether the following statements are True or False:

(i) Dijkstra algorithm divides the node into two sets i.e., tentative and
permanent. T F

(ii) Flooding generates lots of redundant packets. T F

(iii) Flooding discovers only the optimal routes T F

2) What is a spanning tree?

…………………………………………………………………………………...
…………………………………………………………………………………..
……………………………………………………………………………….….

2.3 SHORTEST PATH ROUTING ALGORITHM

The shortest path routing algorithm is a simple and easy-to-understand technique. The
basic idea of this technique is to build a graph of the subnet, with each node of the
graph representing a router and each arc of the graph representing a communication
line, i.e., a link. To find a route between a given pair of routers, the algorithm simply
finds the shortest path between them on the graph. The length of a path can be
measured in a number of ways, such as on the basis of the number of hops or the
geographic distance.

There are a number of algorithms for computing the shortest path between two nodes
of a graph. One of the most widely used is Dijkstra's algorithm, which is explained
below.

In this algorithm, each node has a label which represents its distance from the source
node along the best known path. On the basis of these labels, the algorithm divides
the nodes into two sets, i.e., tentative and permanent. Since no paths are known at the
beginning, all labels are initially tentative. The algorithm works in the following
manner:

1) First, mark the source node as the current node (T-node).
2) Find all the neighbours of the T-node and make them tentative.
3) Now examine all these neighbours.
4) Then, among all the tentative nodes, label the one with the lowest weight as
permanent and mark it as the new T-node.
5) If the destination node is reached or the tentative list is empty, then stop; else go to
step 2.

An example of Dijkstra's routing algorithm is explained in Figure 2. In this example,
we want to find the best route between A and E. Here, we show permanent nodes
with filled circles and T-nodes with the --> symbol.

1) As shown in the Figure below, the source node (A) has been chosen as T-node,
and so its label is permanent.

2) In this step, as you see B, C are the tentative nodes directly linked to
T-node (A). Among these nodes, since B has less weight, it has been chosen as
T-node and its label has changed to permanent.


3) In this step, as you see D, E are the tentative nodes directly linked to T-node(B).
Among these nodes, since D has less weight, it has been chosen as T-node and
its label has changed to permanent.

4) In this step, as you see C, E are the tentative nodes directly linked to T-node(D).
Among these nodes, since E has less weight, it has been chosen as T-node and
its label has changed to permanent.

5) E is the destination node. Since the destination node (E) has now been reached,
we stop here, and the shortest path is A-B-D-E.

Figure 2: Dijkstra’s algorithm to compute the shortest path routing
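A compact sketch of the same procedure is given below. The edge weights are assumptions (Figure 2 is not reproduced with its labels here), chosen so that the best route from A to E is A-B-D-E, matching the worked example; the tentative set is kept in a priority queue and the lowest-cost entry popped from it is made permanent.

    import heapq

    # Illustrative edge weights, chosen so that the least-cost A-to-E route is A-B-D-E.
    graph = {
        "A": {"B": 2, "C": 5},
        "B": {"A": 2, "D": 2, "E": 6},
        "C": {"A": 5, "D": 3},
        "D": {"B": 2, "C": 3, "E": 2},
        "E": {"B": 6, "D": 2},
    }

    def dijkstra(source, target):
        """Return (cost, path) of the least-cost route from source to target."""
        tentative = [(0, source, [source])]          # priority queue of (cost, node, path so far)
        permanent = set()
        while tentative:
            cost, node, path = heapq.heappop(tentative)
            if node in permanent:
                continue
            permanent.add(node)                      # the tentative node with the lowest cost becomes permanent
            if node == target:
                return cost, path
            for neighbour, weight in graph[node].items():
                if neighbour not in permanent:
                    heapq.heappush(tentative, (cost + weight, neighbour, path + [neighbour]))
        return float("inf"), []

    print(dijkstra("A", "E"))    # (6, ['A', 'B', 'D', 'E'])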

2.4 DISTANCE VECTOR ROUTING

Nowadays, computer networks generally use dynamic routing algorithms rather than
the static ones described above, because static algorithms do not take the current
network load into account. Distance vector routing and link state routing are the two
main dynamic algorithms. In this section, we will go through the distance vector
routing algorithm. It is also known as the Bellman-Ford routing algorithm.

Bellman-Ford Algorithm

The Bellman-Ford algorithm can be stated as follows: find the shortest paths from a
given source node under the constraint that the paths contain at most one link; then
find the shortest paths under the constraint that the paths contain at most two links,
and so on. The algorithm thus proceeds in stages. A description of the algorithm is
given below.

s = source node

w(i, j) = link cost from node i to node j; w(i, j) = ∞ if the two nodes are not directly
connected; w(i, j) ≥ 0 if the two nodes are directly connected.

h = maximum number of links in a path at the current stage of the algorithm

Lh(n) = cost of the least-cost path from node s to node n under the constraint of no
more than h links

1. [Initialisation]

L0(n) = ∞, for all n ≠ s


Lh(s) = 0, for all h

2. [Update]

For each successive h ≥ 0:


For each n ≠ s, compute

Lh+1(n) = min_j [ Lh(j) + w(j, n) ]

Connect n with the predecessor node j that achieves the minimum, and eliminate any
connection of n with a different predecessor node formed during an earlier iteration.
The path from s to n terminates with the link from j to n.

For the iteration of step 2 with h = K, and for each destination node n, the algorithm
compares potential paths from s to n of length K + 1 with the path that existed at the
end of the previous iteration. If the previous, shorter path has less cost, then that path
is retained. Otherwise a new path with length K + 1 is defined from s to n; this path
consists of a path of length K from s to some node j, plus a direct hop from node j to
node n. In this case, the path from s to j that is used is the K-hop path for j defined in
the previous iteration.

Table 1 shows the result of applying this algorithm to the public switched network of
Figure 3, using s = 1. At each step, the least-cost paths with a maximum number of
links equal to h are found. After the final iteration, the least-cost path to each node and
the cost of that path have been developed. The same procedure can be used with node 2
as the source node, and so on. Students should apply Dijkstra's algorithm to this subnet
and observe that the result will eventually be the same.

(a) Figure 3: A public switched network of six nodes, with the link costs used in Table 1 marked on the links

(b) Bellman-Ford Algorithm (s = 1)

Table 1: Results of the Bellman-Ford algorithm for s = 1 (Source: Ref. [3])

h | Lh(2)  Path | Lh(3)  Path    | Lh(4)  Path | Lh(5)  Path  | Lh(6)  Path
0 | ∞      —    | ∞      —       | ∞      —    | ∞      —     | ∞      —
1 | 2      1-2  | 5      1-3     | 1      1-4  | ∞      —     | ∞      —
2 | 2      1-2  | 4      1-4-3   | 1      1-4  | 2      1-4-5 | 10     1-3-6
3 | 2      1-2  | 3      1-4-5-3 | 1      1-4  | 2      1-4-5 | 4      1-4-5-6
4 | 2      1-2  | 3      1-4-5-3 | 1      1-4  | 2      1-4-5 | 4      1-4-5-6
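The rows of Table 1 can be reproduced with a short sketch of the staged update Lh+1(n) = min_j [Lh(j) + w(j, n)]. Only the link costs that are implied by the paths and costs of Table 1 are included below; the remaining links of Figure 3 are omitted for brevity.

    INF = float("inf")

    # Undirected link costs implied by the paths and costs of Table 1
    # (the remaining links of Figure 3 are omitted for brevity).
    links = {(1, 2): 2, (1, 3): 5, (1, 4): 1, (3, 4): 3,
             (3, 5): 1, (3, 6): 5, (4, 5): 1, (5, 6): 2}
    nodes = {1, 2, 3, 4, 5, 6}
    w = {}
    for (i, j), cost in links.items():        # make the cost table symmetric
        w[(i, j)] = w[(j, i)] = cost

    def bellman_ford(source, max_hops):
        """Return the cost table L_h(n) for h = 0 .. max_hops, as in the staged algorithm above."""
        L = {n: (0 if n == source else INF) for n in nodes}
        history = [dict(L)]
        for _ in range(max_hops):
            # Keep the previous value if it is cheaper, otherwise extend a shorter path by one hop.
            L = {n: min([L[n]] + [L[j] + w[(j, n)] for j in nodes if (j, n) in w]) for n in nodes}
            L[source] = 0
            history.append(dict(L))
        return history

    for h, L in enumerate(bellman_ford(1, 4)):
        print("h =", h, {n: L[n] for n in sorted(nodes - {1})})
    # h = 4 prints the last row of Table 1: L(2)=2, L(3)=3, L(4)=1, L(5)=2, L(6)=4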

2.4.1 Comparison

Now, let us compare the two algorithms in terms of what information is required by
each node to find out the optimal path. In step 2 of the Bellman-Ford algorithm, the
calculation for node n involves knowledge of the link cost to all neighbouring nodes
of node n [i.e., w(j, n)] plus the total path cost to each of those neighbouring
nodes from a particular source node s [i.e., Lh(j)]. Each node can maintain a set of
costs and associated paths for every other node in the network and exchange this
information with its direct neighbours from time to time. Each node can therefore use
the expression in step 2 of the Bellman-Ford algorithm, based only on information
from its neighbours and knowledge of its link costs, to update its costs and paths. On
the other hand, consider Dijkstra's algorithm. Step 3, it appears, requires that each
node have complete topological information about the network. That is, each node
must know the link costs of all links in the network. Thus, for this algorithm,
information must be exchanged with all other nodes.

In general, an evaluation of the relative merits of the two algorithms should consider
the processing time of the algorithms and the amount of information that must be
collected from other nodes in the network or internet. The evaluation will depend on
the implementation approach and the specific implementation.

A final point: both algorithms are known to converge under static conditions of
topology and link costs, and will converge to the same solution. If the link costs
change over time, the algorithm will attempt to catch up with these changes. However,
if the link cost depends on traffic, which in turn depends on the routes chosen, then a
feedback condition exists that could result in instabilities.
2.4.2 The Count-to-Infinity Problem

One of the serious drawbacks of the Bellman-Ford algorithm is that it responds quickly
to a path with a shorter delay, but responds slowly to a path with a longer delay. This
is known as the count-to-infinity problem. Consider a subnet containing a router whose
best route to destination X has a large cost. If, on the next exchange, neighbour A
suddenly reports a short delay to X, the router just switches over to using the line to A
to send traffic to X. In one vector exchange, the good news is processed.
To see how fast good news propagates, consider the five-node (linear) subnet of
Figure 4, where the delay metric is the number of hops. In Figure 4(a), there are five
routers Ra, Rb, Rc, Rd and Re linked to each other linearly. Suppose router Ra is down
initially and all the other routers know this. In other words, they have all recorded the
delay to Ra as infinity.

(a) Linear subnet Ra–Rb–Rc–Rd–Re. Distances to Ra recorded by Rb, Rc, Rd and Re
    (— denotes infinity):

    —  —  —  —    Initial distance
    1  —  —  —    After 1 exchange of messages
    1  2  —  —    After 2 exchanges of messages
    1  2  3  —    After 3 exchanges of messages
    1  2  3  4    After 4 exchanges of messages

(b) The same subnet after Ra goes down. Distances to Ra recorded by Rb, Rc, Rd and Re:

    1  2  3  4    Initial distance
    3  2  3  4    After 1 exchange of messages
    3  4  3  4    After 2 exchanges of messages
    5  4  5  4    After 3 exchanges of messages
    5  6  5  6    After 4 exchanges of messages
    7  6  7  6    After 5 exchanges of messages
    7  8  7  8    After 6 exchanges of messages
    …
    —  —  —  —    Eventually

Figure 4: Count to infinity problem

We will describe this problem in the following stages: (i) when router Ra is up, and
(ii) when router Ra is down. Now let us take the first stage. When Ra comes up, the
other routers in the subnet learn about it via the information (vector) exchanges. At the
time of the first exchange, Rb learns that its left neighbour has zero delay to Ra. Rb
now makes an entry in its routing table that Ra is just one hop away to the left. All the
other routers still think that Ra is down. At this point, the routing table entries for Ra
are as shown in the second row of Figure 4(a). On the next exchange, Rc learns that
Rb has a path of length 1 to Ra, so it updates its routing table to indicate a path of
length 2, but Rd and Re do not hear the good news until later. Clearly, the good news
is spreading at the rate of one hop per exchange. In a subnet whose longest path is of
length N hops, within N exchanges everyone will know about the newly-revived lines
and routers.

Now, let us consider the second stage, Figure 4(b), in which all the lines and routers
are initially up. Routers Rb, Rc, Rd and Re are at a distance of 1, 2, 3 and 4 from Ra.
Suddenly, Ra goes down, or alternatively, the line between Ra and Rb is cut, which is
effectively the same thing from Rb's point of view.

At the first packet exchange, Rb does not hear anything from Ra. Fortunately, Rc says:
do not worry; I have a path to Ra of length 2. Little does Rb know that Rc's path runs
through Rb itself. For all Rb knows, Rc might have ten lines, all with separate paths to
Ra of length 2. As a result, Rb thinks it can reach Ra via Rc, with a path length of 3.
Rd and Re do not update their entries on the first exchange.

On the second exchange, Rc notices that each of its neighbours claims a path to Ra of
length 3. It picks one of them at random and makes its new distance to Ra 4, as shown
in the third row of Figure 4(b). Subsequent exchanges produce the history shown in the
rest of Figure 4(b).

From Figure 4, it should be clear why bad news travels slowly: no router ever has a
value more than one higher than the minimum of all its neighbours. Gradually, all
routers work their way up to infinity, but the number of exchanges required depends on
the numerical value used for infinity. For this reason, it is wise to set infinity to the
longest path plus 1. If the metric is time delay, there is no well-defined upper bound, so
a high value is needed to prevent a path with a long delay from being treated as down.
Not entirely surprisingly, this problem is known as the count-to-infinity problem. The
core of the problem is that when X tells Y that it has a path somewhere, Y has no way
of knowing whether it itself is on that path.
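The "bad news" half of Figure 4(b) can be reproduced with a few lines of simulation. The sketch below models the linear subnet with a hop-count metric and synchronous vector exchanges after Ra has gone down, and prints each router's believed distance to Ra after every exchange; the printed rows match Figure 4(b).

    routers = ["Rb", "Rc", "Rd", "Re"]
    neighbours = {"Rb": ["Rc"], "Rc": ["Rb", "Rd"], "Rd": ["Rc", "Re"], "Re": ["Rd"]}
    dist_to_ra = {"Rb": 1, "Rc": 2, "Rd": 3, "Re": 4}      # state just before Ra goes down

    def exchange(dist):
        """One synchronous vector exchange; Rb no longer hears anything from Ra directly."""
        return {r: min(dist[n] + 1 for n in neighbours[r]) for r in routers}

    for step in range(6):
        dist_to_ra = exchange(dist_to_ra)
        print("after exchange", step + 1, dist_to_ra)
    # Output: (3, 2, 3, 4), (3, 4, 3, 4), (5, 4, 5, 4), (5, 6, 5, 6), (7, 6, 7, 6), (7, 8, 7, 8)
    # -- exactly the rows of Figure 4(b), slowly counting up towards infinity.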

2.5 LINK STATE ROUTING

As explained above, the distance vector routing algorithm has a number of problems,
such as the count-to-infinity problem. For these reasons, it was largely replaced by a
new algorithm, known as link state routing.

Link state routing protocols are like a road map. A link state router cannot be fooled
as easily into making bad routing decisions, because it has a complete picture of the
network. The reason is that, unlike the approximation approach of distance vector
routing, link state routers have first-hand information from all their peer routers. Each router
originates information about itself, its directly connected links, and the state of those
links. This information is passed around from router to router, each router making a
copy of it, but never changing it. Link-state involves each router building up the
complete topology of the entire network (or at least of the partition on which the
router is situated), thus, each router contains the same information. With this method,
routers only send information to all the other routers when there is a change in the
topology of the network. The ultimate objective is that every router should have
identical information about the network, and each router should be able to calculate its
own best paths independently.
In contrast to the distance-vector routing protocol, which works by sharing its
knowledge of the entire network with its neighbours, link-state routing works by
having each router inform every other router in the network about its nearest
neighbours. The entire routing table is not distributed to any router; only the part of
the table describing the router's own neighbours is.
Link-state routing is also known as shortest path first routing.
Link State Packet
When a router floods the network with information about its neighbourhood, it is said
to be advertising. The basis of this advertising is a short packet called a link state
packet (LSP). An LSP usually contains four fields: the ID of the advertiser, the ID of
the destination network, the cost, and the ID of the neighbour router. The structure of
a LSP is shown in Table 2.
Table 2: Link state packet (LSP)

Advertiser ID      Network ID      Cost      Neighbour ID

………………        ………………       ………………   ………………
………………        ………………       ………………   ………………

How Link State Routing Operates


The idea behind link state routing is simple and can be stated in five parts as
suggested by Tanenbaum [Ref.1]. Each router must do the following:

1) Neighbour discovery

The Router has to discover its neighbours and learn their network addresses. As a
router is booted, its first task is to learn who its neighbours are.

The Router does this by sending a special HELLO packet on each point-to-point line.
The router on the other end is expected to send a reply disclosing its identity. These
names must be globally unique. If two or more routers are connected by a LAN, the
situation becomes slightly more complicated. One way of modeling the LAN is to
consider it as a node itself. Please see reference [1] for further explanation through a
diagram.

2) Measure delay
Another job that a router needs to perform is to measure the delay or cost to each of its
neighbours. The most common way to determine this delay is to send over the line a
special ECHO packet that the other side is required to send back immediately. By
measuring the round-trip time and dividing it by two, the sending router can get a
reasonable estimate of the delay. For even better results, the test can be conducted
several times and the average used.
This method implicitly assumes that delays are symmetric, which may not always be
the case, since delays in the two directions may differ.
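
As a small illustration of the RTT/2 estimate described above, the sketch below assumes a hypothetical callable send_echo_and_wait() that transmits an ECHO packet and blocks until the reply returns; everything else (the number of trials, the symmetric-delay assumption) follows the text.

```python
import time
import statistics

def estimate_delay(send_echo_and_wait, trials=5):
    """Estimate the one-way delay to a neighbour as RTT/2, averaged over trials."""
    samples = []
    for _ in range(trials):
        start = time.monotonic()
        send_echo_and_wait()                       # ECHO out, reply back
        samples.append((time.monotonic() - start) / 2.0)
    return statistics.mean(samples)
```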
3) Building link state packets
After collecting the information needed for the exchange, the next step for each router
is to build a link state packet containing all the data. This packet starts with the

identity of the sender, followed by a sequence number and age, and a list of
neighbours. For each neighbour, the delay to that neighbour is given.

As an example, let’s consider the subnet given in Figure 5 with delays shown as
labels on the lines. For this network, the corresponding link state packets for all six
routers are shown in the Table 3.

Figure 5: A subnet for link state routing (routers A to F; the delay of each line is shown as a label on the line)

Table 3: The link state packets (LSPs) for the subnet in Figure 5

A B C D E F
Seq. Seq. Seq. Seq. Seq. Seq.
Age Age Age Age Age Age
B 4 C 2 B 2 A 7 D 1 A 3
D 7 A 4 F 6 E 1 C 4
F 3 F 4 D 6

Building the link state packets is easy. The hard part is determining when to build
them. One possibility is to build them periodically, that is, at regular intervals.
Another possibility is to build them when some significant event occurs, such as a line
or neighbour going down or coming back up again or changing its properties
appreciably.
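
A link state packet of the kind just described can be represented very compactly. The sketch below is an illustrative Python structure only (the sequence numbers and ages are invented; the neighbour delays shown are those listed for routers A and B in Table 3):

```python
def build_lsp(router_id, seq, age, neighbour_delays):
    """Build an LSP: sender identity, sequence number, age, and the delay
    to each directly connected neighbour."""
    return {"id": router_id, "seq": seq, "age": age, "links": dict(neighbour_delays)}

lsp_a = build_lsp("A", seq=1, age=60, neighbour_delays={"B": 4, "D": 7, "F": 3})
lsp_b = build_lsp("B", seq=1, age=60, neighbour_delays={"C": 2, "A": 4})
print(lsp_a)   # {'id': 'A', 'seq': 1, 'age': 60, 'links': {'B': 4, 'D': 7, 'F': 3}}
```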

4) Distribute the packets

Let us describe the basic algorithm for distributing the link state packets. The
fundamental concept here is to use flooding. To keep the number
of packets flowing in the subnet under control, each packet contains a sequence
number that the originating router increments for each new packet it sends. When a new link state
packet arrives, it is checked against the list of packets already seen by the router. If the
packet is a duplicate, it is discarded; otherwise it is forwarded on all lines except the
one it arrived on. An obsolete packet (i.e., one with a lower sequence number than a
packet the router has already seen from the same source) is also discarded.

The Age field of a link state packet is used to prevent corruption of the sequence
number from causing valid data to be ignored. The age is decremented once per second by each
router that forwards the packet; when it hits zero, the packet is discarded.
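
The discard-or-forward decision in step 4 can be sketched as follows (an illustrative Python fragment; the line names, the `seen` table and the simplified LSP dictionaries are assumptions, and a real router would also maintain the Age field and acknowledgements):

```python
seen = {}   # highest sequence number seen so far, per originating router

def handle_lsp(lsp, arrival_line, all_lines):
    """Flood a new LSP on every line except the one it arrived on;
    discard duplicates and obsolete (lower-sequence) packets."""
    if lsp["seq"] <= seen.get(lsp["id"], -1):
        return []                                     # old or duplicate: discard
    seen[lsp["id"]] = lsp["seq"]
    return [line for line in all_lines if line != arrival_line]

lines = ["line1", "line2", "line3"]
print(handle_lsp({"id": "A", "seq": 5}, "line1", lines))   # ['line2', 'line3']
print(handle_lsp({"id": "A", "seq": 5}, "line2", lines))   # [] -- already seen
```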
How often should data be exchanged?

5) Compute shortest path tree

After accumulating all link state packets, a router can construct the entire subnet graph
because every link is represented. In fact, every link is represented twice, once for
each direction. The two values can be averaged or used separately.

Now, an algorithm like Dijkstra’s algorithm can be run locally to construct the
shortest path to all possible destinations. The results of this algorithm can be installed
in the routing tables, and normal operation resumed.
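
Step 5 is simply a local run of a shortest path algorithm over the assembled graph. The following is a standard Dijkstra sketch in Python (the graph used here is an assumption chosen to resemble, but not necessarily reproduce exactly, the subnet of Figure 5):

```python
import heapq

def dijkstra(graph, source):
    """Shortest path distances from `source`; `graph` maps each node to
    a dict of {neighbour: link cost}."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                                  # stale heap entry
        for neighbour, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(neighbour, float("inf")):
                dist[neighbour] = nd
                heapq.heappush(heap, (nd, neighbour))
    return dist

graph = {"A": {"B": 4, "D": 7, "F": 3}, "B": {"A": 4, "C": 2},
         "C": {"B": 2, "F": 6}, "D": {"A": 7, "E": 1},
         "E": {"D": 1, "F": 4}, "F": {"A": 3, "C": 6, "E": 4}}
print(dijkstra(graph, "A"))   # {'A': 0, 'B': 4, 'D': 7, 'F': 3, 'C': 6, 'E': 7}
```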

Problems in Link State Routing

• In the link state protocol, the memory required to store the data is proportional to
k * n for n routers, each with k neighbours, and the time required for the computation
can also be large.

• Bad data, e.g., data from routers in error, will corrupt the computation.

 Check Your Progress 2


1) Which of the following statements are True or False.

(i) Distance vector routing is a static routing algorithm. T F


(ii) Dijkstra’s algorithms can be run locally in link state routing to construct
the shortest path. T F

2) Answer the following questions briefly.


(a) What are the problems with distance vector routing algorithm? .
…………………………………………………………………………………
…………………………………………………………………………………
…………………………………………………………………………………

(b) What is LSP?


…………………………………………………………………………………
…………………………………………………………………………………
…………………………………………………………………………………

2.6 HIERARCHICAL ROUTING

As you see, in both link state and distance vector algorithms, every router has to save
some information about other routers. When the network size grows, the number of
routers in the network increases. Consequently, the size of routing tables increases, as
well, and routers cannot handle network traffic as efficiently. We use hierarchical
routing to overcome this problem. Let’s examine this subject with an example:

We use a distance vector algorithm to find the best routes between nodes. In the
situation depicted below in Figure 6, every node of the network has to save a routing
table with 17 records.

Here is a typical graph and routing table (Table 4) for A:

Table 4: A's Routing Table

Destination Line Weight


A --- ---
B B 1
C C 1
D B 2
E B 3
F B 3
G B 4
H B 5
I C 5
J C 6
K C 5
L C 4
M C 4
N C 3
O C 4
P C 2
Q C 3

Figure 6: Network graph


In hierarchical routing, routers are classified in groups known as regions (Figure 7).
Each router has only the information about the routers in its own region and has no
information about routers in other regions. So routers just save one record in their
table for every other region. In this example, we have classified our network into five
regions as shown below.
Table 5: A’s Routing table for Hierarchical routing

Destination Line Weight


A --- ---
B B 1
C C 1
Region 2 B 2
Region 3 C 2
Region 4 C 3
Region 5 C 4


Figure 7: Hierarchical routing


If A wants to send packets to any router in region 2 (D, E, F or G), it sends them to B,
and so on. As you can see, in this type of routing, the table sizes are reduced, so
network efficiency improves. The above example shows two-level hierarchical
routing. We can also use three- or four-level hierarchical routing.

In three-level hierarchical routing, the network is classified into a number of clusters.


Each cluster is made up of a number of regions, and each region contains a number of
routers. Hierarchical routing is widely used in Internet routing and makes use of
several routing protocols.
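
A two-level lookup of the kind shown in Table 5 can be sketched as follows (illustrative Python only; the region membership of routers D-G follows Figure 7, and the lookup helper itself is an assumption):

```python
# Two-level hierarchical lookup for router A (regions as in the example above).
region_of = {"A": 1, "B": 1, "C": 1, "D": 2, "E": 2, "F": 2, "G": 2}
my_region = region_of["A"]

# A's hierarchical table: exact entries for its own region, one entry per other
# region (only region 2 is shown here; compare Table 5).
table = {"B": "B", "C": "C", "region 2": "B"}

def next_hop(destination):
    if region_of[destination] == my_region:
        return table[destination]                       # intra-region: exact entry
    return table["region %d" % region_of[destination]]  # inter-region: one entry

print(next_hop("C"))   # 'C'  (same region: full entry)
print(next_hop("E"))   # 'B'  (region 2: single aggregated entry)
```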

2.7 BROADCAST ROUTING

Up to now, we have been discussing the sending of a message from a source to a
single destination. Sometimes a host needs to send a message to all other hosts. This
type of transmission, i.e., sending a message to all destinations simultaneously, is
known as broadcasting. There are a number of methods for broadcasting. These are:

• Send a distinct packet to each destination


This is a very simple method, in which the source sends a distinct packet to each
destination. The major disadvantages of this method are:
a) It wastes bandwidth.
b) The source needs to have a complete list of all destinations.
For these reasons, this method is the least desirable of the methods discussed here.
• Flooding
This is also a very simple method of broadcasting. In this method, every incoming
packet is sent out on every outgoing line except the line it arrived on. The algorithm is
very simple to implement, but its major disadvantage is that it generates a lot of
redundant packets and thus consumes too much bandwidth.
• Multidestination routing
In this method each packet contains either a list of destinations or a bit map indicating
the desired destinations. When a packet arrives at a router, the router determines the
set of output lines that will be needed by checking all the destinations. It chooses only
those output lines that are the best route to at least one of the destinations. The router
then creates a new copy of the packet for each output line to be used, and in each copy
it includes only those destinations that are to use that line. The destination set is
therefore partitioned among the output lines. After a sufficient number of hops, each
packet will carry only one destination and can be treated as a normal packet.

• Using a spanning tree


A spanning tree is a subset of a graph that includes all the nodes of the graph but
contains no loops. In this method, each router knows which of its lines belong to the
spanning tree, and when a packet arrives at a router, the router copies it onto all the
spanning tree lines except the one it arrived on.

Advantage of this method is that it makes excellent use of bandwidth and generates
only the minimum number of packets required to do the job.

In this method, each router must have knowledge of some spanning tree. Sometimes
this information is available (e.g., with link state routing) but sometimes it is not (e.g.,
with distance vector routing); this is the major disadvantage of the method.

• Reverse path forwarding


Our last broadcast algorithm is an attempt to approximate the behaviour of the
previous one, even when the routers do not know anything at all about spanning trees.
The idea, called reverse path forwarding, is remarkably simple once it has been
pointed out.

In this method, when a broadcast packet arrives at a router, the router checks whether
the packet arrived on the line that is normally used for sending packets to the source
of the broadcast or not.

If the packet arrived on the line that is normally used for sending packets to the source
of the broadcast then

Router forwards copies of it onto all lines except the one it arrived on.

Else (i.e., packet arrived on a line other than the preferred one for reaching the source)

Router discards the packet as a likely duplicate.
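
The reverse path forwarding check can be expressed in a few lines. The sketch below is illustrative Python (the routing table, line names and source name are assumptions made for the example):

```python
def reverse_path_forward(source, arrival_line, routing_table, all_lines):
    """Forward a broadcast packet only if it arrived on the line this router
    would itself use to reach the broadcast's source; otherwise treat it as
    a likely duplicate and discard it."""
    preferred_line = routing_table[source]        # line used to send TO the source
    if arrival_line == preferred_line:
        return [line for line in all_lines if line != arrival_line]
    return []                                     # discard

routing_table = {"S": "line1"}                    # best line towards source S
lines = ["line1", "line2", "line3"]
print(reverse_path_forward("S", "line1", routing_table, lines))  # ['line2', 'line3']
print(reverse_path_forward("S", "line3", routing_table, lines))  # [] -> dropped
```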

2.8 MULTICAST ROUTING

In many cases, you need to send the same data to multiple clients at the same time. If
we use unicasting, the server has to send a separate but identical data stream to each
client, which is a waste of both server and network capacity. If we use broadcasting, it
would be inefficient, because sometimes receivers are not interested in the message
but receive it nonetheless, or sometimes they are interested but are not supposed to see
the message.

In such cases i.e., for sending a message to a group of users (clients), we use another
technique known as multicasting. The routing algorithm used for multicasting, is
called multicast routing.

Group management is the heart of multicasting. For group management, we require


some methods to create and destroy a group and to allow processes to join and leave a
group. When a router joins a group, it informs its host of this fact. For routing, routers

36
mainly want to know which of their hosts belong to which group. For this, either the Routing Algorithms
host must inform their router about changes in the group membership, or routers must
query their hosts periodically. On receiving this information, routers tell their
neighbours, so the informations propagated through the subnet.

Now, we will learn the working of multicasting through an example. In our example
(as shown in Figure 8), we have taken a network containing two groups i.e., group 1
and 2. Here, some routers are attached to hosts that belong to only one of these groups
and some routers are attached to hosts that belong to both of these groups.

Figure 8: A network containing two groups, 1 and 2 (each router is labelled with the group(s) that the hosts attached to it belong to)

To do multicast routing, first, each router computes a spanning tree covering all other
routers. For example, Figure 9 shows spanning tree for the leftmost router.

Figure 9: A spanning tree for the leftmost router

Now, when a process sends a multicast packet to a group, the first router examines its
spanning tree and prunes it. Pruning is the task of removing all lines that do not lead
to hosts that are members of the group. For example, Fig. 10 shows the pruned
spanning tree for group 1 and Fig. 11 shows the pruned spanning tree for group 2.
There are a number of ways of pruning the spanning tree. The simplest one can be
used if link state routing is used and each router is aware of the complete topology,
including which hosts belong to which groups. The spanning tree can then be pruned
by starting at the end of each path and working toward the root, removing all routers
that do not belong to the group under consideration. With distance vector routing, a
different pruning strategy can be followed. The basic algorithm is reverse path
forwarding. However, whenever a router with no hosts interested in a particular group
and no connections to other routers receives a multicast message for that group, it
responds with a PRUNE message, thus telling the sender not to send it any more
multicasts for that group. When a router with no group members among its own hosts
has received such messages on all its lines, it, too, can respond with a PRUNE
message. In this way, the subnet is recursively pruned.

Figure 10: A pruned spanning multicast tree for group 1

Figure 11: A pruned spanning multicast tree for group 2

After pruning, multicast packets are forwarded only along the appropriate spanning
tree. This algorithm needs to store a separate pruned spanning tree for each source of
each group; therefore, it does not scale well to large networks.
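
The pruning step can be sketched recursively: starting from the leaves and working toward the root, a branch survives only if it leads to at least one router with group members. The Python fragment below is illustrative only (router names and group memberships are assumptions, not the topology of Figure 8):

```python
def prune(tree, root, members, group):
    """`tree` maps a router to the list of its children in the spanning tree;
    `members` maps a router to the set of groups its attached hosts belong to.
    Branches with no members of `group` below them are cut."""
    def keep(node):
        surviving = [child for child in tree.get(node, []) if keep(child)]
        tree[node] = surviving
        return group in members.get(node, set()) or bool(surviving)
    keep(root)
    return tree

tree = {"R1": ["R2", "R3"], "R2": ["R4"], "R3": [], "R4": []}
members = {"R3": {1, 2}, "R4": {2}}
print(prune(dict(tree), "R1", members, 1))  # the R2/R4 branch is removed for group 1
```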

 Check Your Progress 3


1) Which of the following statements are True or False.

(a) In hierarchical routing, each router has no information about routers in


other regions. T F
(b) A spanning tree is a subset of a graph that includes some of the nodes of
that graph. T F
(c) Sending a message to a group of users in a network is known as
broadcasting. T F

2) Answer the following questions in brief.

a) Explain reverse path forwarding in brief.


….....……….…………………………………………………………………
……….…………………………………………………………………….…
……………………………………………………………………..…………
b) What is Pruning?
…………………………………………………………………………………
…………………………………………………………………………………
…………………………………………………………………………………

2.9 SUMMARY

In this unit, we first studied different routing algorithms. First, we looked at finding
the route between a given pair of routers. The algorithm finds the shortest path
between them on the graph. A number of algorithms for computing the shortest path
between two nodes of a graph are known. Here, we have studied the Dijkstra
algorithm.

Next, we studied flooding. In flooding, every incoming packet is sent out on every
outgoing line except the line from which it arrived. This algorithm is very simple to
implement, but it generates lots of redundant packets. It discovers all routes, including
the optimal one, therefore this is robust and gives high performance.

Next, we studied the Bellman-Ford (distance vector) routing algorithm. In this
algorithm, each router maintains a routing table with an entry for every other router in
the subnet. These tables are updated by exchanging information with the neighbours.

Next, we studied the link state routing algorithm. In this algorithm, each router
originates information about itself, its directly connected links, and the state of those
links. This information is passed around from router to router, each router making a
copy of it, but never changing it. The ultimate objective is that every router has
identical information about the network, and each router will independently calculate
its own best paths.

Next, we discussed hierarchical routing algorithm. In hierarchical routing, routers are


classified in groups known as regions. Each router has only the information about the
routers in its own region and has no information about routers in other regions.

Next, we studied broadcasting i.e., to send a message to all destinations in a network


simultaneously. There are a number of methods for broadcasting, such as flooding,
multidestination routing, reverse path forwarding, etc. In this unit, we discussed all
these methods in brief.

Finally, we discussed multicasting i.e., to send a message to a group of users in a


network. In multicasting, each router first computes a spanning tree covering all other
routers. When a process sends a multicast packet to a group, the first router
examines its spanning tree and prunes it. Packets are then forwarded only along the
appropriate spanning tree.

2.10 SOLUTIONS/ANSWERS
Check Your Progress 1

1) (i) True
(ii) True
(iii) False

2) (a) Selective flooding is a variation of flooding. In this algorithm, the routers do
not send every incoming packet out on every line, but only on those lines
that are going approximately in the right direction.

Check Your Progress 2

1) (i) False
(ii) True

2) (a) Problems with the distance vector routing algorithm are: it works only with
approximate, second-hand information from its neighbours; routers increase their
path length to a dead node only slowly; and the condition of being dead (infinite
distance) is reached by counting to infinity, one step at a time.

(b) In link state routing, a router floods the network with information about its
neighbourhood by using a short packet known as a link state packet
(LSP). An LSP usually contains four fields: the ID of the advertiser, the ID
of the destination network, the cost, and the ID of the neighbour router.

Advertiser Network Cost Neighbour


……………… ……………… ……………… ………………
……………… ……………… ……………… ………………

Link State Packet (LSP)

Check Your Progress 3

1) (a) True
(b) False
(c) False

2) (a) Reverse path forwarding is a method of broadcasting. In this method,


when a broadcast packet arrives at a router, the router checks whether the
packet has arrived on the line that is normally used for sending packets to
the source of the broadcast or not. If the packet has arrived on the line that is
normally used for sending packets to the source of the broadcast then, the
Router forwards copies of it onto all lines except the one it has arrived on.
Else (i.e., packet arrived on a line other than the preferred one for reaching
the source). The router discards the packet as a likely duplicate.

(b) In multicasting, pruning is the task of removing all lines from the spanning
tree of a router that do not lead to hosts that are members of a
particular group.

2.11 FURTHER READINGS

1) Computer Networks, A.S. Tanenbaum, 4th edition, Prentice Hall of India,
New Delhi, 2002.

2) Data Networks, Dimitri Bertsekas and Robert Gallager, 2nd edition,
Prentice Hall of India, New Delhi, 1997.

3) Data and Computer Communications, William Stallings, 2nd edition,
Pearson Education, Delhi.

UNIT 3 CONGESTION CONTROL IN
PUBLIC SWITCHED NETWORK
Structure Page Nos.

3.0 Introduction 42
3.1 Objectives 43
3.2 Reasons for Congestion in the network 43
3.3 Congestion Control vs. Flow Control 43
3.4 Congestion Prevention Mechanism 44
3.5 General Principles of Congestion Control 45
3.6 Open Loop Control 46
3.6.1 Admission Control
3.6.2 Traffic Policing and its Implementation
3.6.3 Traffic Shaping and its Implementation
3.6.4 Difference between Leaky Bucket Traffic Shaper and
Token Bucket Traffic Shaper
3.7 Congestion Control in Packet-switched Networks 49
3.8 Summary 50
3.9 Solutions/Answers 50
3.10 Further Readings 51

3.0 INTRODUCTION

Congestion occurs when the number of packets being transmitted through the public
switched networks approaches the packet handling capacity of the network. When the
number of packets dumped into the subnet by the hosts is within its carrying capacity,
all packets are delivered (except for a few that are afflicted with transmission errors).
As the load grows beyond this capacity, however, the network gets overloaded and
starts dropping packets, which leads to congestion in the network. Due to the dropping
of packets, the destination machines may ask the sources to retransmit the packets,
which results in even more packets in the network and leads to further congestion. The
net result is that the throughput of the network becomes very low, as illustrated in
Figure 1.

Figure 1: Congestion in the network. The graph plots packets delivered (throughput) against the load (offered packets): throughput rises with the load while there is no congestion, levels off when congestion sets in, and collapses once packets start being dropped.

3.1 OBJECTIVES

After going through this unit, you should be able to:


• define congestion;
• list the factors for the occurrence of congestion in the Network;
• differentiate between congestion control and flow control;
• outline the general principles of congestion Control, and
• discuss congestion prevention mechanism.

3.2 REASONS FOR CONGESTION IN THE


NETWORK
Tanenbaum [Ref 1] has mentioned several reasons for the occurrence of congestion
in the network:
1) Sudden arrival of packets on a particular output line from multiple input
sources, leading to the formation of a queue and the dropping of packets in case
the router has insufficient memory to hold them. Nagle has pointed out
that even if there is a huge memory at the router level, there is no improvement
in congestion; rather, it gets worse, because packets time out while queued.
2) Slow processors can also cause congestion.
3) Low-bandwidth channels can also cause congestion. Increasing the
bandwidth without a faster processor, or vice-versa, will be of little help.

3.3 CONGESTION CONTROL Vs FLOW


CONTROL
The differences between congestion control and flow control are worth mentioning,
as they are related terms. There is another term related to these two, called error control.
Congestion control ensures that the subnet is able to carry the offered traffic and that
it is not overloaded. Flow control has to do with ensuring that a fast sender does not
overwhelm a slow receiver. Error control deals with algorithms that recover from or
conceal the effects of packet losses, for example on noisy channels. The goal of each of
these control mechanisms is different, but their implementations can be combined.
The following Table differentiates between Congestion Control and Flow Control:

Table 1: Congestion Control vs. Flow Control

Congestion Control                                Flow Control

Congestion control is needed when buffers         Flow control is needed when the buffers at
in packet switches overflow or cause              the receiver are not depleted as fast as the
congestion.                                       data arrives.

Congestion is end-to-end; it includes all         Flow is between one data sender and one
hosts, links and routers. It is a global issue.   receiver. It can be done on a link-to-link or
                                                  end-to-end basis. It is a local issue.

When output buffers at a switch fill up and       If there is a queue on the input side of a
packets are dropped ("congested buffers"),        switch and link-by-link flow control is used,
congestion control actions follow: the switch     then as a flow control action the switch tells
tells the source of the data stream to slow       its immediate neighbour to slow down if the
down using congestion control notifications.      input queue fills up.

The reason for comparing congestion control and flow control is that some congestion
control algorithms operate by sending messages back to the senders asking them to
slow down in case the network is congested. In case the receiver is overloaded, the
sending host will get a similar message asking it to slow down.

 Check Your Progress 1


1) Differentiate between Congestion Control and Flow Control.
……………………………………………………………………………………
……………………………………………………………………………………
……………………………………………………………………………………
2) What are the differences between open loop and closed loop solutions
to congestion?
……………………………………………………………………………………
……………………………………………………………………………………
……………………………………………………………………………………

3.4 CONGESTION PREVENTION MECHANISM

The purpose of this section is to examine various mechanisms used at the different
layers to achieve different goals. From the congestion control point of view, these
mechanisms are not very effective. We will start with the data link layer. Go-back-N
and Selective Repeat are flow control mechanisms available at the data link layer. In
case there is an error in a packet, go-back-N retransmits that packet and all the packets
sent after it. For example, if an error occurs in the 5th packet when packets up to the
8th have already been sent, packets 5, 6, 7 and 8 are retransmitted, which creates extra
load on the network, thereby leading to congestion. Selective Repeat retransmits only
the erroneous packet. With respect to congestion control, Selective Repeat is clearly
better than go-back-N.

The following table provides the detail:

Table 2: Mechanism Affecting Congestion

Layer             Mechanisms                               Solution

Transport         • Sliding window with credit schemes     Similar to the data link layer problem,
                  • Piggybacking                           plus optimal timeout determination
                  • Timeout determination
Network           • Bad routing algorithm                  A good routing algorithm and
                  • Packet lifetime management             optimal lifetime management
Data link layer   • Go-back-N, Selective Repeat            Selective Repeat and a
                  • Piggybacking                           small window size

The acknowledgement mechanism also affects congestion. If each packet is
acknowledged immediately, the acknowledgement packets generate extra traffic.
However, if acknowledgements are held back to be piggybacked onto reverse traffic,
extra timeouts and retransmissions may happen, which will cause congestion. A flow
control scheme with a small window size reduces the data rate and thus helps reduce
congestion.

At the network layer, a good routing algorithm can help avoid congestion by
distributing the traffic over all the lines, whereas a bad one can send too much traffic
over already congested lines. Finally, packet lifetime management deals with the
duration for which a packet may live before being discarded. If it is too long, lost
packets may reside in the network for a long time, but if it is too short, packets may
sometimes time out before reaching their destination, thus inducing retransmissions.
Therefore, we require a good routing algorithm and an optimal packet lifetime.
The transport layer shares some issues with the data link layer (flow control and the
acknowledgement mechanism) with respect to congestion control, but at the transport
layer the extra problem is determining the timeout interval across the network, which
is a difficult problem.

3.5 GENERAL PRINCIPLES OF CONGESTION


CONTROL

From our earlier discussion it appears that the congestion problem is not very easy to
solve. Typically, the solution depends on the type of requirements of the application
(e.g., QoS, high bandwidth, fast processing). Like routing algorithms, congestion
control algorithms have been classified into two types:

• Open Loop
• Closed Loop.

Open loop solutions attempt to solve the problem with a good design that ensures
that congestion does not occur in the network. Once the network is running,
midcourse corrections are not made. Open loop algorithms work on two basic
mechanisms: i) Admission Control, and ii) Resource Reservation. Admission control is
a function performed by a network to accept or reject a traffic flow. The purpose of an
open loop solution is therefore to ensure that the traffic generated by the sources will
not lower the performance of the network below the specified QoS. The network will
accept traffic as long as the QoS parameters are satisfied; otherwise, it rejects the traffic.
In contrast, closed loop solutions are based on the concept of a feedback loop. These
algorithms are called closed loop because the state of the network has to be fed back
to the source that regulates the traffic. Closed loop algorithms follow a dynamic
approach to the solution of congestion problems: they react while congestion is
occurring or when it is about to happen. A variety of metrics have been proposed
to monitor the state of a subnet in order to observe congestion. These are:

• Queue length
• The number of retransmitted packets due to timeout
• Percentage of rejected packets due to shortage of the router’s memory
• Average packet delay.
To handle congestion, this mechanism monitors the system to detect when and where
congestion occurs, passes this information to places where action can be taken
(usually the source), and finally adjusts the system's operation to correct the problem.
The presence of congestion means that the offered load in the network is (temporarily)
greater than the resources (routers) can handle. Two straightforward solutions are
to increase the resources or decrease the load. To increase the resources, the
following mechanisms may be used, as suggested in Tanenbaum [Ref 1].
i) Higher bandwidth may be achieved, for example, by increasing the transmission
power of a satellite.

ii) Traffic may be split over multiple routes instead of a single best route.

iii) Temporary dial-up telephone lines may be used between certain points.

However, sometimes it is not possible to increase the capacity, or it has already been
increased to the limit. If the network has reached its maximum capacity, the
alternative is to reduce the load. The following mechanisms may be used: (i) denying
service to some users, (ii) degrading service to some or all users.

For subnets that use virtual circuits internally, these methods can be used at the
network layer. In the next section, we will focus on their use in the network layer. We
will also discuss the open loop control mechanism in detail. The closed loop
mechanism will be discussed in Block 4 Unit 2 as a part of TCP Protocol.

3.6 OPEN LOOP CONTROL

As mentioned earlier, open loop control algorithms prevent congestion from occurring
rather than dealing with it after it has occurred. They do not rely on feedback
information to regulate the traffic. Thus, this technique is based on the assumption that
once packets are accepted from a particular source, the network will not get
overloaded. In this section, we will examine several such techniques and their
implementation. Learners are requested to refer to Leon Garcia's book [Ref. 2] for
further details.

3.6.1 Admission Control


This is a widely used technique in virtual circuit networks. The technique is very
simple and straightforward: once congestion has occurred, no more virtual circuits
are set up. This is similar to a telephone system in which no dial tone is given when
the exchange is overloaded. When a source wants to establish a connection, the
admission control mechanism examines the QoS parameters of the connection initiated
by the source. If they are acceptable, the connection (VC) is established; otherwise, it
is rejected. In order to initiate a connection setup, a source specifies its traffic flow
using a set of parameters called the traffic descriptor, which includes the peak rate, the
average rate, the maximum burst size (the maximum length of time for which traffic is
generated at the peak rate), and so on. Based on the characteristics of the traffic flow,
the admission control mechanism reserves the bandwidth (which usually lies between
the peak rate and the average rate).

3.6.2 Traffic Policing and its Implementation


Leon Garcia [Ref 2] has defined traffic policing as the process of monitoring and
enforcing the traffic flow of packets during the connection period. The network
may drop a packet, or mark it as noncompliant and give it low priority, in case the flow
does not obey the parameter values agreed upon during the initial connection setup
phase. Most implementations of traffic policing are done through the leaky bucket
algorithm. In this algorithm, the bucket is considered as the network and the traffic flow
is considered as water being poured into the bucket. The following assumptions are made:

• The bucket has a certain depth to hold water, just as a network can accept a
certain number of packets.

• The bucket leaks at a certain rate (if there is water in the bucket) no matter at
what rate water enters the bucket. In terms of computer networks, this should be
interpreted as follows: no matter at what rate packets arrive at the input
lines of a router, the router passes them to its outgoing link at a fixed rate.

• If the bucket does not overflow when the water is poured into it, then
the flow of water is said to be conforming. In terms of the network, if the traffic
is within the agreed norms, all packets will be transferred.

• The bucket will spill over if it is full and additional water is poured into it. In
terms of the network, if it gets more packets than it can handle, congestion occurs
and the additional packets are lost.

If we expect the traffic flow to be very smooth, then the bucket has to be shallow. In
case the flow is bursty in nature, the bucket should be deeper. In summary, what we
want to observe is whether the outflow of packets corresponds to the arrival rate of
packets or not. The implementation of a leaky bucket is similar to that of a queue data
structure: when a packet arrives, if there is space left in the queue, it is appended to
the queue; otherwise, it is rejected.
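
The queue-like implementation mentioned above can be sketched in a few lines of Python. This is only an illustration of the idea (the bucket depth, drain rate and the packets fed in are assumptions, not values from the text):

```python
from collections import deque

class LeakyBucketPolicer:
    """Illustrative leaky bucket: a bounded queue drained at a fixed rate."""
    def __init__(self, depth, drain_rate):
        self.depth = depth            # how many packets the bucket can hold
        self.drain_rate = drain_rate  # packets released per tick
        self.queue = deque()

    def arrive(self, packet):
        if len(self.queue) < self.depth:
            self.queue.append(packet)
            return True               # conforming: accepted
        return False                  # bucket full: non-conforming, dropped

    def tick(self):
        released = []
        for _ in range(min(self.drain_rate, len(self.queue))):
            released.append(self.queue.popleft())
        return released               # packets leaked onto the output line

# Example: a burst of 6 packets into a bucket of depth 4.
policer = LeakyBucketPolicer(depth=4, drain_rate=1)
print([policer.arrive(p) for p in range(6)])  # last two are rejected
print(policer.tick())                          # one packet leaks out
```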

3.6.3 Traffic Shaping and its Implementation

Leon Garcia [Ref 2] has defined traffic shaping as the process of altering a traffic flow
into another flow. It is mainly used for smoothing the traffic. Consider an example
where a host generates data at an average rate of 30 kbps, which it can transmit to the
network in several ways (as shown in Figure 2):

• It can transmit at the rate of 100 kbps for 0.3 second of each second.

• It can transmit at the rate of 75 kbps for 0.4 second of each second.

Figure 2: Possible traffic patterns at the average rate of 30 kbps: (a) a steady 30 kbps stream; (b) bursts at 75 kbps; (c) bursts at 100 kbps

You can observe from Figure 2 that the smooth pattern of Figure 2(a) creates less
stress on the network, but the destination machine may not want to wait a full second
to retrieve the 30 kb of data sent in each period. Now, we will look at the
implementation of traffic shaping. There are two mechanisms:

• Leaky bucket traffic shaper

• Token bucket traffic shaper

Leaky Bucket Traffic Shaper: This is a very simple mechanism in which data stored in
the sender's buffer is passed on at a constant rate to smooth the traffic, as shown
in Figure 3. The buffer is used to store bursty traffic, and its size defines the maximum
burst that can be accommodated.

Figure 3: A leaky bucket traffic shaper: traffic generated by an application is queued in a packet buffer and released to the communication channel as smoothened traffic at a constant rate (Source: Ref. [2])

This mechanism is different from the leaky bucket algorithm used in traffic policing:
the bucket in traffic policing is just a counter, whereas the bucket in a traffic shaper is
a buffer that stores the packets.
Token Bucket Traffic Shaper: The leaky bucket traffic shaper has a very restricted
approach, since the output pattern is always constant no matter how bursty the traffic
is. Many applications produce variable rate traffic, sometimes bursty and sometimes
normal. If such traffic is passed through a leaky bucket traffic shaper, it may suffer a
very long delay. One algorithm that deals with such situations is the token bucket
algorithm.
The following are the features of token bucket traffic shaper:
• A token is used here as a permit to send a packet. Unless there is a token, no
packet can be transmitted.
• A token bucket holds tokens, which are generated periodically at a constant
rate.
• New tokens are discarded in case the token bucket is full.
• A packet can be transmitted only if there is a token in the token bucket. For
example, in Figure 4 there are two tokens in the token bucket and five
packets to be transmitted. Only two packets will be transmitted and the other
three will wait for tokens.
• Traffic burstiness is proportional to the size of the token bucket.

Figure 4: A token bucket traffic shaper: tokens arrive periodically into a token bucket; packets waiting in the packet buffer are released to the communication channel only when a token is available (Source: Ref. [2])

Now, let us discuss the operation of this algorithm. Assume that the token bucket is
empty and a number of packets have arrived in the packet buffer. Since there is no
token in the token bucket, the packets have to wait until new tokens are generated.
Since tokens are generated periodically, the packets will also be transmitted
periodically, at the rate at which the tokens arrive. In the next section, we will compare
the leaky bucket traffic shaper and the token bucket traffic shaper. Students are also
requested to see question 2 of Check Your Progress 2.

3.6.4 Difference Between Leaky Bucket Traffic Shaper and Token


Bucket Traffic Shaper

The following tables differentiate between two types of traffic shapers:

Table 3: Leaky Bucket Traffic Shaper and Token Bucket Traffic Shaper

Leaky Bucket                                      Token Bucket

• Leaky Bucket (LB) discards packets.             • Token Bucket (TB) discards tokens.
• With LB, a packet can be transmitted            • With TB, a packet can only be transmitted
  if the bucket is not full.                        if there are enough tokens to cover its
• LB sends the packets at an average rate.          length in bytes.
• LB does not allow saving; a constant            • TB allows large bursts to be sent faster
  rate is maintained.                               by speeding up the output.
                                                  • TB allows saving up of tokens
                                                    (permissions) to send large bursts.

3.7 CONGESTION CONTROL IN PACKET-


SWITCHED NETWORKS
Several control mechanisms for congestion control in packet-switched networks have
been explored and published. William Stallings [Ref 3] has presented the following
mechanisms to handle congestion:

1) Send a control packet (choke packet) from a node where congestion has
occurred to some or all source nodes. This choke packet will have the effect of
stopping or slowing the rate of transmission from sources and therefore, it will
reduce total load on the network. This approach requires additional traffic on
the network during the period of congestion.

2) Make use of an end-to-end probe packet. Such a packet could be time stamped
to measure the delay between two particular endpoints. This has the
disadvantage of adding overhead to the network.

3) Allow packet-switching nodes to add congestion information to packets as they


go by. There are two possible approaches here. A node could add such
information to packets moving in the direction opposite to the congestion. This
information reaches the source node quickly, and this reduces the flow of
packets into the network. Alternatively, a node could add such information to
packets moving in the same direction as the congestion. The destination either
asks the source to adjust the load or returns the signal to the source in the
packets (or acknowledgements) moving in the reverse direction.

 Check Your Progress 2
1) What are the different approaches to open loop control?
……………………………………………………………………………………
……………………………………………………………………………………
……………………………………………………………………………………

2) What is the difference between leaky bucket traffic shaper and token bucket
traffic shaper?
……………………………………………………………………………………
……………………………………………………………………………………
……………………………………………………………………………………

3.8 SUMMARY

In this unit, we examined several aspects of congestion: what congestion is and how
it occurs. We also differentiated between congestion control and flow control. Then,
we gave two broad classifications of congestion control: open loop and closed loop.
At the end, we touched upon issues related to congestion control in packet-switched
networks.

3.9 SOLUTIONS/ANSWERS

Check Your Progress 1

1)

Congestion Control vs. Flow Control

Congestion control is needed when buffers     Flow control is needed when the buffers at
in packet switches overflow or become         the receiver are not depleted as fast as the
congested.                                    data arrives.

Congestion is end-to-end; it includes all     Flow is between one data sender and one
hosts, links and routers. It is a global      receiver. It can be done on a link-to-link or
issue.                                        end-to-end basis. It is a local issue.

2) The purpose of open loop solution is to ensure that the traffic generated by the
source will not lower the performance of the network below the specified Qo S.
The network will accept traffic till QoS parameters are satisfied, otherwise, it
rejects the packets.

In contrast, closed loop solutions are based on the concept of a feedback loop.
These algorithms are called closed loop because the state of the network has to
be fed, up to the source that regulates traffic. Closed loop algorithms follow
dynamic approach to solution of congestion problems. It reacts during the
congestion occurrence period or when the congestion is about to happen.

Check Your Progress 2
1) The following are the different approaches:
• Admission Control Mechanism
• Traffic Policing
• Traffic Shaping

2) (i) Token bucket algorithm is more flexible than leaky bucket traffic shaper
algorithm but both are used to regulate traffic.

(ii) The leaky bucket algorithm does not allow idle hosts to save up permissions
to send large bursts later, whereas the token bucket algorithm allows saving
up to the maximum size of the bucket, N. This means that a burst of up to N
packets can be sent at once, allowing some burstiness in the output stream
and giving faster response to sudden bursts of output [Ref 1].

(iii) Token bucket algorithm throws away tokens (i.e., transmission capacity),
when the bucket fills up but never throws packets. In contrast, the leaky
bucket algorithm discards the packets when the bucket fills up.

3.10 FURTHER READINGS

1) Computer Networks, 4th Edition, A.S. Tanenbaum, Prentice Hall of India,


New Delhi.

2) Communication Networks: Fundamental Concepts and Key Architectures,
Alberto Leon-Garcia and Indra Widjaja, Tata McGraw Hill, New Delhi.

3) Data and Computer Communications, 6th edition, William Stallings,
Pearson Education, New Delhi.

UNIT 4 INTERNETWORKING
Structure Page Nos.

4.0 Introduction 52
4.1 Objectives 52
4.2 Internetworking 52
4.2.1 How does a Network differ?
4.2.2 Networks Connecting Mechanisms
4.2.3 Tunneling and Encapsulation
4.3 Network Layer Protocols 55
4.3.1 IP Datagram Formats
4.3.2 Internet Control Message Protocol (ICMP)
4.3.3 OSPF: The Interior Gateway Routing Protocol
4.3.4 BGP: The Exterior Gateway Routing Protocol
4.4 Summary 68
4.5 Solutions/Answers 68
4.6 Further Readings 69

4.0 INTRODUCTION

There are many ways in which one network differs from another. Some of the
parameters in which a network differs from another network are packet length, quality
of services, error handling mechanisms, flow control mechanism, congestion control
mechanism, security issues and addressing mechanisms. Therefore, problems are
bound to occur when we require interconnection between two different networks.
Different mechanisms have been proposed to solve this problem: Tunneling is used
when the source and destination are the same type of network but, there is a different
network in-between. Fragmentation may be used for different maximum packet sizes
of different networks. The network layer has a large set of protocols besides IP. Some
of these are OSPF and BGP and ICMP. In this unit, we will discuss some of these
protocols as well as some internetworking mechanisms.

4.1 OBJECTIVES

After going through this unit, you should be able to:


• list how one network differs from another;
• list the components of the network layer;
• define the tunneling and fragmentation concepts;
• discuss the field format of the IP datagram and IP addressing;
• describe Internet routing protocols such as OSPF and BGP, and
• introduce the Internet Control Message Protocol (ICMP).

4.2 INTERNETWORKING
The Internet is comprised of several networks, each one with different protocols.
There are many reasons for having different networks (thus different protocols):

• Many personal computers use TCP/IP.


• Many larger business organisations still use IBM mainframes with SNA Protocol.

• Some PCs still run Novell's NCP/IPX or AppleTalk.
• Wireless networks have different protocols.
• A large number of telecommunication companies provide ATM facilities.

In this section, we will examine some issues that arise when two or more networks are
interconnected. The purpose of interconnecting is to allow any node on any network
(e.g., Ethernet) to exchange data with any other node on any other network (e.g., ATM).
Users should not be aware of the existence of multiple networks.

4.2.1 How does a Network differ?

Tanenbaum [Ref.1] has defined several features (kindly refer to Table 1) that
distinguish one network from another. These differences have to be resolved while
internetworking. All these features are defined at the network layer only, although
networks differ at the other layers too: they might have different encoding techniques
at the physical layer, different frame formats at the data link layer, different QoS at the
transport layer, and so on.

Table 1: Different type of Networks

Features Options
Types of services Connection-oriented, connection-less,
Protocols IP, IPX, SNA, ATM
Addressing Scheme Flat vs. Hierarchical
Maximum Packet size Different for each network
Flow Control Sliding window, Credit based
Congestion Control Mechanism Leaky bucket, Token bucket, Hop by
Hop, Choke Packets
Accounting By connect time, packet by packet, byte
by byte

4.2.2 Networks Connecting Mechanisms

We have been addressing the problem of connecting networks in the earlier blocks as
well. Let us revisit these topics again. Networks can be connected by the following
devices:

• Repeaters or Hubs can be used to connect networks at the physical layer. The
purpose of these devices is to move bits from one network to another network.
They are mainly analog devices and do not understand higher layer protocols.
• At the data link layer, bridges and switches were introduced to connect multiple
LANs. They work at the frame level rather than the bit level. They examine the
MAC address of frames and forward the frames to different LANs. They may
do a little translation from one MAC layer protocol (e.g., Token Ring) to another
(e.g., Ethernet). Routers are used at the network layer, and they also do protocol
translation in case the networks use different network layer protocols.

Finally, the transport layer and application layer gateways deal with conversion of
protocols at the transport and application layer respectively, in order to interconnect
networks.

The focus of this section is to introduce mechanisms for internetworking at the
network layer. Therefore, we have to understand the difference between switching,
which operates at the data link layer, and routing, which operates at the network
layer.

The main difference between the two operations is that, with a switch, the entire frame
is forwarded to a different LAN on the basis of its MAC address. With a router, the
packet is extracted and encapsulated in a different kind of a frame and forwarded to a
remote router on the basis of the IP address in the packet. Switches need not
understand the network layer protocol to switch a packet, whereas, a router requires to
do so. Tanenbaum [Ref.1] has described two basic mechanisms of internetworking:
Concatenated virtual circuit and connectionless internetworking. In the next
sections we will talk about them.

Concatenated Virtual Circuit

This scheme is similar to the implementation of a connection-oriented service in


which, a connection from the source router to the destination router must be
established before any packet can be forwarded. This type of a connection is called a
virtual circuit, keeping with the analogy of the physical circuits set up by the
telephone system and the subnet is called a Virtual Circuit Subnet. The idea behind a
Virtual Circuit is to avoid choosing a new route for every packet to be forwarded. A
route selected as a part of the connection setup is stored in tables inside the routers.

The essential feature of this mechanism is that a series of Virtual Circuits is setup
from the source machine on one network to the destination machine on another
network through one or more gateways. Just like a router, each gateway maintains
tables, indicating the virtual circuits that are to be used for packet forwarding. This
scheme works when all the networks follow the same Q0S parameters. But, if only
some networks support reliable delivery service, then all the schemes will not work. In
summary, this scheme has the same advantage and disadvantage of a Virtual Circuit
within a subnet.

Datagram Model

Unlike the previous model there is no concept of virtual circuits, therefore, there is no
guarantee of packet delivery. Each packet contains the full source and destination
address and are forwarded independently. This strategy uses multiple routes and
therefore, achieve higher bandwidth.

An advantage of this approach is that it can be used over subnets that do not use
virtual circuits inside. For example, many LANs and some mobile networks support
this approach.

In summary, this approach to internetworking has the same properties as a datagram
subnet: it is prone to congestion, but robust in case of router failures.

4.2.3 Tunneling and Encapsulation

Tunneling is used when the source and destination networks are of the same type but
the network which lies in between is different. It uses a mechanism called
encapsulation, where a data transfer unit of one protocol is enclosed inside a packet of
a different protocol. Tunneling thus allows a packet of one kind to be carried across a
network that uses a different kind of frame.

Suppose two hosts located very far away from each other want to communicate and
both have access to an Internet link, i.e., both of them run a TCP/IP-based protocol
stack. The carrier (WAN) which lies between the two hosts is based on X.25, whose
format is different from TCP/IP. Therefore, the IP datagram forwarded by host 1 will
be encapsulated in an X.25 network layer packet and transported to the router serving
the destination host. When it gets there, the destination router removes the IP packet
and sends it to host 2. The WAN can be considered as a big tunnel extending from one
router to another [Ref.1]. The packet from host 1 travels from one end of the
X.25-based tunnel to the other end, properly encapsulated. The sending and receiving
hosts are not concerned with this process; it is handled by the routers at the two ends
of the tunnel.
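
The encapsulation step at the tunnel entrance and the matching decapsulation at the exit can be illustrated with a small sketch. The Python fragment below is purely illustrative (the carrier header layout, the tunnel_id field and the byte values are assumptions and do not represent the real X.25 format):

```python
import struct

def encapsulate(ip_datagram: bytes, tunnel_id: int) -> bytes:
    # Hypothetical carrier header: 2-byte tunnel id + 2-byte payload length.
    header = struct.pack("!HH", tunnel_id, len(ip_datagram))
    return header + ip_datagram

def decapsulate(carrier_packet: bytes) -> bytes:
    _tid, length = struct.unpack("!HH", carrier_packet[:4])
    return carrier_packet[4:4 + length]   # recover the original IP datagram

inner = b"\x45\x00example-ip-datagram"    # arbitrary example payload
assert decapsulate(encapsulate(inner, 7)) == inner
```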

 Check Your Progress 1


1) List the important features in which one network differs from another.
……………………………………………………………………………………
……………………………………………………………………………………
……………………………………………………………………………………

2) What are the mechanisms for interconnecting networks?


……………………………………………………………………………………
……………………………………………………………………………………
……………………………………………………………………………………

3) Where is tunneling used?


……………………………………………………………………………………
……………………………………………………………………………………
……………………………………………………………………………………

4.3 NETWORK LAYER PROTOCOLS


In the network layer, the internet can be viewed as a collection of subnetworks or
Autonomous systems that are interconnected. End systems are not usually directly
attached to each other via a single communication link. Instead, they are directly
connected to each other through intermediate switching devices known as routers. A
router takes a chunk of information arriving on one of its incoming links and
forwards that chunk of information on one of its outgoing communication channels.
Routers and communication links form part of the networks of Internet service
providers (ISPs). To allow communication among Internet users and to allow users to
access worldwide internet
content, these lower tier ISPs are interconnected through national and international
upper tier ISPs. An upper tier ISP consists of high speed routers interconnected with
high speed fiber-optic channels [Ref.5]. Each ISP network whether upper tier or lower
tier, is managed independently and runs the IP protocol, which holds the whole
internet together and provides a best-effort (not guaranteed) service to forward
information (datagrams) from source to destination without regard to where the
machines are located. In this section, we will introduce several network layer
protocols. The following Figure 1 shows the network layer with these major
components [Ref.5]:

(i) IP
(ii) ICMP
(iii) RIP, OSPF and BGP

Figure 1: Network layer protocols: the network layer, comprising IP, ICMP and the routing protocols RIP, OSPF and BGP, sits between the transport layer above and the data link layer (logical link control and MAC sublayers) and physical layer below

(i) IP: The first component is IP Protocol, which defines the followings:
• Fields in IP datagram
• Address formats
• Action taken by routers and end systems on a IP datagram based on the
values in these fields.

(ii) ICMP: The Second Component of the network layer is ICMP (Internet Control
Message Protocol) used by the hosts, routers and gateways to communicate
network layer information to each other. The most typical use of ICMP is for error
reporting.

(iii) RIP, OSPF and BGP: The third component is related to routing protocols: RIP
and OSPF are used for Intra-AS routing, whereas, BGP is used as exterior
gateway routing protocol.

Now we will describe each component separately.

4.3.1 IP Datagram Formats

An IP datagram consists of a header part and a data part. The header has a 20-byte
fixed part and a variable-length optional part, as shown in Figure 2(a). The header is
transmitted in big-endian order: from left to right, with the high-order bit of the
Version field going first. On little-endian machines, software conversion is required
both on transmission and on reception of the header. (The IP address formats used in
the source and destination address fields are shown in Figure 2(b).) The key fields of
the IPv4 (Internet Protocol version 4) datagram header are the following.

Figure 2(a): An IP datagram: a header of 20-60 bytes followed by the data

Class   Leading bits   Remaining 32-bit structure
A       0              Network, Host
B       10             Network, Host
C       110            Network, Host
D       1110           Multicast address
E       1111           Reserved for future use

Figure 2(b): IP address formats

The Version field specifies the IP version of the protocol the datagram belongs to. By
including the version in each datagram, the router can determine how to interpret the
remainder of the IP datagram.

Header length (4 bits)

This field defines the length of the header in multiples of four bytes. The four bits
can represent a number between 0 and 15, which, when multiplied by 4, gives a
maximum header length of 60 bytes. A typical IP datagram has a 20-byte header only,
because most IP datagrams do not contain options.

The Type of service (8 bits) defines how the datagram should be handled. It defines
bits to specify priority, reliability, delay, level of throughput to meet the different
requirements. For example, for digitised voice, fast delivery, beats accurate delivery.
For file transfer, error-free transmission is more important than fast transmission.

The Total length includes everything in the datagram both header and data. The
maximum length is 65,535 bytes. At present, this upper limit is tolerable, but with
future gigabit networks, larger datagrams may be needed.

The Identification field is used in fragmentation. A datagram passing through
different networks may be broken into several fragments to match the network frame
size. When fragmentation occurs, all fragments of a datagram carry the same value in
this field, so that the destination can determine which datagram a fragment belongs to.
Surprisingly, IPv6 does not permit fragmentation of IP datagrams at the router level.

Next comes an unused bit and then two 1-bit fields. DF stands for Don't Fragment.
It is an order to the routers not to fragment the datagram, because the destination is
incapable of putting the pieces back together again.

MF stands for More Fragments. All fragments except the last one have this bit set.
It is needed to know when all fragments of a datagram have arrived.

The Fragment offset (13 bits) tells where in the current datagram this fragment belongs.
All fragments except the last one in a datagram must be a multiple of 8 bytes, the
elementary fragment unit. Since 13 bits are provided, there is a maximum of 8,192
fragments per datagram, giving a maximum datagram length of 65,536 bytes, one more
than the Total length field.

The Time to live (8 bits) field is a counter used to limit packet lifetimes. It is supposed
to count time in seconds, allowing a maximum lifetime of 255 seconds. It must be
decremented on each hop and is supposed to be decremented multiple times when a packet
is queued for a long time in a router. In practice, it just counts hops. When it hits
zero, the packet is dropped and a warning packet is sent back to the source host. This
feature prevents datagrams from wandering around forever, something that otherwise
might happen if the routing tables become corrupted.

The Protocol field (8 bits) is used when the datagram reaches its final destination. When
the network layer has assembled a complete datagram, it needs to know what to do with it.
The Protocol field identifies the transport protocol the network layer needs to give it to.
TCP is one possibility, but so are UDP and some others. The numbering of protocols
is global across the entire Internet.

The Header checksum verifies the header only. It is computed as the 16-bit one's
complement of the one's complement sum of the header halfwords, with the checksum field
taken as zero during the computation. Such a checksum is useful for detecting errors
generated by bad memory words inside a router, and this algorithm is more robust than a
normal add. Note that the Header checksum must be recomputed at each hop because at
least one field always changes (the Time to live field), but tricks can be used to speed
up the computation.
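Purely as an illustrative sketch (the function name below is ours, not part of the unit),
the checksum computation can be written in a few lines of Python, assuming the two
checksum bytes of the header have already been set to zero:

def ipv4_header_checksum(header: bytes) -> int:
    """One's complement sum of the 16-bit words of an IPv4 header.
    Assumes the two checksum bytes in `header` are already zero."""
    if len(header) % 2:
        header += b"\x00"                          # pad to a whole 16-bit word
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF                         # one's complement of the sum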

The Source address and Destination address fields: These fields carry the 32-bit IP
addresses of the source and the destination. One portion of an IP address
indicates the network and the other portion indicates the host (or router) on that
network. IP addresses are described in detail in the next section.

The Options field (variable length, padded to a multiple of 32 bits): This field allows an
IP header to be extended to be more functional. It can carry fields that control routing,
timing and security.
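To make the field layout concrete, here is a hedged sketch that unpacks the 20-byte
fixed header with Python's standard struct module; the function name and the returned
dictionary keys are our own choices, not part of the unit text:

import struct

def parse_ipv4_header(raw: bytes) -> dict:
    """Unpack the 20-byte fixed part of an IPv4 header (network byte order)."""
    (ver_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBHII", raw[:20])
    return {
        "version":         ver_ihl >> 4,
        "header_length":   (ver_ihl & 0x0F) * 4,       # IHL is counted in 4-byte words
        "type_of_service": tos,
        "total_length":    total_len,
        "identification":  ident,
        "DF":              (flags_frag >> 14) & 1,
        "MF":              (flags_frag >> 13) & 1,
        "fragment_offset": (flags_frag & 0x1FFF) * 8,  # the field counts 8-byte units
        "time_to_live":    ttl,
        "protocol":        proto,                      # e.g., 6 = TCP, 17 = UDP
        "header_checksum": checksum,
        "source":      ".".join(str((src >> s) & 0xFF) for s in (24, 16, 8, 0)),
        "destination": ".".join(str((dst >> s) & 0xFF) for s in (24, 16, 8, 0)),
    }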

IP Addressing

All IP addresses are 32 bits long and are used in the Source address and Destination
address fields of IP packets.

In addition to the physical addresses (contained on NICs) that identify individual
devices, the Internet requires an additional addressing convention: an address that
identifies the connection of a host to its network. Before discussing IP addressing,
let us distinguish between a host and a router. A router is fundamentally different from
a host; its job is to receive a datagram on an incoming link and forward it on
some outgoing link. Both routers and hosts are connected to a link through an
interface. Routers have multiple interfaces, whereas a host typically has a single
interface. Because every host and router is capable of sending and receiving IP
datagrams, IP requires each host and router interface to have its own IP address. Thus,
an IP address is technically associated with an interface rather than with the host or
router itself.

For many years, IP addresses were divided into the five categories shown in
Figure 2(b). The different classes are designed to cover the needs of different types of
organisations.

The three main address classes are class A, class B, and class C. By examining the
first few bits of an address, IP software can quickly determine the class, and therefore
the structure, of an address. IP follows these rules to determine the address class:
• Class A: If the first bit of an IP address is 0, it is the address of a class A
network. The first bit of a class A address identifies the address class.
The next 7 bits identify the network, and the last 24 bits identify the host. There
are fewer than 128 class A network numbers, but each class A network can be
composed of millions of hosts.

• Class B: If the first 2 bits of the address are 1 0, it is a class B network address.
The first 2 bits identify the class; the next 14 bits identify the network, and the last
16 bits identify the host. There are thousands of class B network numbers and
each class B network can contain thousands of hosts.
• Class C: If the first 3 bits of the address are 1 1 0, it is a class C network
address. In a class C address, the first 3 bits are class identifiers; the next 21 bits
are the network address, and the last 8 bits identify the host. There are millions
of class C network numbers, but each class C network is composed of fewer
than 254 hosts.
• Class D: If the first 4 bits of the address are 1 1 1 0, it is a multicast address.
These addresses are sometimes called class D addresses, but they don't really
refer to specific networks. Multicast addresses are used to address groups of
computers at a given moment in time. They identify a group of
computers that share a common application, such as a video conference, as
opposed to a group of computers that share a common network.
• Class E: If the first four bits of the address are 1 1 1 1, it is a special reserved
address. These addresses are called class E addresses, but they don't really refer
to specific networks. No numbers are currently assigned in this range.

IP addresses are usually written as four decimal numbers separated by dots (periods).
Each of the four numbers is in the range 0-255 (the decimal values possible for a
single byte). Because the bits that identify class are contiguous with the network bits
of the address, we can lump them together and look at the address as composed of full
bytes of network address and full bytes of host address. If the value of the first byte is:
• Less than 128, the address is class A; the first byte is the network number, and
the next three bytes are the host address.
• From 128 to 191, the address is class B; the first two bytes identify the network,
and the last two bytes identify the host.
• From 192 to 223, the address is class C; the first three bytes are the network
address, and the last byte is the host number.
• From 224 to 239, the address is multicast. There is no network part. The entire
address identifies a specific multicast group.
• Greater than 239, the address is reserved.

The following table depicts each class range with other details.

Table 2: IP address classes in dotted decimal format with their ranges


Class A: high-order bit 0; format N.H.H.H; range 1.0.0.0 to 126.0.0.0;
7 network bits, 24 host bits; max. hosts 2^24 - 2; few large organisations.
Class B: high-order bits 1,0; format N.N.H.H; range 128.1.0.0 to 191.254.0.0;
14 network bits, 16 host bits; max. hosts 2^16 - 2; medium-size organisations.
Class C: high-order bits 1,1,0; format N.N.N.H; range 192.0.1.0 to 223.255.254.0;
21 network bits, 8 host bits; max. hosts 2^8 - 2; relatively small organisations.
Class D: high-order bits 1,1,1,0; range 224.0.0.0 to 239.255.255.255;
multicast groups (RFC 1112).
Class E: high-order bits 1,1,1,1; range 240.0.0.0 to 254.255.255.255;
reserved for future/experimental use.
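As a small hedged sketch of the first-byte rules summarised above (the function name is
ours, not from the text), a classful address can be categorised like this:

def address_class(ip: str) -> str:
    """Classful category of a dotted-decimal IPv4 address, from its first byte."""
    first = int(ip.split(".")[0])
    if first < 128:
        return "A"
    if first < 192:
        return "B"
    if first < 224:
        return "C"
    if first < 240:
        return "D (multicast)"
    return "E (reserved)"

print(address_class("10.104.0.19"))    # A
print(address_class("192.168.10.1"))   # C
print(address_class("224.0.0.5"))      # D (multicast)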

The IP address, which provides universal addressing across all the networks of the
Internet, is one of the great strengths of the TCP/IP protocol suite. However, the
original class structure of the IP address has weaknesses. The TCP/IP designers did
not envision the enormous scale of today’s network. When TCP/IP was being
designed, networking was limited to large organisations that could afford substantial
computer systems. The idea of a powerful UNIX system on every desktop did not
exist. At that time, a 32-bit address seemed so large that it was divided into classes to
reduce the processing load on routers, even though dividing the address into classes
sharply reduced the number of host addresses actually available for use. For example,
assigning a large network a single class B address, instead of six class C addresses,
reduced the load on the router because the router needed to keep only one route for
that entire organisation. However, an organisation that was given the class B address
probably did not have 64,000 computers, so most of the host addresses available to the
organisation were never assigned.

The class-structured address design was critically strained by the rapid growth of the
Internet. At one point it appeared that all class B addresses might be rapidly
exhausted. To prevent this, a new way of looking at IP addresses without a class
structure was developed.

Subnet Masks and CIDR Networks (Classless IP Addresses)


IP addresses are actually 32-bit binary numbers. Each 32-bit IP address consists of
two subaddresses, one identifying the network and the other identifying the host to the
network, with an imaginary boundary separating the two. The location of the
boundary between the network and host portions of an IP address is determined
through the use of a subnet mask. A subnet mask is another 32-bit binary number,
which acts like a filter when applied to the 32-bit IP address. By comparing a subnet
mask with an IP address, systems can determine the portion of the IP address that
relates to the network, and the portion that relates to the host. Wherever the subnet
mask has a bit set to "1", the underlying bit in the IP address is part of the network
address. Wherever the subnet mask is set to "0", the related bit in the IP address is part
of the host address. For example, assume that the IP address
11000000101010000000000100010100 (192.168.1.20 in dotted decimal) has a subnet mask of
11111111111111111111111100000000 (255.255.255.0). In this example, the first 24 bits of
the 32-bit IP address are used to identify the network, while the last 8 bits identify
the host on that network.
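The same split can be demonstrated with Python's standard ipaddress module; this is only
an illustrative sketch of the bitwise AND just described, and the variable names are ours:

import ipaddress

ip   = int(ipaddress.IPv4Address("192.168.1.20"))    # 11000000 10101000 00000001 00010100
mask = int(ipaddress.IPv4Address("255.255.255.0"))   # 24 one-bits followed by 8 zero-bits

network = ip & mask                    # bits where the mask is 1 -> network part
host    = ip & ~mask & 0xFFFFFFFF      # bits where the mask is 0 -> host part

print(ipaddress.IPv4Address(network))  # 192.168.1.0
print(host)                            # 20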

The size of a network (i.e., the number of host addresses available for use on it) is a
function of the number of bits used to identify the host portion of the address. If, a
subnet mask shows that 8 bits are used for the host portion of the address block, a
maximum of 256 possible host addresses are available for that specific network.
Similarly, if a subnet mask shows that 16 bits are used for the host portion of the
address block, a maximum of 65,536 possible host addresses are available for use on
that network.
If a network administrator needs to split a single network into multiple virtual
networks, the bit-pattern in use with the subnet mask can be changed to allow as many
networks as necessary. For example, assume that we want to split the 24-bit
192.168.10.0 network (which allows for 8 bits of host addressing, or a maximum of
256 host addresses) into two smaller networks. All we have to do in this situation is
change the subnet mask of the devices on the network so that they use 25 bits for the
network instead of 24 bits, resulting in two distinct networks with 128 possible host
addresses on each network. In this case, the first network would have a range of
network addresses between 192.168.10.0 -192.168.10.127, while the second network
would have a range of addresses between 192.168.10.128 -192.168.10.255.
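For instance, the split described above can be reproduced with the standard ipaddress
module; a minimal sketch, not part of the unit text:

import ipaddress

net = ipaddress.ip_network("192.168.10.0/24")
for subnet in net.subnets(new_prefix=25):
    print(subnet, "->", subnet[0], "to", subnet[-1])
# 192.168.10.0/25   -> 192.168.10.0   to 192.168.10.127
# 192.168.10.128/25 -> 192.168.10.128 to 192.168.10.255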

Networks can also be enlarged through the use of a technique known as
“supernetting,” which works by extending the host portion of a subnet mask to the
left, into the network portion of the address. Using this technique, a pair of networks
with 24-bit subnet masks can be turned into a single large network with a 23-bit
subnet mask. However, this works only if you have two neighbouring 24-bit network
blocks, with the lower network having an even value (when the network portion of the
address is shrunk, the trailing bit from the original network portion of the subnet mask
should fall into the host portion of the new subnet mask, so the new network mask
will consume both networks). For example, it is possible to combine the 24-bit
192.168.10.0 and 192.168.11.0 networks together since the loss of the trailing bit from
each network (00001010 vs. 00001011) produces the same 23-bit subnet mask
(0000101x), resulting in a consolidated 192.168.10.0 network. However, it is not
possible to combine the 24-bit 192.168.11.0 and 192.168.12.0 networks, since the
binary values in the seventh bit position (00001011 vs. 00001100) do not match when
the trailing bit is removed.
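A quick way to check whether two neighbouring /24 blocks really can be merged is to let
the standard ipaddress module try to collapse them; this is only a sketch, and the helper
name is ours:

import ipaddress

def try_supernet(a: str, b: str):
    """Collapse two blocks into one supernet where the addressing rules allow it."""
    return list(ipaddress.collapse_addresses([ipaddress.ip_network(a),
                                              ipaddress.ip_network(b)]))

print(try_supernet("192.168.10.0/24", "192.168.11.0/24"))  # [IPv4Network('192.168.10.0/23')]
print(try_supernet("192.168.11.0/24", "192.168.12.0/24"))  # the two /24s stay separate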
Classless Inter-Domain Routing
In the modern networking environment defined by RFC 1519 [Classless Inter-Domain
Routing (CIDR)], the subnet mask of a network is typically annotated in written form
as a “slash prefix” that trails the network number. In the subnetting example in the
previous paragraph, the original 24-bit network would be written as 192.168.10.0/24,
while the two new networks would be written as 192.168.10.0/25 and
192.168.10.128/25. Likewise, when the 192.168.10.0/24 and 192.168.11.0/24
networks were joined together as a single supernet, the resulting network would be
written as 192.168.10.0/23. Note, that the slash prefix annotation is generally used for
human benefit; infrastructure devices still use the 32-bit binary subnet mask internally
to identify networks and their routes. All networks must reserve the host addresses made
up entirely of either ones or zeros, to be used by the networks themselves. This is so
that each subnet has a network-specific address (the all-zeroes address) and a
broadcast address (the all-ones address). For example, a /24 network allows for 8 bits
of host addresses, but only 254 of the 256 possible addresses are available for use.
Similarly, /25 networks have a maximum of 7 bits for host addresses, with 126 of the
128 possible addresses available (the all-ones and all-zeroes addresses from each
subnet must be set aside for the subnets themselves). All the systems on the same
subnet must use the same subnet mask in order to communicate with each other
directly. If they use different subnet masks they will think they are on different
networks, and will not be able to communicate with each other without going through
a router first. Hosts on different networks can use different subnet masks, although the
routers will have to be aware of the subnet masks in use on each of the segments.
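The arithmetic behind the 254 and 126 figures above is simply 2 raised to the number of
host bits, minus the two reserved addresses; a tiny illustrative sketch (function name
ours):

def usable_hosts(prefix_len: int) -> int:
    """Assignable IPv4 host addresses for a given prefix length; the all-zeroes
    (network) and all-ones (broadcast) addresses are subtracted."""
    return 2 ** (32 - prefix_len) - 2

print(usable_hosts(24))  # 254
print(usable_hosts(25))  # 126
print(usable_hosts(28))  # 14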

Subnet masks are used only by systems that need to communicate with the network
directly. For example, external systems do not need to be aware of the subnet masks in
use on your internal networks, since those systems will route data to your network by
way of your parent network’s address block. As such, remote routers need to know
only the provider’s subnet mask. For example, if you have a small network that uses
only a /28 prefix that is a subset of your ISP's /20 network, remote routers need to
know only about your upstream provider’s /20 network, while your upstream provider
needs to know your subnet mask in order to get the data to your local /28 network.
The rapid depletion of the class B addresses showed that three primary address classes
were not enough: class A was much too large and class C was much too small. Even a
class B address was too large for many networks but was used because it was better
than the other alternatives.

The obvious solution to the class B address crisis was to force organisations to use
multiple class C addresses. There were millions of these addresses available and they
were in no immediate danger of depletion. As is often the case, the obvious solution is
not as simple as it may seem. Each class C address requires its own entry within the

routing table. Assigning thousands or millions of class C addresses would cause the
routing table to grow so rapidly that the routers would soon be overwhelmed. The
solution requires a new way of assigning addresses and a new way of looking at
addresses.

Originally network addresses were assigned in more or less sequential order as they
were requested. This worked fine when the network was small and centralised.
However, it did not take network topology into account. Thus, only random chance
would determine if the same intermediate routers would be used to reach network
195.4.12.0 and network 195.4.13.0, which makes it difficult to reduce the size of the
routing table. Addresses can only be aggregated if they are contiguous numbers and
are reachable through the same route. For example, if addresses are contiguous for one
service provider, a single route can be created for that aggregation because that
service provider will have a limited number of routes to the Internet. But if one
network address is in France and the next contiguous address is in Australia, creating
a consolidated route for these addresses will not work.

Today, large, contiguous blocks of addresses are assigned to large network service
providers in a manner that better reflects the topology of the network. The service
providers then allocate chunks of these address blocks to the organisations to which
they provide network services. This alleviates the short-term shortage of class B
addresses and, because the assignment of addressees reflects the topology of the
network, it permits route aggregation. Under this new scheme, we know that network
195.4.12.0 and network 195.4.13.0 are reachable through the same intermediate
routers. In fact, both these addresses are in the range of the addresses assigned to
Europe, 194.0.0.0 to 195.255.255.255. Assigning addresses that reflect the topology
of the network enables route aggregation, but does not implement it. As long as
network 195.4.12.0 and network 195.4.13.0 are interpreted as separate class C
addresses, they will require separate entries in the routing table. A new, flexible way
of defining addresses is therefore, needed.

Evaluating addresses according to the class rules discussed above limits the length of
network numbers to 8, 16, or 24 bits - 1, 2, or 3 bytes. The IP address, however, is not
really byte-oriented. It is 32 contiguous bits. A more flexible way to interpret the
network and host portions of an address is with a bit mask. An address bit mask works
in this way: if a bit is on in the mask, that equivalent bit in the address is interpreted as
a network bit; if a bit in the mask is off, the bit belongs to the host part of the address.
For example, if address 195.4.12.0 is interpreted as a class C address, the first 24 bits
are the network numbers and the last 8 bits are the host addresses. The network mask
that represents this is 255.255.255.0, 24 bits on and 8 bits off. The bit mask that is
derived from the traditional class structure is called the default mask or the natural
mask.
However, with bit masks we are no longer limited by the address class structure. A
mask of 255.255.0.0 can be applied to network address 195.4.0.0. This mask includes
all addresses from 195.4.0.0 to 195.4.255.255 in a single network number. In effect, it
creates a network number as large as a class B network in the class C address space.
Using bit masks to create networks larger than the natural mask is called supernetting,
and the use of a mask instead of the address class to determine the destination network
is called Classless Inter-Domain Routing (CIDR).

Specifying both the address and the mask is cumbersome when writing out addresses.
A shorthand notation has been developed for writing CIDR addresses. Instead of
writing network 172.16.26.32 with a mask of 255.255.255.224, we can write
172.16.26.32/27. The format of this notation is address/prefix-length, where prefix-
length is the number of bits in the network portion of the address. Without this
notation, the address 172.16.26.32 could easily be interpreted as a host address. RFC

1878 lists all 32 possible prefix values. But little documentation is needed because the
CIDR prefix is much easier to understand and remember than address classes. I know
that 10.104.0.19 is a class A address, but writing it as 10.104.0.19/8 shows me that
this address has 8 bits for the network number and therefore, 24 bits for the host
number. I don’t have to remember anything about the class A address structure.
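The same information can be extracted mechanically from the slash notation; the following
sketch uses the standard ipaddress module and is illustrative only:

import ipaddress

# strict=False lets us pass an address with host bits set, as in 10.104.0.19/8
net = ipaddress.ip_network("10.104.0.19/8", strict=False)

print(net.network_address)      # 10.0.0.0
print(net.netmask)              # 255.0.0.0
print(net.prefixlen)            # 8 network bits
print(32 - net.prefixlen)       # 24 host bits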
Internet-Legal Versus Private Addressing

Although the pool of IP addresses is somewhat limited, most companies have no
problems obtaining them. However, many organisations have already installed
TCP/IP products on their internal networks without obtaining "legal" addresses from
TCP/IP products on their internal networks without obtaining “legal” addresses from
the proper sources. Sometimes these addresses come from example books or are
simply picked at random (several firms use networks numbered 1.2.3.0, for example).
Unfortunately, since they are not legal, these addresses will not be usable when these
organisations attempt to connect to the Internet. These firms will eventually have to
reassign Internet-legal IP addresses to all the devices on their networks, or invest in
address-translation gateways that rewrite outbound IP packets so they appear to be
coming from an Internet-accessible host.

Even if an address-translation gateway is installed on the network, these firms will
never be able to communicate with any site that is a registered owner of the IP
addresses in use on the local network. For example, if you choose to use the 36.0.0.0/8
address block on your internal network, your users will never be able to access the
computers at Stanford University, the registered owner of that address block. Any
attempt to connect to a host at 36.x.x.x will be interpreted by the local routers as a
request for a local system, so those packets will never leave your local network.

Not all firms have the luxury of using Internet-legal addresses on their hosts, for any
number of reasons. For example, there may be legacy applications that use hardcode
addresses, or there may be too many systems across the organisation for a clean
upgrade to be successful. If you are unable to use Internet-legal addresses, you should
at least be aware that there are groups of “private” Internet addresses that can be used
on internal networks by anyone. These address pools were set-aside in RFC 1918, and
therefore, cannot be “assigned” to any organisation. The Internet’s backbone routers
are configured explicitly not to route packets with these addresses, so they are
completely useless outside an organisation’s internal network. The address blocks
available are listed in Table3.

Table 3: Private Addresses Provided in RFC 1918

Class Range of Addresses


A Any addresses in 10.x.x.x
B Addresses in the range of 172.16.x.x-172.31.x.x
C Addresses in the range of 192.168.0.x-192.168.255.x
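As a hedged sketch (the block list below is just the CIDR form of the ranges in Table 3,
and the function name is ours), an address can be tested against the RFC 1918 pools like
this:

import ipaddress

RFC1918_BLOCKS = [ipaddress.ip_network(n) for n in
                  ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_rfc1918(addr: str) -> bool:
    """True if `addr` falls inside one of the private address pools of Table 3."""
    ip = ipaddress.ip_address(addr)
    return any(ip in block for block in RFC1918_BLOCKS)

print(is_rfc1918("172.20.1.5"))  # True  -- inside 172.16.x.x-172.31.x.x
print(is_rfc1918("36.10.0.1"))   # False -- registered, Internet-legal space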

Since these addresses cannot be routed across the Internet, you must use an address-
translation gateway or a proxy server in conjunction with them. Otherwise, you will
not be able to communicate with any hosts on the Internet.

An important note here is that, since, nobody can use these addresses on the Internet,
it is safe to assume that anybody who is using these addresses is also utilising an
address-translation gateway of some sort. Therefore, while you will never see these
addresses used as destinations on the Internet, if your organisation establishes a
private connection to a partner organisation that is using the same block of addresses
that you are using, your firms will not be able to communicate on the Internet. The
packets destined for your partner’s network will appear to be local to your network,
and will never be forwarded to the remote network.

There are many other problems that arise from using these addresses, making their
general usage difficult for normal operations. For example, many application-layer
protocols embed addressing information directly into the protocol stream, and in order
for these protocols to work properly, the address-translation gateway has to be aware
of their mechanics. In the preceding scenario, the gateway has to rewrite the private
addresses (which are stored as application data inside the application protocol),
rewrite the UDP/TCP and IP checksums, and possibly rewrite TCP sequence numbers
as well. This is difficult to do even with simple and open protocols such as FTP, and
extremely difficult with proprietary, encrypted, or dynamic applications (these are
problems for many database protocols, network games, and voice-over-IP services, in
particular). These gateways almost never work for all the applications in use at a
specific location.

It is always better to use formally assigned, Internet-legal addresses whenever
possible, even if the hosts on your network do not necessarily require direct Internet
access. In those cases in which your hosts are going through a firewall or application
proxy of some sort, the use of Internet-legal addresses causes the least amount of
maintenance trouble over time. If, for some reason this is not possible, use one of the
private address pools described in Table 3. Do not use random, self-assigned
addresses if you can possibly avoid it, as this will only cause connectivity problems
for you and your users.

Fragmentation

Fragmentation is the process of breaking a large IP datagram into smaller fragments.
Each network imposes some maximum size on its packets. Maximum payloads range
from 48 bytes (ATM cells) to 65,515 bytes (IP packets), although the payload size in
higher layers is often bigger.

What happens if the original host sends a source packet which is too large to be
handled by the destination network? The routing algorithm can hardly bypass the
destination.

Basically, the only solution to the problem is to allow routers to break up packets into
fragments.

The data in the IP datagram is broken among two or more smaller IP datagrams and
these smaller fragments are then sent over the outgoing link.

The real problem in the fragmentation process is the reassembly of the fragments into a
single IP datagram before handing it to the transport layer. If this were done at the
routers, it would decrease their performance. Therefore, to keep the network
layer simple, the IPv4 designers left reassembly to the receiver. When the
destination host receives a series of fragments from the same host, it examines the
identification number of each fragment and its flag bit (the flag bit is set to 0 for the
last fragment, whereas it is set to 1 for all the other fragments). This is required
because one or more fragments may never arrive, since IP provides only a best-effort
service. The Offset field is used to determine where the fragment fits within the original
IP datagram [Ref.5]. When all the fragments have arrived, the destination host reassembles
them and passes the result to the transport layer. The following table illustrates an
example of a 5,000-byte datagram (20 bytes of IP header plus 4,980 bytes of IP payload)
arriving at a router, which must forward it over an outgoing link with a maximum transfer
unit of 1,500 bytes (equivalent to the Ethernet frame size).

Table 4: Fragmentation Table

1st fragment: 1,480 bytes in the data field of the IP datagram; Identification = 999;
Offset = 0 (the data should be inserted beginning at byte 0); Flag = 1 (more fragments
follow).
2nd fragment: 1,480 bytes in the data field; Identification = 999;
Offset = 1,480 (the data should be inserted beginning at byte 1,480); Flag = 1 (more
fragments follow).
3rd fragment: 1,480 bytes in the data field; Identification = 999;
Offset = 2,960 (the data should be inserted beginning at byte 2,960); Flag = 1 (more
fragments follow).
4th fragment: 540 bytes (= 4,980 - 1,480 - 1,480 - 1,480); Identification = 999;
Offset = 4,440 (the data should be inserted beginning at byte 4,440); Flag = 0 (this is
the last fragment).

This means that the 4,980 data bytes in the original datagram are allocated to four
separate fragments (each of which is itself an IP datagram). The original datagram was
stamped with an identification number of 999. It is desirable to have a minimum
number of fragments, because fragmentation and reassembly create extra overhead
for both the network and the hosts. This is usually achieved by keeping UDP and TCP
segments small.
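The offsets and flags in Table 4 can be reproduced with a short calculation; the sketch
below assumes a 20-byte header on every fragment and uses names of our own choosing:

def fragment(payload_len: int, mtu: int, header_len: int = 20):
    """Split an IP payload into fragments that fit the outgoing MTU.
    Every fragment's data length except the last must be a multiple of 8 bytes."""
    max_data = (mtu - header_len) // 8 * 8        # 1,480 bytes for an MTU of 1,500
    fragments, offset = [], 0
    while offset < payload_len:
        data = min(max_data, payload_len - offset)
        fragments.append({"offset_bytes": offset,
                          "offset_field": offset // 8,  # value carried in the 13-bit field
                          "data_bytes": data,
                          "MF": 1 if offset + data < payload_len else 0})
        offset += data
    return fragments

for f in fragment(4980, 1500):
    print(f)
# offsets 0, 1480, 2960, 4440 bytes with MF = 1, 1, 1, 0, matching Table 4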

4.3.2 Internet Control Message Protocol (ICMP)

It is a protocol used by hosts and routers to send notifications of datagram problems.
The most typical use of ICMP is for error reporting. We have all encountered errors such
as "Destination network unreachable" while running a telnet, FTP or HTTP session.
This type of error reporting is done by ICMP [Ref.5]. As we are aware, IP is an
unreliable and connectionless protocol, due to which a datagram may quite often fail to
reach its destination because of a link failure, congestion, etc. ICMP reports such cases
to the sender.

ICMP is often considered part of IP but architecturally lies just above IP, as ICMP
messages are carried inside IP packets. Similar to TCP and UDP segments, which are
carried as IP payloads, ICMP messages are also carried as IP payloads. Note that a
datagram carries only the addresses of the original sender and the final destination; it
does not know the addresses of the previous router(s) that passed the message on. For
this reason, ICMP can send messages only to the source and not to an intermediate router.
Students are referred to References [1] and [5] for details of the message types.
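For orientation only, a few of the well-known ICMP type values (taken from RFC 792; this
is not the full list, for which the references should be consulted) can be tabulated as
follows:

# A small, illustrative subset of ICMP message types (RFC 792).
ICMP_TYPES = {
    0:  "Echo reply",
    3:  "Destination unreachable",
    8:  "Echo request",
    11: "Time exceeded (e.g., the Time to live field reached zero)",
}

def describe(icmp_type: int) -> str:
    return ICMP_TYPES.get(icmp_type, "see the references for other types")

print(describe(3))   # the family of "Destination ... unreachable" errors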

4.3.3 OSPF: The Interior Gateway Routing Protocol

Open Shortest Path First (OSPF) has become the standard interior gateway routing
protocol. It supports many advanced features [Ref.1 and Ref.5] to meet a long list of
requirements:

• The protocol specification should be publicly available. The O in OSPF stands for
open; it means that OSPF is not a proprietary protocol.

• It is a dynamic algorithm, one that adapts to changes in the topology
automatically and quickly.

• Load distribution: When multiple paths to a destination have the same cost,
OSPF allows all of them to be used.

• Support for hierarchy within a single routing domain: By 1988, the Internet
had grown so large that no router could be expected to know the entire topology.
OSPF was designed so that no router would have to do so.

• Security: All exchanges between OSPF routers are authenticated, which allows
only trusted routers to participate.

OSPF supports three kinds of connections and networks [Ref.1]:

(i) Point-to-point lines between exactly two routers.
(ii) Multi-access networks with broadcasting (e.g., LANs).
(iii) Multi-access networks without broadcasting (e.g., packet-switched WANs).

OSPF identifies four types of routers:


(i) Internal routers (within one area).
(ii) Area border routers (connects two or more areas).
(iii) Backbone routers (performs routing within the backbone).
(iv) AS boundary routers (exchanges routing information with routers in other ASes).

OSPF allows a large AS to be divided into smaller areas. The topology and details of
one area are not visible to the others. Every AS has a backbone area (called Area 0), as
shown in Figure 3.

Figure 3: Working domain of OSPF. Each AS contains a backbone with backbone routers;
the other areas are connected to the backbone through area border routers; internal
routers lie wholly within one area; AS boundary routers talk to routers in other ASes,
and the BGP protocol connects the ASes.

All areas are connected to the backbone possibly by a tunnel, so it is possible to move
from one area to another area through a backbone. The primary role of the backbone

area is to route traffic between the other areas in the AS. Inter-area routing within the
AS requires the following movement of packets:

source host → internal router of the source area → area border router of the source
area → backbone → area border router of the destination area → internal router of the
destination area → final destination

After having described all the components of OSPF, let us now conclude the topic by
describing its operation.

At its heart, OSPF is a link state protocol that uses flooding of link state
information and Dijkstra's least-cost path algorithm. Using flooding, each router
informs all the other routers in its area of its neighbours and costs. This information
allows each router to construct the graph for its area(s) and compute the shortest paths
using Dijkstra's algorithm. The backbone routers do this as well. In addition, the
backbone routers accept information from the area border routers in order to compute the
best route from each backbone router to every other router. This information is
propagated back to the area border routers, which advertise it within their areas. In
this manner, the optimal route is selected.
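To make the last step concrete, here is a hedged sketch of Dijkstra's least-cost path
computation over a small link-state graph; the router names and link costs are invented
for illustration and are not taken from the unit:

import heapq

def dijkstra(graph: dict, source: str) -> dict:
    """Least-cost distances from `source` over a link-state graph.
    `graph` maps each router to a dict of {neighbour: link_cost}."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                          # stale queue entry
        for v, cost in graph[u].items():
            if d + cost < dist.get(v, float("inf")):
                dist[v] = d + cost
                heapq.heappush(heap, (dist[v], v))
    return dist

area = {"R1": {"R2": 1, "R3": 4},
        "R2": {"R1": 1, "R3": 2},
        "R3": {"R1": 4, "R2": 2}}
print(dijkstra(area, "R1"))   # {'R1': 0, 'R2': 1, 'R3': 3}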

4.3.4 BGP: The Exterior Gateway Routing Protocol

The purpose of the Border Gateway Protocol is to enable two different ASes to exchange
routing information so that IP traffic can flow across the AS border. A different
protocol is needed between the ASes because the objectives of interior gateway and
exterior gateway routing protocols are different. An exterior gateway routing protocol
such as BGP has to deal with policy matters. BGP is fundamentally a distance vector
protocol, but it is more appropriately characterised as a path vector protocol [Ref.5].
Instead of maintaining just the cost to each destination, each BGP router keeps track
of the path used [Ref.1]. Neighbouring BGP routers, known as BGP peers, exchange
the list of ASes on the path to a given destination rather than just cost information.

A major advantage of BGP's path vector approach is that it easily solves the
count-to-infinity problem, which is illustrated using Figure 4.

Figure 4: Solution to the count-to-infinity problem in BGP (an example topology of
routers A, B, C, D, E, F, G, H, I, J and K).

In Figure 4 there are routers A, B, C, D, E, F, G, H, I, J and K.

Now consider G's routing table. G uses the path G-C-D-K to forward a packet to K. As
discussed earlier, whenever a router gives out any routing information, it provides the
complete path.
For example, from A the path used to send a packet to K is ABCDK;
from B the path used is BCDK;
from C the path used is CGJK;
from E the path used is EFGJK;
from H the path used is HIJK.

After receiving all the paths from its neighbours, G will find the best route available.
It will outright reject the paths from C and E, since they pass through G itself.
Therefore, the choice left is between the routes announced by B and H. In this way, BGP
easily avoids the count-to-infinity problem. Now, suppose C crashes or the line B-C goes
down. If B then receives two routes from its other neighbours, ABCDK and FBCDK, both
can be rejected, because B can see that each of them passes through B itself and relies
on the failed router C. Other distance vector algorithms make the wrong choice because
they cannot tell whether or not their neighbours have independent routes to the
destination.
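The rejection rule G applies above can be sketched in a few lines; the structures below
are invented for illustration and are not BGP's actual message format:

MY_ROUTER = "G"

def usable(paths):
    """Keep only advertised paths that do not already contain this router;
    a path through ourselves would create a routing loop."""
    return [p for p in paths if MY_ROUTER not in p]

advertised_to_G = [
    ["B", "C", "D", "K"],        # learned from B
    ["H", "I", "J", "K"],        # learned from H
    ["C", "G", "J", "K"],        # learned from C -- contains G, so rejected
    ["E", "F", "G", "J", "K"],   # learned from E -- contains G, so rejected
]

print(usable(advertised_to_G))   # only the paths via B and via H survive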

 Check Your Progress 2


1) What are the important categories of Network layer protocols?
……………………………………………………………………………………
……………………………………………………………………………………
……………………………………………………………………………………
2) List the important OSPF routers and their purposes.
……………………………………………………………………………………
……………………………………………………………………………………
……………………………………………………………………………………
3) How is BGP different from other distance vector routing protocols?

……………………………………………………………………………………
……………………………………………………………………………………
……………………………………………………………………………………

4.4 SUMMARY

In this unit, we defined a large number of concepts and protocols. Tunneling is used
when the source and destination networks are of the same type but the network which lies
in between is different. A subnet allows a network to be split into several parts for
internal use while still acting like a single network to the outside world. It makes
management of IP addresses simpler. Another reason for creating a subnet is to
establish security between different work groups. The Internet is made up of a large
number of autonomous regions, each controlled by a different organisation, which can use
its own routing algorithm inside. A routing algorithm within an autonomous region (such
as a LAN or WAN) is called an interior gateway protocol; an algorithm for routing
between different autonomous regions is called an exterior gateway routing
protocol.

4.5 SOLUTIONS/ANSWERS
Check Your Progress 1
1) The following are the important features in which one network may differ from
another:
• Protocols
• Addressing mechanism
• Size of a packet
• Quality of service
• Flow control
• Congestion control.

2) There are two such mechanisms:

(i) Concatenated Virtual Circuit


(ii) Datagram Approach.

3) Tunneling is used when the source and destination networks are the same but
the network which lies in between is different. It uses a mechanism called
encapsulation, where data transfer unit of one protocol is enclosed inside a
different protocol.

Check Your Progress 2

1) There are three categories of network layer protocols:


• IP, which defines network layer addressing, the fields in the datagram, and the
actions taken by routers and end systems on a datagram based on the values
in these fields.
• RIP, OSPF and BGP : They are used for routing purposes.
• ICMP : Mainly used for error reporting in datagrams.

2) OSPF distinguishes four classes of routers:


• Internal routers that are wholly within one area
• Area border routers that connect two or more areas
• Backbone routers that are on the backbone, and
• AS boundary routers that talk to routers in other ASes.

3) It is different from other distance vector protocols because each BGP router keeps
track of the path used instead of maintaining just the cost to each destination.
Similarly, instead of periodically giving each neighbour its estimated cost to each
possible destination, each BGP router tells its neighbours the exact path it is using.

4.6 FURTHER READINGS

1) Computer Networks, 4th Edition, A. S. Tanenbaum, Prentice Hall of India, New Delhi.

2) Communication Networks: Fundamental Concepts and Key Architectures, Leon-Garcia and
Indra Widjaja, Tata McGraw Hill, New Delhi.

3) Data Communication and Networking, 2nd Edition, Behrouz A. Forouzan, Tata McGraw Hill,
New Delhi.

4) Networks, 2nd Edition, Timothy S. Ramteke, Pearson Education, New Delhi.

5) Computer Networking: A Top-Down Approach Featuring the Internet, James F. Kurose and
Keith W. Ross, Pearson Education, New Delhi.
