Unit 3 & 4 Notes

212CSE3302 – Computer Networks

UNIT III - THE NETWORK LAYER

The network layer is concerned with getting packets from the source all the way to the destination.

3.1. NETWORK LAYER DESIGN ISSUES


Store-and-Forward Packet Switching
The major components of the system are the carrier's equipment (routers connected by transmission lines), shown
inside the shaded oval, and the customers' equipment, shown outside the oval. Host H1 is directly connected to
one of the carrier's routers, A, by a leased line. In contrast, H2 is on a LAN with a router, F, owned and operated
by the customer. This router also has a leased line to the carrier's equipment. We have shown F as being outside
the oval because it does not belong to the carrier, but in terms of construction, software, and protocols, it is probably
no different from the carrier's routers.

This equipment is used as follows. A host with a packet to send transmits it to the nearest router, either on its own
LAN or over a point-to-point link to the carrier. The packet is stored there until it has fully arrived so the checksum
can be verified. Then it is forwarded to the next router along the path until it reaches the destination host, where it
is delivered. This mechanism is called store-and-forward packet switching.

Services Provided to the Transport Layer

The network layer provides services to the transport layer at the network layer/transport layer interface. An
important question is what kind of services the network layer provides to the transport layer. The network layer
services have been designed with the following goals in mind.
 The services should be independent of the router technology.
 The transport layer should be shielded from the number, type, and topology of the routers present.
 The network addresses made available to the transport layer should use a uniform numbering plan, even
across LANs and WANs.

Dr. M. Raja, KARE/CSE Page 95 of 155



Implementation of Connectionless Service

If connectionless service is offered, packets are injected into the subnet individually and routed independently of
each other. No advance setup is needed. In this context, the packets are frequently called datagrams (in analogy
with telegrams) and the subnet is called a datagram subnet. If connection oriented service is used, a path from the
source router to the destination router must be established before any data packets can be sent. This connection is
called a VC (virtual circuit), in analogy with the physical circuits set up by the telephone system, and the subnet
is called a virtual-circuit subnet. In this section we will examine datagram subnets; in the next one we will examine
virtual-circuit subnets.

A has only two outgoing lines—to B and C—so every incoming packet must be sent to one of these routers, even
if the ultimate destination is some other router. A's initial routing table is shown in the figure under the label
''initially.'' As they arrived at A, packets 1, 2, and 3 were stored briefly (to verify their checksums). Then each was
forwarded to C according to A's table. Packet 1 was then forwarded to E and then to F. When it got to F, it was
encapsulated in a data link layer frame and sent to H2 over the LAN. Packets 2 and 3 follow the same route.
However, something different happened to packet 4. When it got to A, it was sent to router B, even though it is also
destined for F. For some reason, A decided to send packet 4 via a different route than that of the first three. Perhaps
it learned of a traffic jam somewhere along the ACE path and updated its routing table, as shown under the label
''later.'' The algorithm that manages the tables and makes the routing decisions is called the routing algorithm.

Implementation of Connection-Oriented Service


For connection-oriented service, we need a virtual-circuit network. Let us see how that works. The idea behind
virtual circuits is to avoid having to choose a new route for every packet sent, as in Fig. 5-2. Instead, when a
connection is established, a route from the source machine to the destination machine is chosen as part of the


connection setup and stored in tables inside the routers. That route is used for all traffic flowing over the
connection, exactly the same way that the telephone system works. When the connection is released, the virtual
circuit is also terminated. With connection-oriented service, each packet carries an identifier telling which virtual
circuit it belongs to.

Host H1 has established connection 1 with host H2. It is remembered as the first entry in each of the routing tables.
The first line of A's table says that if a packet bearing connection identifier 1 comes in from H1, it is to be sent to
router C and given connection identifier 1. Similarly, the first entry at C routes the packet to E, also with connection
identifier 1. Now let us consider what happens if H3 also wants to establish a connection to H2. It chooses
connection identifier 1 (as it is H3's only connection) and tells the subnet to establish the virtual circuit.

Comparison of Virtual-Circuit and Datagram Networks

Both virtual circuits and datagrams have their supporters and their detractors. We will now attempt to summarize
both sets of arguments.


3.2. ROUTING ALGORITHMS

The main function of the network layer is routing packets from the source machine to the destination machine.
The routing algorithm is that part of the network layer software responsible for deciding which output line an
incoming packet should be transmitted on. If the subnet uses datagrams internally, this decision must be made
anew for every arriving data packet since the best route may have changed since last time.

If the subnet uses virtual circuits internally, routing decisions are made only when a new virtual circuit is being
set up. Thereafter, data packets just follow the previously established route. The latter case is sometimes called
session routing.

Routing algorithms can be grouped into two major classes:


 Non-adaptive
 Adaptive

Non-adaptive algorithms do not base their routing decisions on measurements or estimates of the current traffic
and topology. Instead, the choice of the route to use to get from I to J (for all I and J) is computed in advance, off-
line, and downloaded to the routers when the network is booted. This procedure is sometimes called static routing.

Adaptive algorithms, in contrast, change their routing decisions to reflect changes in the topology, and usually the
traffic as well. Adaptive algorithms differ in where they get their information (e.g., locally, from adjacent routers,
or from all routers) and in when they change the routes (e.g., periodically, or when the load or topology changes).


The Optimality Principle

It states that if router J is on the optimal path from router I to router K, then the optimal path from J to K also
falls along the same route.

Shortest Path Algorithm

 Let us begin our study of routing algorithms with a simple technique for computing optimal paths given a
complete picture of the network.
 To choose a route between a given pair of routers, the algorithm just finds the shortest path between them on
the graph.
 In the general case, the labels on the edges could be computed as a function of the distance, bandwidth, average
traffic, communication cost, measured delay, and other factors. By changing the weighting function, the
algorithm would then compute the ‘‘shortest’’ path measured according to any one of a number of criteria or
to a combination of criteria.


 Several algorithms for computing the shortest path between two nodes of a graph are known. This one is due
to Dijkstra (1959) and finds the shortest paths between a source and all destinations in the network. Each node
is labeled (in parentheses) with its distance from the source node along the best known path.
 We want to find the shortest path from A to D. We start out by marking node A as permanent, indicated by a
filled-in circle. Then we examine, in turn, each of the nodes adjacent to A (the working node), relabeling each
one with the distance to A. If the network had more than one shortest path from A to D and we wanted to find
all of them, we would need to remember all of the probe nodes that could reach a node with the same distance.
 Having examined each of the nodes adjacent to A, we examine all the tentatively labeled nodes in the whole
graph and make the one with the smallest label permanent, as shown in Fig. 5-7(b). This one becomes the new
working node. We now start at B and examine all nodes adjacent to it. If the sum of the label on B and the
distance from B to the node being considered is less than the label on that node, we have a shorter path, so the
node is relabeled.
 After all the nodes adjacent to the working node have been inspected and the tentative labels changed if
possible, the entire graph is searched for the tentatively labeled node with the smallest value. This node is
made permanent and becomes the working node for the next round. Figure 5-7 shows the first six steps of the
algorithm.
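
The labeling procedure above can be sketched in Python using a priority queue of tentative labels; the graph, node names, and edge weights below are illustrative, not taken from Fig. 5-7.

```python
import heapq

def dijkstra(graph, source):
    # graph: dict mapping node -> {neighbor: edge_weight}
    # Returns the shortest distance from source to every reachable node.
    dist = {source: 0}
    heap = [(0, source)]          # tentative labels, smallest first
    permanent = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in permanent:
            continue
        permanent.add(node)       # smallest tentative label becomes permanent
        for neighbor, weight in graph.get(node, {}).items():
            nd = d + weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd   # relabel: a shorter path was found
                heapq.heappush(heap, (nd, neighbor))
    return dist

# Example: the shortest A-to-D path goes A -> B -> C -> D with total cost 4.
g = {"A": {"B": 1, "C": 4}, "B": {"C": 2, "D": 5}, "C": {"D": 1}, "D": {}}
print(dijkstra(g, "A")["D"])
```

The heap stands in for the "search the whole graph for the smallest tentative label" step: popping the minimum element is exactly making that node permanent.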

Flooding

When a routing algorithm is implemented, each router must make decisions based on local knowledge, not the
complete picture of the network. A simple local technique is flooding, in which every incoming packet is sent out
on every outgoing line except the one it arrived on.

Flooding obviously generates vast numbers of duplicate packets, in fact, an infinite number unless some measures
are taken to damp the process. One such measure is to have a hop counter contained in the header of each packet
that is decremented at each hop, with the packet being discarded when the counter reaches zero.

A better technique for damming the flood is to have routers keep track of which packets have been flooded, to
avoid sending them out a second time. One way to achieve this goal is to have the source router put a sequence
number in each packet it receives from its hosts. Each router then needs a list per source router telling which
sequence numbers originating at that source have already been seen. If an incoming packet is on the list, it is not
flooded.
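
The two damping measures described above (a hop counter and per-source sequence numbers) can be sketched together as follows; the router class, field names, and topology are hypothetical, not part of any standard API.

```python
class FloodingRouter:
    def __init__(self, router_id, neighbors):
        self.id = router_id
        self.neighbors = neighbors     # outgoing lines
        self.seen = {}                 # source -> set of sequence numbers already flooded

    def receive(self, packet, arrived_on):
        src, seq, hops = packet["src"], packet["seq"], packet["hops"]
        if seq in self.seen.setdefault(src, set()):
            return []                  # duplicate: do not flood a second time
        self.seen[src].add(seq)
        if hops <= 1:
            return []                  # hop counter expired: discard
        out = dict(packet, hops=hops - 1)
        # forward on every line except the one the packet arrived on
        return [(n, out) for n in self.neighbors if n != arrived_on]

r = FloodingRouter("B", ["A", "C", "D"])
print(r.receive({"src": "A", "seq": 1, "hops": 3}, "A"))  # flooded to C and D
print(r.receive({"src": "A", "seq": 1, "hops": 3}, "C"))  # duplicate: dropped
```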

Distance Vector Routing

Computer networks generally use dynamic routing algorithms that are more complex than flooding, but more
efficient because they find shortest paths for the current topology. Two dynamic algorithms in particular, distance
vector routing and link state routing, are the most popular. In this section, we will look at the former algorithm.


A distance vector routing algorithm operates by having each router maintain a table (i.e., a vector) giving the best
known distance to each destination and which link to use to get there. These tables are updated by exchanging
information with the neighbors. Eventually, every router knows the best link to reach each destination.

The distance vector routing algorithm is sometimes called by other names, most commonly the distributed
Bellman-Ford routing algorithm, after the researchers who developed it (Bellman, 1957; and Ford and Fulkerson,
1962). It was the original ARPANET routing algorithm and was also used in the Internet under the name RIP.

In distance vector routing, each router maintains a routing table indexed by, and containing one entry for each
router in the network. This entry has two parts: the preferred outgoing line to use for that destination and an
estimate of the distance to that destination.
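
A single table update, as described above, might look like the following sketch; the table layout (destination mapped to a next-hop/cost pair) and the function name are assumptions for illustration.

```python
INF = float("inf")

def dv_update(my_table, neighbor, neighbor_vector, link_cost):
    # my_table: dest -> (next_hop, cost); neighbor_vector: dest -> neighbor's cost
    changed = False
    for dest, cost in neighbor_vector.items():
        new_cost = link_cost + cost
        cur_hop, cur_cost = my_table.get(dest, (None, INF))
        # adopt the neighbor's route if it is shorter, or refresh a route
        # previously learned from this same neighbor
        if new_cost < cur_cost or cur_hop == neighbor:
            if (neighbor, new_cost) != (cur_hop, cur_cost):
                my_table[dest] = (neighbor, new_cost)
                changed = True
    return changed

table = {"B": ("B", 1)}
dv_update(table, "B", {"C": 1, "D": 3}, link_cost=1)
print(table)  # routes to C and D now go via B
```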

The Count-to-Infinity Problem

The main issue with distance vector routing (DVR) protocols is routing loops, since the Bellman-Ford algorithm
cannot prevent them. A routing loop in a DVR network causes the count-to-infinity problem. Routing loops
usually occur when an interface goes down or when two routers send updates at the same time.

Consider three routers A, B, and C connected in a line. Once the Bellman-Ford algorithm converges, each router has entries for the others:
B will know that it can get to C at a cost of 1, and A will know that it can get to C via B at a cost of 2.

If the link between B and C goes down, B learns that it can no longer reach C via that link and removes the entry
from its table. But before B can send any updates, it may receive an update from A advertising that A can get to C
at a cost of 2. B can reach A at a cost of 1, so it installs a route to C via A at a cost of 3. A later receives B's
update and raises its own cost to 4. The two routers keep feeding each other stale information, with the costs
climbing without bound; this is called the count-to-infinity problem.
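
The exchange just described can be traced with a small simulation; the starting costs follow the A-B-C example (A reaches C via B at cost 2, B reaches C directly at cost 1) and the function itself is purely illustrative.

```python
def exchange(a_cost_to_C, b_cost_to_C, rounds):
    # After the B-C link fails, B routes to C via A and A routes to C via B,
    # each adding the cost-1 A-B link to the other's stale advertised cost.
    history = []
    for _ in range(rounds):
        b_cost_to_C = 1 + a_cost_to_C   # B believes A's vector
        a_cost_to_C = 1 + b_cost_to_C   # A believes B's new vector
        history.append((a_cost_to_C, b_cost_to_C))
    return history

print(exchange(2, 1, 3))  # costs climb each round instead of converging
```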

3.3. CONGESTION CONTROL ALGORITHM

When too many packets are present in the network, packet delay and packet loss increase, which degrades the
performance of the system. This situation is called congestion.


The network layer and the transport layer share the responsibility for handling congestion. One of the most effective
ways to control congestion is to reduce the load that the transport layer is placing on the network.

Congestion Control refers to techniques and mechanisms that can either prevent congestion, before it happens,
or remove congestion, after it has happened.

In a network vulnerable to congestion, as the offered load approaches the network's maximum capacity, the number
of packets actually delivered falls off. This can result in a situation where almost no packets are delivered
successfully. The goal of congestion control is to keep the network operating near its maximum capacity. The
following factors are responsible for congestion:

• Slow network links
• Shortage of buffer space
• Slow processors

Congestion control in computer networks covers several mechanisms that keep performance close to the desirable
level of capacity and thus protect the network from congestion. The generic term Congestion Control covers
preventive as well as reactive mechanisms.

Congestion control can be realized by working on either side of the relation Total Demand > Available Resources,
that is, by decreasing demand or by increasing the available resources. To control congestion in computer networks,
it is important to look at the architecture and implementation of the network, because any design decision affects
the congestion control strategy of the network and thus the demand or the available resources in that relation.
Those design decisions are called Policies.

One important policy is the connection mechanism. There are two fundamental distinct network types, connection-
oriented and connectionless networks. In connection-oriented network, all communication of a connection
between two endpoints is routed over the same gateways. In connectionless networks, single packets are routed
independently. Therefore, packets belonging to a single connection can be routed differently.

Admission Control, described below, is part of the connection-oriented network concept, but it is conceptually not
a part of connectionless networks. However, state-of-the-art network architectures combine the robustness of the
cheaper connectionless networks with the service-quality advantages of connection-oriented networks; the Quality
of Service (QoS) research area deals with this topic. Congestion control and resource allocation are two sides of
the same coin: a network is congested whenever Total Demand > Available Resources.


It is often desirable to allocate the scarce resources fairly, so that every participant gets an equal share of the
network's capacity. Packet queuing and service policies affect this. We will introduce Fair Queuing below as a
mechanism to ensure fairness for every source; at the cost of greater overhead, a related concept ensures fairness
for every source-destination pair. Packet drop policies deal with dropping packets when buffer space is too short
to queue more incoming packets; examples are Load Shedding and Active Queue Management (AQM).

Policies that affect congestion control

The table summarizes policies that affect congestion control mechanisms at different network layers. Here we
consider only policies from the network and transport layers.

Congestion can be controlled by increasing the available resources. This becomes necessary when network resources
are congested over a long period of time. Stronger resources have to be built up, but that is a manual task, not the
subject of congestion control algorithms implemented in network software or hardware.

Three examples to increase available resources:

1. Dial-up links can be added during high usage.


2. Transmission power can be increased on satellite links to increase their bandwidth.
3. Network paths can be split, so that extra traffic is sent via routes that may not be considered optimal
under low load. This solution is not yet well researched and therefore is not used in practice today. It
breaks with the concept of connection-oriented networks, because data is no longer routed over the
same network path.


The focus here is on congestion control algorithms that decrease demand. These mechanisms are divided into two
categories: one category prevents congestion from happening, and the other removes congestion after it has taken
place.

These two categories are:

1. Open loop
2. Closed loop

Open Loop Congestion Control


 In this method, policies are used to prevent the congestion before it happens.
 Congestion control is handled either by the source or by the destination.
 The various methods used for open loop congestion control are:

Retransmission Policy
 The sender retransmits a packet if it feels that the packet it has sent is lost or corrupted.
 However, retransmission in general may increase congestion in the network, so a good retransmission
policy is needed to prevent this.
 The retransmission policy and the retransmission timers need to be designed to optimize efficiency and at
the same time prevent congestion.

Window Policy
 To implement the window policy, the Selective Reject window method is used for congestion control.
 The Selective Reject method is preferred over Go-Back-N because in Go-Back-N, when the timer for a
packet expires, several packets are resent even though some may have arrived safely at the receiver. This
duplication may make congestion worse.
 The Selective Reject method resends only the specific lost or damaged packets.

Acknowledgement Policy

 The acknowledgement policy imposed by the receiver may also affect congestion.
 If the receiver does not acknowledge every packet it receives, it may slow down the sender and help prevent
congestion.
 Acknowledgments also add to the traffic load on the network. Thus, by sending fewer acknowledgements
we can reduce load on the network.
 To implement it, several approaches can be used:
1. A receiver may send an acknowledgement only if it has a packet to be sent.
2. A receiver may send an acknowledgement when a timer expires.
3. A receiver may also decide to acknowledge only N packets at a time.

Discarding Policy
 A router may discard less sensitive packets when congestion is likely to happen.
 Such a discarding policy may prevent congestion and at the same time may not harm the integrity of the
transmission.

Admission Policy
 An admission policy, which is a quality-of-service mechanism, can also prevent congestion in virtual
circuit networks.
 Switches first check the resource requirements of a flow before admitting it to the network.
 A router can deny establishing a virtual-circuit connection if there is congestion in the network or if there
is a possibility of future congestion.

Closed Loop Congestion Control

 Closed loop congestion control mechanisms try to remove the congestion after it happens.
 The various methods used for closed loop congestion control are:

Backpressure
 Back pressure is a node-to-node congestion control technique that starts at a congested node and propagates
in the direction opposite to the data flow.

 The backpressure technique can be applied only to virtual-circuit networks, in which each node knows the
upstream node from which a data flow is coming.
 In this method of congestion control, the congested node stops receiving data from the immediate upstream
node or nodes.


 This may cause the upstream node or nodes to become congested, and they, in turn, reject data from their
upstream node or nodes.
 As shown in the figure, node 3 is congested and stops receiving packets, informing its upstream node 2 to
slow down. Node 2 may in turn become congested and inform node 1 to slow down. Node 1 may then
become congested and inform the source node to slow down. In this way the pressure on node 3 is moved
backward to the source, and the congestion is alleviated.

Choke Packet

 In this method of congestion control, the congested router or node sends a special type of packet, called a
choke packet, to the source to inform it about the congestion.
 Here, the congested node does not inform its upstream node about the congestion, as in the backpressure
method.
 In the choke packet method, the congested node sends a warning directly to the source station; the
intermediate nodes through which the packet has traveled are not warned.

Implicit Signaling
 In implicit signaling, there is no communication between the congested node or nodes and the source.
 The source guesses that there is congestion somewhere in the network when it does not receive any
acknowledgment. Therefore, the delay in receiving an acknowledgment is interpreted as congestion in the
network.
 On sensing this congestion, the source slows down.
 This type of congestion control policy is used by TCP.

Explicit Signaling
 In this method, the congested nodes explicitly send a signal to the source or destination to inform about
the congestion.
 Explicit signaling is different from the choke packet method. In the choke packet method, a separate packet
is used for this purpose, whereas in explicit signaling, the signal is included in the packets that carry data.
 Explicit signaling can occur in either the forward or the backward direction.
 In backward signaling, a bit is set in a packet moving in the direction opposite to the congestion. This
bit warns the source about the congestion and tells it to slow down.


 In forward signaling, a bit is set in a packet moving in the direction of the congestion. This bit warns the
destination about the congestion. The receiver in this case uses policies such as slowing down the
acknowledgements to relieve the congestion.

Traffic Shaping

All network nodes, such as hosts or gateways, can be sending agents, because they are able to send data to other
network nodes. In networks based on virtual circuits, the transmission rate and quality are negotiated with the
receiving network nodes when a connection is initialized; the result of this negotiation is a flow specification.
Sending agents have to adhere to the flow specification and shape their traffic accordingly. In packet-switched
networks the volume of traffic changes quickly over time, which makes periodic renegotiation necessary in those
networks. Monitoring whether agents adhere to their negotiated flow specification is called Traffic Policing. To
control the sending rate of agents and prevent the receiving agents from being overwhelmed with data, one of the
most established mechanisms on the network layer is the Leaky Bucket algorithm.

Leaky Bucket Algorithm


The Leaky Bucket algorithm is a traffic-shaping solution, categorized as an open-loop approach. Imagine a leaky
bucket with a hole in its bottom, as shown in figure 6.6 below.

If the bucket is empty, no water drips out. If there is some water inside, water drips out at a constant rate. If
the bucket is full and more water is poured in, the water overflows the bucket. This can be implemented as
follows. On a sender, a time-driven queue ensures that an equal amount of data is sent per time interval.
On the one hand, data can be put into the queue quickly until it is full; on the other hand, data always leaves the
queue at the same rate. If the queue is full, no more packets can be stored in it and packet loss occurs. If the queue
is empty, no data leaves it. The Leaky Bucket algorithm can be implemented for packets or for a constant amount
of bytes sent within each time interval. Using this algorithm, transmission peaks are flattened to a constant
transmission rate at the sending agent.
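
A minimal sketch of the queue-based implementation just described, assuming discrete time ticks and counting in whole packets; the class and parameter names are invented for illustration.

```python
from collections import deque

class LeakyBucket:
    def __init__(self, capacity, rate):
        self.queue = deque()          # the bucket: a bounded FIFO queue
        self.capacity = capacity      # maximum packets the bucket can hold
        self.rate = rate              # packets released per time interval

    def arrive(self, packet):
        if len(self.queue) >= self.capacity:
            return False              # bucket overflows: packet is lost
        self.queue.append(packet)
        return True

    def tick(self):
        # release at most `rate` packets this interval, regardless of
        # how bursty the arrivals were
        released = []
        for _ in range(min(self.rate, len(self.queue))):
            released.append(self.queue.popleft())
        return released

b = LeakyBucket(capacity=3, rate=1)
print([b.arrive(i) for i in range(5)])  # a 5-packet burst: the last two overflow
print(b.tick(), b.tick())               # output drains at one packet per tick
```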


Token Bucket Algorithm


A more flexible approach to controlling the sending rate of agents is the Token Bucket algorithm, also categorized
as an open-loop approach. This algorithm allows short bursts while making sure that no data is lost. In contrast to
the Leaky Bucket algorithm, it is not the data to be sent but tokens that are queued over time. One token is needed
to send a single portion of data.
Implementations contain a token counter that is incremented on every time interval, so that the counter grows over
time until a maximum value is reached. The token counter is decremented by one for every data portion sent. When
the token counter is zero, no data can be transmitted. Consider, for example, a token counter with a maximum of
50 tokens. When no data is sent for a longer period of time, the counter will reach and stay at its maximum. If an
application then sends a series of 100 data portions through the network interface, 50 portions can be sent
immediately because 50 tokens are available. The remaining 50 portions are stored and transmitted one by one,
one on each time interval in which the token counter is incremented.
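
The counter-based behaviour described above (50 tokens, 100 data portions) can be sketched as follows, scaled down to a 5-token maximum for brevity; the class is illustrative, not a standard API.

```python
class TokenBucket:
    def __init__(self, max_tokens, tokens_per_tick):
        self.tokens = max_tokens          # start full, as after a long idle period
        self.max_tokens = max_tokens
        self.per_tick = tokens_per_tick
        self.backlog = []                 # data portions waiting for tokens

    def tick(self):
        # one token is added per time interval, up to the maximum
        self.tokens = min(self.max_tokens, self.tokens + self.per_tick)

    def send(self, portions):
        self.backlog.extend(portions)
        n = min(self.tokens, len(self.backlog))
        self.tokens -= n                  # one token consumed per portion sent
        sent, self.backlog = self.backlog[:n], self.backlog[n:]
        return sent

tb = TokenBucket(max_tokens=5, tokens_per_tick=1)
print(tb.send(list(range(10))))  # a 10-portion burst: 5 go out immediately
tb.tick()
print(tb.send([]))               # one more portion leaves per subsequent tick
```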

Figure 6.7: Traffic Shaping with Leaky and Token Bucket Algorithms

The Token Bucket's characteristic influence on the traffic shape is displayed in diagram (c) of figure 6.7, where the
sample incoming burst from diagram (a) is processed. First, there are enough tokens available to allow a burst
for a while. After that, the traffic is sent at a constant, lower rate, because new tokens become available at a
constant rate.


The Token Bucket as well as the Leaky Bucket algorithm can be implemented for packets or for constant byte
amounts. In the case of Token Bucket with a constant amount of bytes per token, it is possible to store fractions of
tokens for later transmissions when they have not been fully consumed by sending packets. For example, when a
100-byte token is used to send a 50-byte packet, the rest of the token (50 bytes) can be stored for later transmissions.

Load Shedding

An open-loop approach to controlling congestion on routers is the Load Shedding mechanism. Load Shedding
instructs a router to drop packets from its queue when it is congested. Note that Load Shedding only reacts to
situations where the router is already congested and no more packets can be stored in the queue; other queue
management solutions, such as Active Queue Management (AQM), monitor the queue state and take action to
prevent the router from becoming congested. When dropping packets, the router has to decide which packets
should be dropped. There are two concepts for dropping packets. On the one hand, the router can drop newly
arriving packets. This approach can be optimal for file transmissions. Consider a router that holds packets 7, 8,
and 9 of a 10-packet file in its full queue when packet 10 arrives. Assuming the receiving host drops packets that
arrive out of sequential order, the worst decision would be to drop packet 7, as that would lead to retransmission
of packets 7 to 10; dropping packet 10 would be optimal, as it would be the only packet that has to be retransmitted.
On the other hand, for some multimedia and real-time applications, the value of new packets is higher than the
value of old packets, and in those cases the router can be instructed to drop old packets. The concept of dropping
new packets is called wine; dropping old packets is called milk. Wine is implemented as tail drop on queues; milk
is implemented as head drop.
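
The wine and milk drop policies can be sketched as a single queue operation on the packet-7-to-10 example above; the function name and the policy strings are invented for illustration.

```python
from collections import deque

def shed(queue, capacity, packet, policy):
    # policy "wine": old packets are more valuable -> drop the new arrival (tail drop)
    # policy "milk": new packets are more valuable -> drop the oldest (head drop)
    if len(queue) < capacity:
        queue.append(packet)
        return None                # no congestion, nothing dropped
    if policy == "wine":
        return packet              # drop the newly arriving packet
    dropped = queue.popleft()      # "milk": evict the oldest to make room
    queue.append(packet)
    return dropped

q = deque([7, 8, 9])               # full queue holding packets 7-9
print(shed(q, 3, 10, "wine"))      # packet 10 is dropped; queue unchanged
```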

Fair Queuing

As noted above, congestion control and resource allocation are two sides of the same coin. To force hosts and
routers to implement and use the congestion control and avoidance algorithms described here, an open-loop
mechanism called Fair Queuing was introduced. Imagine hosts and routers connected to a router, all reducing
their data rates according to congestion control algorithms. A misbehaving sending host that does not reduce its
data rate would overflow the router and use more resources than the well-behaved hosts. To resolve this issue, a
FIFO queue is introduced on every incoming line, as shown in figure 6.8. Packets to deliver are chosen from all
queues by a round-robin algorithm. This ensures that misbehaving hosts or routers do not obtain more resources;
instead they get poorer service than they would if they behaved well, and their packets are dropped when they
overflow their incoming FIFO queue on the router.
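
The per-line FIFO queues with round-robin service described above might be sketched like this; the budget parameter (packets transmitted per scheduling pass) is an assumption for illustration.

```python
from collections import deque

def fair_round_robin(queues, budget):
    # queues: one FIFO per incoming line; serving them round robin means a
    # flooding sender cannot crowd out well-behaved ones.
    sent = []
    while budget > 0 and any(queues):
        for q in queues:
            if budget == 0:
                break
            if q:
                sent.append(q.popleft())
                budget -= 1
    return sent

# Line 0 floods four packets while line 1 sends one; line 1 still gets served.
qs = [deque(["a1", "a2", "a3", "a4"]), deque(["b1"])]
print(fair_round_robin(qs, 3))
```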


Admission Control

Admission Control is a preventive congestion avoidance mechanism. It manages the establishment of connections
between endpoints in connection-oriented networks. When a connection is established, the connected endpoints
can rely on the expected service quality, as congestion cannot occur. On the Internet, Admission Control cannot be
applied, because the Internet works connectionless.
In connection-oriented networks, the transmissions belonging to a connection are always routed over the same
network nodes. When a host tries to establish a connection, the virtual circuit has to be built up by all involved
network nodes, each of which has the chance to decide whether it can handle an additional connection. Answering
this question is not trivial, since it requires knowledge of the current load on the nodes as well as the additional
load expected from the new connection. To aid this decision, clients must declare their demanded service quality
to the involved network nodes. For clients, this approach may seem a bit restrictive, but it is a very stable solution:
in networks where connection establishment is only possible when the network load allows it, clients get a
guarantee that the requested quality of service will be fulfilled by the network, and there is no congestion during
normal operation. A remaining problem is ensuring that all participants follow the policies they advertised once
their connections are established; if they use more service than they requested, the network can become congested.
Connection-oriented network nodes therefore have to keep information about every connection, so that they can
check whether clients exceed their contracts; if clients do, the nodes should decrease the service to those
misbehaving clients accordingly.


Examples of connection-oriented networks using admission control to avoid congestion are telephone networks and
Asynchronous Transfer Mode (ATM) networks. Figure 6.9 shows admission control in computer networks.

3.4 IP ADDRESSING

INTRODUCTION
A computer often needs to communicate with another computer somewhere else in the world, usually
through the Internet. The packet transmitted by the sending computer may pass through several
LANs or WANs before reaching the destination computer. For this level of communication, we need a global
addressing scheme, called logical addressing or IP addressing.

IPv4 Address Format


Internet Protocol, being a layer-3 protocol (OSI), takes data segments from layer 4 (transport) and divides them into
packets. An IP packet encapsulates the data unit received from the layer above and adds its own header information.
The encapsulated data is referred to as the IP payload. The IP header contains all the information necessary to deliver the
packet at the other end.

Version − Version no. of Internet Protocol used (e.g. IPv4).


IHL − Internet Header Length; the length of the entire IP header, expressed in 32-bit words.
Type of Service − Hints to the network: Low Delay, High Throughput, Reliability (8 bits).
Total Length − Length of the entire IP packet (including IP header and IP payload).
Identification − If an IP packet is fragmented during transmission, all the fragments carry the same identification
number, so the original IP packet they belong to can be identified.
Flags − As required by the network resources, if the IP packet is too large to handle, these flags tell whether it can be
fragmented or not. In this 3-bit field, the MSB is always set to '0'.
Fragment Offset − This offset tells the exact position of the fragment in the original IP packet.


Time to Live − To avoid looping in the network, every packet is sent with some TTL value set, which tells the
network how many routers (hops) this packet can cross. At each hop, its value is decremented by one and when
the value reaches zero, the packet is discarded.
Protocol − Tells the network layer at the destination host to which protocol this packet belongs, i.e., the next-level
protocol. For example, the protocol number of ICMP is 1, TCP is 6, and UDP is 17.
Header Checksum − This field holds the checksum of the entire header, which is used to check whether the
packet was received error-free.
Source Address − 32-bit address of the sender (source) of the packet.
Destination Address − 32-bit address of the receiver (destination) of the packet.
Options − This optional field is used if the value of IHL is greater than 5. It may carry
values for options such as Security, Record Route, and Time Stamp.
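The fixed 20-byte part of the header described above can be decoded field by field. Below is a minimal Python sketch (not a production parser) using the standard struct module; the sample header bytes, addresses, and field values are invented for illustration.

```python
import struct

def parse_ipv4_header(raw: bytes) -> dict:
    """Decode the fixed 20-byte part of an IPv4 header."""
    (ver_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": ver_ihl >> 4,
        "ihl": ver_ihl & 0x0F,                # header length in 32-bit words
        "tos": tos,
        "total_length": total_len,
        "identification": ident,
        "flags": flags_frag >> 13,            # top 3 bits
        "fragment_offset": flags_frag & 0x1FFF,
        "ttl": ttl,
        "protocol": proto,                    # 1 = ICMP, 6 = TCP, 17 = UDP
        "header_checksum": checksum,
        "src": ".".join(str(b) for b in src),
        "dst": ".".join(str(b) for b in dst),
    }

# A hand-built sample header: version 4, IHL 5, TTL 64, protocol 6 (TCP)
hdr = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 20, 1, 0, 64, 6, 0,
                  bytes([192, 168, 1, 1]), bytes([10, 0, 0, 1]))
fields = parse_ipv4_header(hdr)
print(fields["version"], fields["ttl"], fields["protocol"], fields["src"])
```

Note how Version and IHL share one byte, and Flags and Fragment Offset share one 16-bit word, exactly as in the field list above.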

IPv4 ADDRESSES
An IPv4 address is a 32-bit address that uniquely and universally defines the connection of a device (for example,
a computer or a router) to the Internet. IPv4 addresses are unique. They are unique in the sense that each address
defines one, and only one, connection to the Internet. Two devices on the Internet can never have the same address
at the same time.

Address Space
A protocol such as IPv4 that defines addresses has an address space.
IPv4 uses 32-bit addresses, which means that the address space is 2^32, or 4,294,967,296 (more than 4 billion). This
means that, theoretically, if there were no restrictions, more than 4 billion devices could be connected to the
Internet. But the actual number is much less because of the restrictions imposed on the addresses.


2.2. IPv4 Address Notations


There are two prevalent notations to show an IPv4 address:
a. Binary notation and
b. Dotted decimal notation.

a. Binary Notation
In binary notation, the IPv4 address is displayed as 32 bits. Each octet is often referred to as a byte. So it is common
to hear an IPv4 address referred to as a 32-bit address or a 4-byte address. The following is an example of an IPv4
address in binary notation:
01110101 10010101 00011101 00000010
b.Dotted-Decimal Notation
To make the IPv4 address more compact and easier to read, Internet addresses are usually written in decimal form
with a decimal point (dot) separating the bytes. The following is the dotted decimal notation of the above address:
117.149.29.2
Figure 3.12 shows an IPv4 address in both binary and dotted-decimal notation. Note that because each byte (octet)
is 8 bits, each number in dotted-decimal notation is a value ranging from 0 to 255.

Figure 3.12 Dotted-decimal notation and binary notation for an IPv4 address

Example: Change the following IPv4 addresses from binary notation to dotted-decimal notation.
a. 10000001 00001011 00001011 11101111
b. 11000001 10000011 00011011 11111111
Solution
We replace each group of 8 bits with its equivalent decimal number and add dots for separation.
a. 129.11.11.239
b. 193.131.27.255

Example: Change the following IPv4 addresses from dotted-decimal notation to binary notation.
a. 11.56.45.78
b. 21.34.7.82
Solution
We replace each decimal number with its binary equivalent.
a. 00001011 00111000 00101101 01001110
b. 00010101 00100010 00000111 01010010
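Both conversions can be mechanized in a few lines of Python; the helper names below are our own:

```python
def binary_to_dotted(bits: str) -> str:
    """32-bit binary string (spaces optional) -> dotted-decimal notation."""
    bits = bits.replace(" ", "")
    return ".".join(str(int(bits[i:i + 8], 2)) for i in range(0, 32, 8))

def dotted_to_binary(addr: str) -> str:
    """Dotted-decimal notation -> four space-separated binary octets."""
    return " ".join(format(int(octet), "08b") for octet in addr.split("."))

print(binary_to_dotted("10000001 00001011 00001011 11101111"))  # 129.11.11.239
print(dotted_to_binary("11.56.45.78"))  # 00001011 00111000 00101101 01001110
```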


Types of IPv4 Addressing Schemes


There are two types of IPv4 addressing schemes:
 Classful Addressing
 Classless Addressing

Classful Addressing

 In classful addressing, the address space is divided into five classes: A, B, C, D, and E.
 Each class occupies some part of the address space.
 If the address is given in binary notation, the first few bits can immediately tell us the class of the address.
 If the address is given in decimal-dotted notation, the first byte defines the class. Both methods are shown
in below figure

Example: Find the class of each address.


a. 00000001 00001011 00001011 11101111
b. 11000001 10000011 00011011 11111111
c. 14.23.120.8
d. 252.5.15.111
Solution
a. The first bit is 0. This is a class A address.
b. The first 2 bits are 1s; the third bit is 0. This is a class C address.
c. The first byte is 14 (between 0 and 127); the class is A.
d. The first byte is 252 (between 240 and 255); the class is E.
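Since the class is determined by the first byte in dotted-decimal notation, a small illustrative helper can mechanize this lookup:

```python
def address_class(addr: str) -> str:
    """Classify a dotted-decimal IPv4 address into classes A-E."""
    first = int(addr.split(".")[0])
    if first <= 127:
        return "A"      # leading bit 0
    if first <= 191:
        return "B"      # leading bits 10
    if first <= 223:
        return "C"      # leading bits 110
    if first <= 239:
        return "D"      # leading bits 1110
    return "E"          # leading bits 1111

print(address_class("14.23.120.8"))    # A
print(address_class("252.5.15.111"))   # E
```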

Classes and Blocks

One problem with classful addressing is that each class is divided into a fixed number of blocks, with each block
having a fixed size, as shown in Table 19.1.

Table 19.1 Number of blocks and block size in classful IPv4 addressing


Class   Number of Blocks       Block Size            Application
A       2^7  = 128             2^24 = 16,777,216     Unicast
B       2^14 = 16,384          2^16 = 65,536         Unicast
C       2^21 = 2,097,152       2^8  = 256            Unicast
D       1                      2^28 = 268,435,456    Multicast
E       1                      2^28 = 268,435,456    Reserved
 Class A addresses were designed for large organizations with a large number of attached hosts or routers.
 Class B addresses were designed for midsize organizations with tens of thousands of attached hosts or
routers.
 Class C addresses were designed for small organizations with a small number of attached hosts or routers.

Limitations of Classful Addressing:

 A block in class A address is too large for almost any organization. This means most of the addresses in
class A were wasted and were not used.
 A block in class B is also very large, probably too large for many of the organizations that received a class
B block.
 A block in class C is probably too small for many organizations.
 Class D addresses were designed for multicasting. Each address in this class is used to define one group
of hosts on the Internet. The Internet authorities wrongly predicted a need for 268,435,456 groups. This
never happened and many addresses were wasted here too.
 And lastly, the class E addresses were reserved for future use; only a few were used, resulting in another
waste of addresses.

Netid and Hostid


 In classful addressing, an IP address in class A, B, or C is divided into netid and hostid.
 These parts are of varying lengths, depending on the class of the address. Figure 19.2 shows some netid
and hostid bytes.
 In class A, one byte defines the netid and three bytes define the hostid.
 In class B, two bytes define the netid and two bytes define the hostid.
 In class C, three bytes define the netid and one byte defines the hostid.

Table 19.2 Default masks for classful addressing



Mask
A mask (also called the default mask) is a 32-bit number made of contiguous 1s followed by contiguous 0s. The
masks for classes A, B, and C are shown in Table 19.2. The concept does not apply to classes D and E.

 The mask can help us to find the netid and the hostid. For example, the mask for a class A address has
eight 1s, which means the first 8 bits of any address in class A define the netid; the next 24 bits define the
hostid.
 The last column of Table 19.2 shows the mask in the form /n where n can be 8, 16, or 24 in classful
addressing.
 This notation is also called slash notation or Classless Interdomain Routing (CIDR) notation.

Address Depletion Problem


The fast growth of the Internet led to the near depletion of the available addresses in the classful addressing scheme.
Yet the number of devices on the Internet is much less than the 2^32 address space. We have run out of class A and
B addresses, and a class C block is too small for most midsize organizations.

 One solution that has alleviated the problem is the idea of classless addressing.
 Classful addressing, which is almost obsolete, is replaced with classless addressing.

Classless Addressing
To overcome address depletion and give more organizations access to the Internet, classless addressing was
designed and implemented. In this scheme, there are no classes, but the addresses are still granted in blocks.

Address Blocks
 In classless addressing, when an entity, small or large, needs to be connected to the Internet, it is granted
a block (range) of addresses.
 The size of the block (the number of addresses) varies based on the nature and size of the entity. For
example, a household may be given only two addresses; a large organization may be given thousands of
addresses. An ISP, as the Internet service provider, may be given thousands or hundreds of thousands
based on the number of customers it may serve.
 The Internet authorities impose three restrictions on classless address blocks:
o The addresses in a block must be contiguous, one after another.
o The number of addresses in a block must be a power of 2 (1, 2, 4, 8, ... ).
o The first address must be evenly divisible by the number of addresses.
A better way to define a block of addresses is to select any address in the block and the mask. As we discussed
before, a mask is a 32-bit number in which the n leftmost bits are 1s and the 32 - n rightmost bits are 0s.


 However, in classless addressing the mask for a block can take any value from 0 to 32. It is very convenient
to give just the value of n preceded by a slash (CIDR notation).
 In IPv4 addressing, a block of addresses can be defined as x.y.z.t/n, in which x.y.z.t defines one of the
addresses and the /n defines the mask.
 The address and the /n notation completely define the whole block (the first address, the last address, and
the number of addresses).

First Address: The first address in the block can be found by setting the 32 - n rightmost bits in the binary notation
of the address to 0s.
Example
A block of addresses is granted to a small organization. We know that one of the addresses is 205.16.37.39/28.
What is the first address in the block?
Solution
The binary representation of the given address is 11001101 00010000 00100101 00100111. If we set the 32 - 28
rightmost bits to 0, we get 11001101 00010000 00100101 00100000, or 205.16.37.32. This is the block
shown in Figure 19.3.

Last Address: The last address in the block can be found by setting the 32 - n rightmost bits in the binary notation
of the address to 1s.

Example:
Find the last address for the block in Example 19.6.
Solution
The binary representation of the given address is 11001101 00010000 00100101 00100111. If we set the 32 - 28
rightmost bits to 1, we get 11001101 00010000 00100101 00101111, or 205.16.37.47. This is the block
shown in Figure 19.3.

Figure 19.3 A block of16 addresses granted to a small organization

Number of Addresses: The number of addresses in the block is the difference between the
last and first address plus 1. It can easily be found using the formula 2^(32-n).
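All three quantities (first address, last address, and count) can be checked with Python's standard ipaddress module; passing strict=False lets us give a host address such as 205.16.37.39 rather than the block's first address:

```python
import ipaddress

# The block from the worked example: one member address plus the /28 mask
net = ipaddress.ip_network("205.16.37.39/28", strict=False)
print(net.network_address)    # first address: 205.16.37.32
print(net.broadcast_address)  # last address:  205.16.37.47
print(net.num_addresses)      # 2**(32-28) = 16
```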


Example: Find the number of addresses in Example 19.6.


Solution
The value of n is 28, which means that the number of addresses is 2^(32-28), or 16.
Example
Another way to find the first address, the last address, and the number of addresses is to represent the mask as a
32-bit binary (or 8-digit hexadecimal) number. This is particularly useful when we are writing a program to find
these pieces of information. In Example 19.5 the /28 can be represented as 11111111 11111111 11111111
11110000 (twenty-eight 1s and four 0s). Find
a. The first address
b. The last address
c. The number of addresses
Solution

a. The first address can be found by ANDing the given addresses with the mask. ANDing here is done bit
by bit. The result of ANDing 2 bits is 1 if both bits are 1s; the result is 0 otherwise.

Address: 11001101 00010000 00100101 00100111


Mask: 11111111 11111111 11111111 11110000
First address: 11001101 00010000 00100101 00100000

b. The last address can be found by ORing the given addresses with the complement of the mask. ORing
here is done bit by bit. The result of ORing 2 bits is 0 if both bits are 0s; the result is 1 otherwise. The
complement of a number is found by changing each 1 to 0 and each 0 to 1.

Address: 11001101 00010000 00100101 00100111


Mask complement: 00000000 00000000 00000000 00001111
Last address: 11001101 00010000 00100101 00101111

c. The number of addresses can be found by complementing the mask, interpreting it as a decimal
number, and adding 1 to it.

Mask complement: 00000000 00000000 00000000 00001111


Number of addresses: 15 + 1 = 16
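The AND/OR procedure above can be reproduced directly with bitwise operators. block_bounds is a hypothetical helper written for this sketch:

```python
def block_bounds(addr: str, n: int):
    """Return (first, last, count) for the /n block containing addr."""
    value = 0
    for octet in addr.split("."):
        value = (value << 8) | int(octet)
    mask = (0xFFFFFFFF << (32 - n)) & 0xFFFFFFFF
    first = value & mask                      # AND the address with the mask
    last = value | (~mask & 0xFFFFFFFF)       # OR with the mask's complement
    dotted = lambda v: ".".join(str((v >> s) & 0xFF) for s in (24, 16, 8, 0))
    # count = mask complement, read as a number, plus 1
    return dotted(first), dotted(last), (~mask & 0xFFFFFFFF) + 1

print(block_bounds("205.16.37.39", 28))  # ('205.16.37.32', '205.16.37.47', 16)
```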

Network Addresses

A very important concept in IP addressing is the network address. When an organization is given a block of
addresses, the organization is free to allocate the addresses to the devices that need to be connected to the
Internet.
 The first address in the class, however, is normally (not always) treated as a special address. The
first address is called the network address and defines the organization network.
 It defines the organization itself to the rest of the world. Usually the first address is the one that is
used by routers to direct the message sent to the organization from the outside.
Figure 19.4 shows an organization that is granted a 16-address block. The organization network is connected
to the Internet via a router. The router has two addresses. One belongs to the granted block; the other belongs

to the network that is at the other side of the router. We call the second address x.y.z.t/n because we do not
know anything about the network it is connected to at the other side. All messages destined for addresses in
the organization block (205.16.37.32 to 205.16.37.47) are sent, directly or indirectly, to x.y.z.t/n. We say
directly or indirectly because we do not know the structure of the network to which the other side of the router
is connected.

Figure 19.4 A network configuration for the block 205.16.37.32/28

The first address in a block is normally not assigned to any device; it is used as the network address that
represents the organization to the rest of the world.

Hierarchy
IP addresses, like other addresses or identifiers we encounter these days, have levels of hierarchy.
Two-Level Hierarchy: No Subnetting

An IP address can define only two levels of hierarchy when not subnetted.
 The n leftmost bits of the address x.y.z.t/n define the network (organization network).
 The 32 – n rightmost bits define the particular host (computer or router) to the network.
 The two common terms are prefix and suffix.
 The part of the address that defines the network is called the prefix; the part that defines the host is
called the suffix. Figure 19.5 shows the hierarchical structure of an IPv4 address.

Figure 19.5 Two levels of hierarchy in an IPv4 address



 The prefix is common to all addresses in the network; the suffix changes from one device to
another.
 Each address in the block can be considered as a two-level hierarchical structure: the leftmost n
bits (prefix) define the network; the rightmost 32 - n bits define the host.

Subnetting
An organization that is granted a large block of addresses may want to create clusters of networks (called
subnets) and divide the addresses between the different subnets. The rest of the world still sees the
organization as one entity; however, internally there are several subnets. All messages are sent to the
router address that connects the organization to the rest of the Internet; the router routes the message to the
appropriate subnets. The organization, however, needs to create small sub blocks of addresses, each assigned
to specific subnets. The organization has its own mask; each subnet must also have its own.

Figure 19.6 Configuration and addresses in a subnetted network

For example, suppose an organization is given the block 17.12.14.0/26, which contains 64 addresses. The
organization has three offices and needs to divide the addresses into three sub blocks of 32, 16, and 16 addresses.
We can find the new masks by using the following arguments:

1. Suppose the mask for the first subnet is n1; then 2^(32-n1) must be 32, which means that n1 = 27.

2. Suppose the mask for the second subnet is n2; then 2^(32-n2) must be 16, which means that n2 = 28.

3. Suppose the mask for the third subnet is n3; then 2^(32-n3) must be 16, which means that n3 = 28.
This means that we have the masks 27, 28, 28 with the organization mask being 26. Figure 19.6 shows one
configuration for the above scenario.
Let us check to see if we can find the subnet addresses from one of the addresses in the subnet.


a. In subnet 1, the address 17.12.14.29/27 can give us the subnet address if we use the mask /27
because
Host: 00010001 00001100 00001110 00011101
Mask: /27
Subnet: 00010001 00001100 00001110 00000000 .... (17.12.14.0)
b. In subnet 2, the address 17.12.14.45/28 can give us the subnet address if we use the mask /28
because
Host: 00010001 00001100 00001110 00101101
Mask: /28
Subnet: 00010001 00001100 00001110 00100000 .... (17.12.14.32)

c. In subnet 3, the address 17.12.14.50/28 can give us the subnet address if we use the mask
/28 because
Host: 00010001 00001100 00001110 00110010
Mask: /28
Subnet: 00010001 00001100 00001110 00110000 .... (17.12.14.48)

Note that applying the mask of the network, /26, to any of the addresses gives us the network address
17.12.14.0/26. We can say that through subnetting, we have three levels of hierarchy. Note that in our example,
the subnet prefix length can differ for the subnets as shown in Figure 19.7.

Figure 19.7 Three-level hierarchy in an IPv4 address
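The three subnet computations above can be verified with the ipaddress module, again using strict=False to apply each mask to a member address:

```python
import ipaddress

# Recover each subnet address by applying its mask to a member address
for host, prefix in [("17.12.14.29", 27), ("17.12.14.45", 28), ("17.12.14.50", 28)]:
    subnet = ipaddress.ip_network(f"{host}/{prefix}", strict=False)
    print(f"{host}/{prefix} -> {subnet}")

# Applying the organization mask /26 to any address yields the network address
print(ipaddress.ip_network("17.12.14.50/26", strict=False))  # 17.12.14.0/26
```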

More Levels of Hierarchy


The structure of classless addressing does not restrict the number of hierarchical levels. An organization can
divide the granted block of addresses into sub blocks. Each sub block can in turn be divided into smaller sub
blocks. And so on. One example of this is seen in the ISPs.
 A national ISP can divide a granted large block into smaller blocks and assign each of them to a
regional ISP.
 A regional ISP can divide the block received from the national ISP into smaller blocks and assign each
one to a local ISP.
 A local ISP can divide the block received from the regional ISP into smaller blocks and assign each one
to a different organization.


 Finally, an organization can divide the received block and make several subnets out of it.

Address Allocation
 The next issue in classless addressing is address allocation. How are the blocks allocated? The ultimate
responsibility of address allocation is given to a global authority called the Internet Corporation for
Assigned Names and Addresses (ICANN).
 However, ICANN does not normally allocate addresses to individual organizations. It assigns a large
block of addresses to an ISP. Each ISP, in turn, divides its assigned block into smaller sub blocks and
grants the sub blocks to its customers.
 In other words, an ISP receives one large block to be distributed to its Internet users.
o This is called address aggregation: many blocks of addresses are aggregated in one block and granted to
one ISP.

Example
An ISP is granted a block of addresses starting with 190.100.0.0/16 (65,536 addresses). The
ISP needs to distribute these addresses to three groups of customers as follows:
a. The first group has 64 customers; each needs 256 addresses.
b. The second group has 128 customers; each needs 128 addresses.
c. The third group has 128 customers; each needs 64 addresses.

Design the sub blocks and find out how many addresses are still available after these allocations.
Solution
Figure 19.8 shows the situation.

Figure 19.8 An example of address allocation and distribution by an ISP

1. Group 1
For this group, each customer needs 256 addresses. This means that 8 (log2 256) bits are needed to
define each host. The prefix length is then 32 - 8 = 24. The addresses are


1st Customer: 190.100.0.0/24 to 190.100.0.255/24
2nd Customer: 190.100.1.0/24 to 190.100.1.255/24
...
64th Customer: 190.100.63.0/24 to 190.100.63.255/24
Total = 64 × 256 = 16,384

2. Group2
For this group, each customer needs 128 addresses. This means that 7 (log2 128) bits are needed to define
each host. The prefix length is then 32 - 7 = 25. The addresses are:

1st Customer: 190.100.64.0/25 to 190.100.64.127/25
2nd Customer: 190.100.64.128/25 to 190.100.64.255/25
...
128th Customer: 190.100.127.128/25 to 190.100.127.255/25
Total = 128 × 128 = 16,384

3. Group3
For this group, each customer needs 64 addresses. This means that 6 (log2 64) bits are needed to define each
host. The prefix length is then 32 - 6 = 26. The addresses are

1st Customer: 190.100.128.0/26 to 190.100.128.63/26
2nd Customer: 190.100.128.64/26 to 190.100.128.127/26
...
128th Customer: 190.100.159.192/26 to 190.100.159.255/26
Total = 128 × 64 = 8,192
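The three group allocations can be replayed programmatically; the variable names (plan, group_starts) are our own:

```python
import ipaddress

base = ipaddress.ip_network("190.100.0.0/16")
plan = [(64, 24), (128, 25), (128, 26)]     # (customers, prefix) for groups 1-3
current = int(base.network_address)
group_starts, allocated = [], 0
for customers, prefix in plan:
    group_starts.append(str(ipaddress.ip_address(current)))
    size = 2 ** (32 - prefix)               # addresses per customer block
    current += customers * size
    allocated += customers * size

print(group_starts)                    # ['190.100.0.0', '190.100.64.0', '190.100.128.0']
print(allocated)                       # 40960 addresses granted
print(base.num_addresses - allocated)  # 24576 still available
```

The 24,576 remaining addresses (190.100.160.0 upward) stay with the ISP for future allocation.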

To understand what Variable Length Subnet Masking (VLSM) is, let's go through a business example.
Suppose a company has bought the IP address range 37.1.1.0/24 (256 addresses). The company has 5 offices,
as shown in figure 1 below:

We are tasked to subnet the 37.1.1.0/24 IP address block and assign each office an IP subnet. Let's see what the
two ways to do this are.


What is FLSM?
The first approach to this task is to divide the 256-address block into four equal-sized subnets. This technique
is called Fixed Length Subnet Mask (FLSM). The benefit of this approach is that all subnets have the same
subnet mask, which makes the process very straightforward and less prone to errors. Figure 2 below illustrates
this example.

However, this method results in a significant waste of IP addresses. For example, office-4 has only 10 users,
but we assign a subnet with 64 IP addresses. Hence, 54 addresses sit unused. From the company's point of
view, this is a bad use of resources.

VLSM stands for Variable Length Subnet Mask. VLSM is a subnetting technique that allows network admins
to allocate IP addresses more efficiently using different subnet masks for different network segments. It provides
greater flexibility in assigning IP addresses by creating subnets of varying sizes based on the specific needs and
number of hosts in each subnet. This technique helps reduce the waste of IP addresses and better uses the
available IP space.


Notice that using VLSM, we are left with 112 free IP addresses that we can allocate to another location in the
future. Compared to the FLSM approach, this method is much more efficient. The main idea is that VLSM
allows us to divide an IP address space into subnets of varying sizes based on the specific requirements of each
office. This helps to minimize the waste of IP addresses, as each subnet is assigned only the necessary number
of IPs instead of using fixed-size subnets that may be too large or too small for the purpose.
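A VLSM allocation along these lines can be sketched with the ipaddress module. The per-office host counts below are assumed for illustration (the original figure with the exact numbers is not reproduced here), so the resulting free-address count need not match the 112 quoted above:

```python
import ipaddress

# Assumed host counts per office (illustrative only)
demands = [("office-1", 60), ("office-2", 28), ("office-3", 12),
           ("office-4", 10), ("office-5", 5)]

current = int(ipaddress.ip_address("37.1.1.0"))
end = current + 256                          # the /24 holds 256 addresses
plan = []
# Allocate the largest subnets first so each block stays size-aligned
for office, hosts in sorted(demands, key=lambda d: -d[1]):
    prefix = 32
    while 2 ** (32 - prefix) < hosts + 2:    # +2 for network/broadcast
        prefix -= 1
    plan.append((office, ipaddress.ip_network((current, prefix))))
    current += 2 ** (32 - prefix)

for office, net in plan:
    print(office, net)
print("free addresses:", end - current)
```

Each office receives a subnet just large enough for its hosts (/26, /27, /28, /28, /29 here), leaving the tail of the /24 free for future growth.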

Network Address Translation (NAT)


The number of home users and small businesses that want to use the Internet is ever increasing. In the beginning,
a user was connected to the Internet with a dial-up line, which means that she was connected for a specific period
of time. An ISP with a block of addresses could dynamically assign an address to this user. An address was given
to a user when it was needed. But the situation is different today. Home users and small businesses can be
connected by an ADSL line or cable modem. In addition, many are not happy with one address; many have
created small networks with several hosts and need an IP address for each host. With the shortage of addresses,
this is a serious problem.

A quick solution to this problem is called network address translation (NAT).


 NAT enables a user to have a large set of addresses internally and one address, or a small set of addresses,
externally. The traffic inside can use the large set; the traffic outside, the small set.
 To separate the addresses used inside the home or business and the ones used for the Internet, the Internet
authorities have reserved three sets of addresses as private addresses, shown in Table 19.3.

Table 19.3 Addresses for private networks

 Any organization can use an address out of this set without permission from the Internet authorities.
Everyone knows that these reserved addresses are for private networks.
 They are unique inside the organization, but they are not unique globally. No router will forward a
packet that has one of these addresses as the destination address.
 The site must have only one single connection to the global Internet through a router that runs the
NAT software.

Figure 19.9 shows a simple implementation of NAT. As Figure 19.9 shows, the private network uses private
addresses. The router that connects the network to the global address uses one private address and one
global address. The private network is transparent to the rest of the Internet; the rest of the Internet sees
only the NAT router with the address 200.24.5.8.

Figure 19.9 A NAT implementation

Address Translation
 All the outgoing packets go through the NAT router, which replaces the source address in the packet with
the global NAT address.
 All incoming packets also pass through the NAT router, which replaces the destination address in the packet
(the NAT router global address) with the appropriate private address. Figure 19.10 shows an example of
address translation.

Figure 19.10 Addresses in a NAT
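The translation table kept by a NAT router can be illustrated with a toy sketch. Note that this models the port-based variant (NAPT) rather than pure address translation, and all addresses and port numbers are invented:

```python
NAT_GLOBAL = "200.24.5.8"      # the router's single global address
table = {}                     # (private_ip, private_port) -> external port
reverse = {}                   # external port -> (private_ip, private_port)
next_port = 5000

def translate_outgoing(src_ip, src_port):
    """Rewrite the source of an outgoing packet to the global NAT address."""
    global next_port
    key = (src_ip, src_port)
    if key not in table:
        table[key] = next_port
        reverse[next_port] = key
        next_port += 1
    return NAT_GLOBAL, table[key]

def translate_incoming(dst_port):
    """Map an incoming packet back to the private host of the flow."""
    return reverse[dst_port]

print(translate_outgoing("172.18.3.1", 1400))   # ('200.24.5.8', 5000)
print(translate_incoming(5000))                 # ('172.18.3.1', 1400)
```

Because the table is built only by outgoing traffic, hosts outside cannot initiate a flow to a private address, which is why the private network stays invisible to the rest of the Internet.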

IPv6 ADDRESSES

Despite all short-term solutions, such as classless addressing and NAT, address depletion is still a long-term
problem for the Internet. This and other problems in the IP protocol itself, such as lack of accommodation
for real-time audio and video transmission, and encryption and authentication of data for some applications,
have been the motivation for IPv6.

Structure
An IPv6 address consists of 16 bytes (octets); it is 128 bits long.

Hexadecimal Colon Notation
To make addresses more readable, IPv6 specifies hexadecimal colon notation. In this notation, the 128 bits
are divided into eight sections, each 2 bytes in length. Two bytes in hexadecimal notation require four
hexadecimal digits. Therefore, the address consists of 32 hexadecimal digits, with every four digits
separated by a colon, as shown in Figure 19.13.


Figure 19.13 IPv6 address in binary and hexadecimal colon notation


Abbreviation
Although the IP address, even in hexadecimal format, is very long, many of the digits are zeros. In this
case, we can abbreviate the address. The leading zeros of a section (four digits between two colons) can be
omitted. Only the leading zeros can be dropped, not the trailing zeros (see Figure 19.14).

Figure 19.14 Abbreviated IPv6 addresses

Using this form of abbreviation, 0074 can be written as 74, 000F as F, and 0000 as 0. Note that 3210 cannot
be abbreviated. Further abbreviation is possible if there are consecutive sections consisting of zeros
only. We can remove the zeros altogether and replace them with a double colon. Note that this type of
abbreviation is allowed only once per address: if there are two runs of zero sections, only one of them can be
abbreviated. Re-expansion of the abbreviated address is very simple: align the unabbreviated portions and
insert zeros to get the original expanded address.

Example: Expand the address 0:15::1:12:1213 to its original.


Solution
We first need to align the left side of the double colon to the left of the original pattern and the right side
of the double colon to the right of the original pattern to find now many 0s we need to replace the double
colon.

This means that the original address is 0000:0015:0000:0000:0000:0001:0012:1213.
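Abbreviation and re-expansion can both be checked with Python's ipaddress module:

```python
import ipaddress

# Re-expand the abbreviated address from the example above
addr = ipaddress.ip_address("0:15::1:12:1213")
print(addr.exploded)    # 0000:0015:0000:0000:0000:0001:0012:1213
print(addr.compressed)  # 0:15::1:12:1213
```

The exploded form aligns the unabbreviated sections and fills the double colon with the missing zero sections, exactly as described above.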


Routing Information Protocol (RIP)


Routing Information Protocol or RIP is one of the first routing protocols to be created. RIP is used in both Local
Area Networks (LANs) and Wide Area Networks (WANs), and also runs on the Application layer of the OSI
model. There are multiple versions of RIP, including RIPv1 and RIPv2. The original version, RIPv1,
determines network paths based on the IP destination and the hop count of the journey.
RIPv1 interacts with the network by broadcasting its routing table to all routers connected to the network. RIPv2 is a
little more sophisticated and sends its routing table to a multicast address. RIPv2 also uses
authentication to keep data more secure and chooses a subnet mask and gateway for future traffic. The main
limitation of RIP is that it has a maximum hop count of 15 which makes it unsuitable for larger networks.
Pros:
 Historical Significance: RIP is one of the oldest and widely recognized routing protocols.
 Operational Simplicity: It’s relatively straightforward to understand and implement.
 Application Layer Operation: Operates on the application layer, making it easy to manage and
configure.
 Multicast Capability (RIPv2): RIPv2 can multicast its routing table, providing a more efficient way to
communicate with other routers than broadcasting.
 Enhanced Security (RIPv2): RIPv2 offers authentication measures to enhance data security.
Cons:
 Maximum Hop Count: RIP’s maximum hop count of 15 restricts its use in larger networks.
 Lack of Scalability: Due to its hop count limitation, it is not suited for modern expansive networks.
 Broadcaster (RIPv1): RIPv1’s method of broadcasting its entire table can lead to increased traffic and
potential inefficiencies.
 Limited Route Metric: RIP uses hop count as its sole metric, which may not always represent the best
path in complex networks.
 Slower Convergence: RIP can be slower to adapt to network changes, leading to potential temporary
routing loops.

Open Shortest Path First (OSPF)


Open Shortest Path First or OSPF protocol is a link-state IGP that was tailor-made for IP networks using
the Shortest Path First (SPF) algorithm. The SPF routing algorithm is used to calculate the shortest path
spanning-tree to ensure efficient data transmission of packets. OSPF routers maintain databases detailing
information about the surrounding topology of the network. This database is filled with data taken from Link
State Advertisements (LSAs) sent by other routers. LSAs are packets that detail information about how many
resources a given path would take.
OSPF also uses the Dijkstra algorithm to recalculate network paths when the topology changes. This protocol
is also relatively secure as it can authenticate protocol changes to keep data secure. It is used by many

Dr. M. Raja, KARE/CSE Page 128 of 155


212CSE3302 – Computer Networks

organizations because it’s scalable to large environments. Topology changes are tracked and OSPF can
recalculate compromised packet routes if a previously-used route has been blocked.
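The SPF computation OSPF runs over its link-state database is Dijkstra's algorithm. A compact sketch over an adjacency map (graph layout and names are illustrative):

```python
import heapq

def spf(graph, source):
    """Dijkstra's shortest-path-first over a link-state graph.
    graph[u] = {v: cost, ...}; returns the cost of the best path
    from `source` to every reachable node."""
    dist = {source: 0}
    pq = [(0, source)]  # priority queue of (distance, node)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue  # stale queue entry; a shorter path was already found
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist
```

When an LSA reports a topology change, a router rebuilds its graph and reruns this computation, which is exactly the "dynamic adaptability" listed below.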
Pros:
 Efficient Routing: Utilizes the Shortest Path First (SPF) algorithm to ensure optimal data packet
transmission.
 Detailed Network Insight: OSPF routers maintain a database on the network’s topology, offering a
detailed perspective on its structure.
 Dynamic Adaptability: Employs the Dijkstra algorithm to dynamically adjust to network topology
changes, ensuring continuity in data transmission.
 Security Features: Offers protocol change authentication to maintain data security, ensuring that only
authorized updates are made.
 Highly Scalable: Suitable for both small and large-scale network environments, making it versatile for
various organizational sizes.
Cons:
 Complex Configuration: Given its many features, OSPF can be complex to set up and maintain.
 Higher Overhead: Maintaining detailed databases and frequently recalculating routes can generate
more network overhead.
 Sensitive to Topology Changes: While OSPF can adapt to changes, frequent topology alterations can
cause performance dips as it recalculates routes.
 Resource Intensive: OSPF routers require more memory and CPU resources due to their database
maintenance and route recalculations.
 Potential for Large LSDB: In very large networks, the Link State Database (LSDB) can grow
significantly, necessitating careful design and segmenting.
Border Gateway Protocol (BGP)
Border Gateway Protocol or BGP is the routing protocol of the internet and is classified as a path
vector protocol. BGP was designed to replace EGP with a decentralized approach to routing. The BGP Best
Path Selection Algorithm is used to select the best routes for data packet transfers. If you don’t have any custom
settings then BGP will select routes with the shortest path to the destination.
However, many administrators choose to adjust routing decisions to criteria in line with their needs. The best
routing path selection algorithm can be customized by changing the BGP cost community attribute. BGP
can make routing decisions based on factors such as weight, local preference, locally generated routes, AS_Path length,
origin type, multi-exit discriminator, eBGP over iBGP, IGP metric, router ID, cluster list and neighbor IP
address.
BGP only sends updated routing table data when something changes. As a result, there is no auto-discovery of
topology changes, which means that the user has to configure BGP manually. In terms of security, BGP
can be authenticated so that only approved routers can exchange data with each other.
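A heavily simplified sketch of best-path comparison using three of the attributes listed above (higher local preference wins, then shorter AS_Path, then lower multi-exit discriminator). Real BGP applies many more tie-breakers in a fixed order, so this is only an illustration of the idea:

```python
def best_path(routes):
    """Pick the best route from a list of candidate dicts.
    Ordering: highest local_pref, then shortest as_path, then lowest med.
    All other BGP tie-breakers are deliberately omitted."""
    return min(routes, key=lambda r: (-r['local_pref'],
                                      len(r['as_path']),
                                      r['med']))
```

With equal local preference, the route with the shorter AS_Path wins, which matches the default "shortest path to the destination" behaviour described above.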

Pros:
 Internet Backbone: As the primary routing protocol of the internet, BGP plays a pivotal role in global
data exchanges.
 Decentralized Design: Unlike its predecessor EGP, BGP’s decentralized nature ensures more robust
and adaptable network operations.
 Customizable Path Selection: BGP’s Best Path Selection Algorithm can be tailored to meet unique
network demands by adjusting attributes.
 Efficient Updates: Only transmitting updates when there’s a change, BGP reduces unnecessary network
traffic.
 Granular Routing Decisions: Administrators have a plethora of factors like weight, AS_Path length,
and IGP metric to inform routing decisions, allowing for a high degree of routing precision.
 Authentication: BGP provides security measures allowing only authorized routers to participate in data
exchanges, enhancing the security of routing updates.
Cons:
 Complex Configuration: BGP requires meticulous manual configuration since it doesn’t auto-discover
topology changes.
 Potential Instability: Mistakes or malicious actions in BGP configurations can inadvertently or
intentionally divert internet traffic, potentially leading to large-scale outages.
 Scalability Concerns: As the internet grows, BGP’s scalability, in its current form, might pose
challenges.
 Vulnerabilities: Despite authentication measures, BGP is historically susceptible to certain security
issues, like prefix hijacking.
 Learning Curve: Given its complexity and significance, mastering BGP can be challenging for many
network administrators.
 Convergence Time: BGP can sometimes take longer to converge after a network change compared to
some other protocols.

DHCP ARP and ICMP

DHCP (Dynamic Host Configuration Protocol): DHCP is a protocol that automatically assigns IP addresses to
devices on a network. When a device connects to a network, it sends a DHCP request, and the DHCP server
responds with an IP address, subnet mask, default gateway, and other network configuration information.

ARP (Address Resolution Protocol): ARP is a protocol that maps IP addresses to MAC addresses. When a device
wants to send a packet to another device on the same network, it uses ARP to find the MAC address associated
with the destination IP address.


ICMP (Internet Control Message Protocol): ICMP is a protocol that is used for network diagnostics and
troubleshooting. ICMP messages are typically generated by network devices when errors occur, and they are used
to provide feedback to the sender about the status of the network. These protocols work together to enable
communication between devices on an IP network. DNS provides a way to translate human-readable domain
names into machine-readable IP addresses, DHCP simplifies the process of IP address assignment, ARP is used
to resolve IP addresses to MAC addresses for communication on the local network, and ICMP provides feedback
about the status of the network. Together, these protocols form the backbone of IP networking and make it
possible for devices to communicate with one another over the internet.

Exercises:

1. Write the network address, broadcast address, and a valid host range for questions a through f.
a. 192.168.100.25/30
b. 192.168.100.37/28
c. 192.168.100.66/27
d. 192.168.100.17/29
e. 192.168.100.99/26
f. 192.168.100.99/25
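As a cross-check for exercises like question 1(a), Python's standard ipaddress module computes the network address, broadcast address, and usable host range directly:

```python
import ipaddress

# Question (a): 192.168.100.25/30.  strict=False lets us pass a host
# address and have the module derive the containing network.
net = ipaddress.ip_network('192.168.100.25/30', strict=False)
print(net.network_address)              # 192.168.100.24
print(net.broadcast_address)            # 192.168.100.27
print([str(h) for h in net.hosts()])    # ['192.168.100.25', '192.168.100.26']
```

The same three lines answer questions 2 and 3 as well by substituting the address in question.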

2. What is the broadcast address of 192.168.192.10/29?


3. What is the subnet for host ID 10.16.3.65/23?
4. A large number of consecutive IP addresses are available starting at 192.168.0.0. Suppose that
four organizations, A, B, C, and D, request 2000, 4000, 8000, and 4000 addresses, respectively,
and in that order. For each of these, give the first IP address assigned, the last IP address
assigned, and the mask in the x.x.x.x/s notation.
5. Complete the following based on the decimal IP address.

   Decimal IP Address  | Address Class | Subnet Bits | Host Bits | Subnets (2^x) | Hosts (2^x - 2)
   10.25.66.154/23     |               |             |           |               |
   172.31.254.12/24    |               |             |           |               |
   192.168.20.123/28   |               |             |           |               |
   63.24.89.21/18      |               |             |           |               |
   128.1.1.254/20      |               |             |           |               |
   208.100.54.209/30   |               |             |           |               |
6. Consider the network shown below. Distance vector routing is used. Calculate C’s new routing table.


Unit IV - Transport Layer

Introduction

The transport layer is the heart of the protocol hierarchy. The transport layer builds on the network layer to provide
data transport from a process on a source machine to a process on a destination machine with a desired level of
reliability that is independent of the physical networks currently in use. It provides the abstractions that
applications need to use the network.

THE TRANSPORT SERVICE


Services Provided to the Upper Layers
The ultimate goal of the transport layer is to provide efficient, reliable, and cost-effective data transmission service
to its users, normally processes in the application layer. To achieve this, the transport layer makes use of the
services provided by the network layer. The software and/or hardware within the transport layer that does the
work is called the transport entity. The transport entity can be located in the operating system kernel, in a library
package bound into network applications, in a separate user process, or even on the network interface card.

There are two types of network service


 Connection-oriented
 Connectionless
Connectionless versus Connection-Oriented Service
A transport-layer protocol can be either connectionless or connection-oriented.

Connectionless Service

In a connectionless service, the packets are sent from one party to another with no need for connection
establishment or connection release. The packets are not numbered, so they may be delayed, lost, or arrive out of
sequence. There is no acknowledgement either. One of the transport-layer protocols, User Datagram Protocol
(UDP), is connectionless.


Connection-Oriented Service

In a connection-oriented service, a connection is first established between the sender and the receiver. Data are
then transferred. At the end, the connection is released. Transmission Control Protocol (TCP) is a connection-
oriented service.

Reliable versus Unreliable

The transport-layer service can be reliable or unreliable. If the application-layer protocol needs reliability, a
reliable transport-layer protocol is used to implement flow and error control. This means a slower and more
complex service. However, if the application program does not need reliability because it uses its own flow and
error control mechanism or if it needs fast service or the nature of the service does not demand flow and error
control (e.g. real time applications), an unreliable protocol can be used. There are two different transport-layer
protocols. UDP is connectionless and unreliable; TCP is connection-oriented and reliable.

Transport Service Primitives

 To allow users to access the transport service, the transport layer must provide some operations to
application programs, that is, a transport service interface. Each transport service has its own interface.

 The transport service is similar to the network service, but there are also some important differences.

 The main difference is that the network service is intended to model the service offered by real
networks. Real networks can lose packets, so the network service is generally unreliable.
 The (connection-oriented) transport service, in contrast, is reliable.

As an example, consider two processes connected by pipes in UNIX. They assume the connection between them
is perfect. They do not want to know about acknowledgements, lost packets, congestion, or anything like that.
What they want is a 100 percent reliable connection. Process A puts data into one end of the pipe, and process B
takes it out of the other.

A second difference between the network service and the transport service is whom the services are intended for. The
network service is used only by the transport entities, whereas the transport service is used by many application
programs and programmers. Consequently, the transport service must be convenient and easy to use.

Figure. The primitives for a simple transport service.

Eg: Consider an application with a server and a number of remote clients.



1. The server executes a “LISTEN” primitive by calling a library procedure that makes a
system call to block the server until a client turns up.
2. When a client wants to talk to the server, it executes a “CONNECT” primitive, with a “CONNECTION
REQUEST” TPDU sent to the server.
3. When it arrives, the transport entity unblocks the server and sends a “CONNECTION ACCEPTED” TPDU back to
the client.
4. When it arrives, the client is unblocked and the connection is established. Data can now be exchanged
using “SEND” and “RECEIVE” primitives.
5. When a connection is no longer needed, it must be released to free up table space within the two transport
entities, which is done with the “DISCONNECT” primitive by sending a “DISCONNECTION REQUEST”

 The term segment is used for messages sent from transport entity to transport entity.
 TCP, UDP and other Internet protocols use this term. Segments (exchanged by the transport layer) are
contained in packets (exchanged by the network layer).
 These packets are contained in frames (exchanged by the data link layer).When a frame arrives, the data
link layer processes the frame header and, if the destination address matches for local delivery, passes the
contents of the frame payload field up to the network entity.
 The network entity similarly processes the packet header and then passes the contents of the packet
payload up to the transport entity. This nesting is illustrated in Fig. 4.2.


Figure 4.3 - A state diagram for a simple connection management scheme. Transitions labelled in italics
are caused by packet arrivals. The solid lines show the client's state sequence. The dashed lines show
the server's state sequence.

In fig. 4.3 each transition is triggered by some event, either a primitive executed by the local transport user or an
incoming packet. For simplicity, we assume here that each TPDU is separately acknowledged. We also assume
that a symmetric disconnection model is used, with the client going first. Please note that this model is quite
unsophisticated. We will look at more realistic models later on.
These socket primitives are mainly used for TCP. They were first released as part of the Berkeley UNIX
4.2BSD software distribution in 1983 and quickly became popular. The primitives are now widely used for
Internet programming on many operating systems, especially UNIX -based systems, and there is a socket-style
API for Windows called ‘‘winsock.’’

The first four primitives in the list are executed in that order by servers.


The SOCKET primitive creates a new endpoint and allocates table space for it within the transport entity. The
parameters include the addressing format to be used, the type of service desired, and the protocol. Newly created
sockets do not have network addresses.

 The BIND primitive is used to connect the newly created sockets to an address. Once a server has bound
an address to a socket, remote clients can connect to it.
 Next comes the LISTEN call, which allocates space to queue incoming calls in case several clients try to
connect at the same time.
 The server executes an ACCEPT primitive to block waiting for an incoming connection.
On the client side, too, a socket must first be created using the SOCKET primitive.
 The CONNECT primitive blocks the caller and actively starts process. When it completes, the client
process is unblocked and the connection is established.
 Both sides can now use SEND and RECEIVE to transmit and receive data over the full-duplex connection.
 Connection release with sockets is symmetric. When both sides have executed a CLOSE primitive, the
connection is released.
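The server-side sequence SOCKET, BIND, LISTEN, ACCEPT and the client-side SOCKET, CONNECT map directly onto the Berkeley socket API. A minimal echo pair sketched in Python (the loopback address, port choice, and message are illustrative):

```python
import socket
import threading

def echo_server(ready):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # SOCKET
    srv.bind(('127.0.0.1', 0))        # BIND to any free local port
    srv.listen(1)                     # LISTEN: queue incoming calls
    srv.settimeout(5)
    ready['port'] = srv.getsockname()[1]
    ready['event'].set()              # tell the client which port to use
    conn, _ = srv.accept()            # ACCEPT: block for a connection
    data = conn.recv(1024)            # RECEIVE
    conn.sendall(data)                # SEND the same bytes back
    conn.close()                      # CLOSE this side of the connection
    srv.close()

def echo_client(port, msg):
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # SOCKET
    cli.settimeout(5)
    cli.connect(('127.0.0.1', port))  # CONNECT: active open
    cli.sendall(msg)                  # SEND
    reply = cli.recv(1024)            # RECEIVE
    cli.close()                       # CLOSE
    return reply
```

Release is symmetric, as the text notes: each side issues its own close, and the connection is fully released only when both have done so.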

ELEMENTS OF TRANSPORT PROTOCOLS


The transport service is implemented by a transport protocol used between the two transport entities.
Transport protocols resemble data link protocols: both have to deal with error control, sequencing, and flow
control, among other issues. The differences between them are due to major dissimilarities between the
environments in which the two protocols operate, as shown in Fig.
At the data link layer, two routers communicate directly via a physical channel, whether wired or wireless,
whereas at the transport layer, this physical channel is replaced by the entire network. This difference has many
important implications for the protocols.

In the data link layer, it is not necessary for a router to specify which router it wants to talk to. In the
transport layer, explicit addressing of destinations is required.
In the transport layer, initial connection establishment is more complicated, as we will see. Another difference
between the data link layer and the transport layer is the potential existence of storage capacity in the subnet.


Buffering and flow control are needed in both layers, but the presence of a large and dynamically varying
number of connections in the transport layer may require a different approach than we used in the data link layer.
The transport service is implemented by a transport protocol between the 2 transport entities.

Figure 6.8 illustrates the relationship between the NSAP, TSAP and transport connection. Application
processes, both clients and servers, can attach themselves to a TSAP to establish a connection to a remote TSAP.
These connections run through NSAPs on each host, as shown. The purpose of having TSAPs is that in
some networks, each computer has a single NSAP, so some way is needed to distinguish multiple transport end
points that share that NSAP.

The duties of transport layer protocols are:


1. Process to Process Communication
2. Addressing
3. Connection Establishment.
4. Connection Release.
5. Error control and flow control
6. Multiplexing.
7. Congestion control

The Process to Process Communication


The data link layer is responsible for delivery of frames between two neighboring nodes over a link. This
is called node-to-node delivery. The network layer is responsible for delivery of datagrams between two hosts.
This is called host-to-host delivery. Real communication takes place between two processes (application
programs). We need process-to-process delivery. The transport layer is responsible for process-to-process
delivery-the delivery of a packet, part of a message, from one process to another. Figure 4.1 shows these three
types of deliveries and their domains


Client/Server Paradigm
Although there are several ways to achieve process-to-process communication, the most common one is through
the client/server paradigm. A process on the local host, called a client, needs services from a process usually on
the remote host, called a server. Both processes (client and server) have the same name. For example, to get the
day and time from a remote machine, we need a Daytime client process running on the local host and a Daytime
server process running on a remote machine. For communication, we must define the following:

Addressing
Whenever we need to deliver something to one specific destination among many, we need an address. At the data
link layer, we need a MAC address to choose one node among several nodes if the connection is not point-to-
point. A frame in the data link layer needs a Destination MAC address for delivery and a source address for the
next node's reply.

Figure 4.2 shows this concept.


The IP addresses and port numbers play different roles in selecting the final destination of data. The destination
IP address defines the host among the different hosts in the world. After the host has been selected, the port
number defines one of the processes on this particular host (see Figure 4.3).


IANA Ranges
The IANA (Internet Assigned Numbers Authority) has divided the port numbers into three ranges: well known,
registered, and dynamic (or private), as shown in Figure 4.4.
Well-known ports. The ports ranging from 0 to 1023 are assigned and controlled by IANA. These are the well-
known ports.
Registered ports. The ports ranging from 1024 to 49,151 are not assigned or controlled by IANA. They can only
be registered with IANA to prevent duplication.
Dynamic ports. The ports ranging from 49,152 to 65,535 are neither controlled nor registered. They can be used
by any process. These are the ephemeral ports.

Socket Addresses
Process-to-process delivery needs two identifiers, IP address and the port number, at each end to make a
connection. The combination of an IP address and a port number is called a socket address. The client socket
address defines the client process uniquely just as the server socket address defines the server process uniquely
(see Figure 4.5).
The UDP or TCP header contains the port numbers.


Connection establishment:

With packet lifetimes bounded, it is possible to devise a foolproof way to establish connections safely. Packet
lifetime can be bounded to a known maximum using one of the following techniques:
 Restricted subnet design
 Putting a hop counter in each packet
 Time stamping in each packet

Using a three-way handshake, a connection can be established. This establishment protocol doesn’t require both sides
to begin sending with the same sequence number.
The first technique includes any method that prevents packets from looping, combined with some way of
bounding delay including congestion over the longest possible path. It is difficult, given that internets may range
from a single city to international in scope.
The second method consists of having the hop count initialized to some appropriate value and decremented each
time the packet is forwarded. The network protocol simply discards any packet whose hop counter becomes zero.
The third method requires each packet to bear the time it was created, with the routers agreeing to discard any
packet older than some agreed-upon time.
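The second technique, the hop counter, can be sketched in a few lines (field names and the initial value are illustrative):

```python
MAX_HOPS = 255  # hypothetical initial value chosen by the sender

def forward(packet):
    """Each router decrements the hop counter before forwarding.
    When the counter reaches zero, the network protocol simply
    discards the packet, which bounds its lifetime."""
    packet['hops'] -= 1
    return packet if packet['hops'] > 0 else None
```

A packet that loops indefinitely therefore dies after at most MAX_HOPS forwarding steps, giving the known maximum lifetime the handshake relies on.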

In fig (A) Tomlinson (1975) introduced the three-way handshake.

 This establishment protocol involves one peer checking with the other that the connection request is
indeed current. Host 1 chooses a sequence number, x , and sends a CONNECTION REQUEST segment


containing it to host 2. Host 2 replies with an ACK segment acknowledging x and announcing its own
initial sequence number, y.
 Finally, host 1 acknowledges host 2’s choice of an initial sequence number in the first data segment that
it sends

In fig (B) the first segment is a delayed duplicate CONNECTION REQUEST from an old connection.
 This segment arrives at host 2 without host 1’s knowledge. Host 2 reacts to this segment by sending
host 1 an ACK segment, in effect asking for verification that host 1 was indeed trying to set up a new
connection.
 When host 1 rejects host 2’s attempt to establish a connection, host 2 realizes that it was tricked by a
delayed duplicate and abandons the connection. In this way, a delayed duplicate does no damage.
 The worst case is when both a delayed CONNECTION REQUEST and an ACK are floating around in
the subnet.
In fig (C), as in the previous example, host 2 gets a delayed CONNECTION REQUEST and replies to it.

 At this point, it is crucial to realize that host 2 has proposed using y as the initial sequence number for
host 2 to host 1 traffic, knowing full well that no segments containing sequence number y or
acknowledgements to y are still in existence.
 When the second delayed segment arrives at host 2, the fact that z has been acknowledged rather than y
tells host 2 that this, too, is an old duplicate.
 The important thing to realize here is that there is no combination of old segments that can cause the
protocol to fail and have a connection set up by accident when no one wants it.
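The exchange in fig (A), and host 2's rejection of the stale duplicate in fig (C), can be traced with a small sketch (the segment tuples and function names are illustrative, not a protocol implementation):

```python
def three_way_handshake(x, y):
    """Trace the segments of Tomlinson's three-way handshake.
    Host 1 proposes x; host 2 acknowledges x and proposes y; host 1's
    first data segment acknowledges y, proving the request is current."""
    cr = ('CR', x)              # CONNECTION REQUEST carrying seq = x
    ack = ('ACK', x, y)         # host 2 acknowledges x, announces y
    data = ('DATA', x + 1, y)   # host 1's data segment acknowledges y
    return [cr, ack, data]

def host2_accepts(data_segment, y):
    """Host 2 commits only if the data segment acknowledges its own y.
    A delayed duplicate carries a stale number (z, not y) and is
    detected as an old duplicate, so the connection is abandoned."""
    return data_segment[2] == y
```

This is exactly why no combination of old segments can set up a connection by accident: a stale segment can never acknowledge the freshly chosen y.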

Connection release
A connection is released using either the asymmetric or the symmetric variant. The improved protocol for
releasing a connection is a three-way handshake protocol.
There are two styles of terminating a connection:
 1) Asymmetric release and
 2) Symmetric release.
Asymmetric release is the way the telephone system works: when one party hangs up, the connection is broken.
Symmetric release treats the connection as two separate unidirectional connections and requires each one to be
released separately.


Fig-(a): One of the users sends a DISCONNECTION REQUEST (DR) TPDU to initiate connection release. When it
arrives, the recipient sends back a DR TPDU, too, and starts a timer. When this DR arrives, the original sender
sends back an ACK TPDU and releases the connection. Finally, when the ACK TPDU arrives, the receiver also
releases the connection.
Fig-(b): The initial exchange proceeds as in fig-(a). If the final ACK TPDU is lost, the situation is saved by
the timer: when the timer expires, the connection is released anyway.
Fig-(c): If the second DR is lost, the user initiating the disconnection does not receive the expected response,
times out, and starts all over again.
Fig-(d): Same as fig-(c), except that all repeated attempts to retransmit the DR are assumed to fail due to lost
TPDUs. After ‘N’ attempts, the sender just gives up and releases the connection.

Flow control
Flow control is used to prevent the sender from overwhelming the receiver. If the receiver is overloaded
with too much data, it discards packets and asks for their retransmission. This increases network congestion
and thus reduces system performance. The transport layer is responsible for flow control. It uses the sliding
window protocol, which makes data transmission more efficient and controls the flow of data so that the
receiver does not become overwhelmed. The sliding window protocol is byte oriented rather than frame oriented.
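The sender-side constraint of a byte-oriented sliding window can be sketched as a single rule (variable names are illustrative):

```python
def window_allowance(last_byte_sent, last_byte_acked, recv_window):
    """Byte-oriented sliding window: the number of unacknowledged
    bytes in flight may never exceed the window the receiver
    advertised. Returns how many more bytes may be sent now."""
    in_flight = last_byte_sent - last_byte_acked
    return max(recv_window - in_flight, 0)
```

When the receiver's advertised window is already full, the allowance is zero and the sender must wait for acknowledgements before transmitting more, which is precisely how the receiver avoids being overwhelmed.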

Error Control
 The primary role of reliability is Error Control. In reality, no transmission is 100 percent error-free.
Therefore, transport layer protocols are designed to provide error-free transmission.
 The data link layer also provides the error handling mechanism, but it ensures only node-to-node error-free
delivery. However, node-to-node reliability does not ensure the end-to-end reliability.
 The data link layer checks for the error between each network. If an error is introduced inside one of the
routers, then this error will not be caught by the data link layer. It only detects those errors that have been
introduced between the beginning and end of the link. Therefore, the transport layer performs the checking
for the errors end-to-end to ensure that the packet has arrived correctly.

Sequence Control
 The second aspect of the reliability is sequence control which is implemented at the transport layer.
 On the sending end, the transport layer is responsible for ensuring that the packets received from the upper
layers can be used by the lower layers. On the receiving end, it ensures that the various segments of a
transmission can be correctly reassembled.
Loss Control
 Loss Control is a third aspect of reliability. The transport layer ensures that all the fragments of a transmission
arrive at the destination, not some of them. On the sending end, all the fragments of transmission are given
sequence numbers by a transport layer. These sequence numbers allow the receiver’s transport layer to
identify the missing segment.
Duplication Control
 Duplication Control is the fourth aspect of reliability. The transport layer guarantees that no duplicate data
arrive at the destination. Sequence numbers are used to identify the lost packets; similarly, it allows the
receiver to identify and discard duplicate segments.

Multiplexing:
In networks that use virtual circuits within the subnet, each open connection consumes some table space
in the routers for the entire duration of the connection. If buffers are dedicated to the virtual circuit in each router
as well, a user who leaves a terminal logged into a remote machine with no traffic still ties up router resources.
To avoid this waste, multiplexing is needed. There are two kinds of multiplexing:

(a). UPWARD MULTIPLEXING:


In the figure below, all four distinct transport connections use the same network connection to the
remote host. When connect time forms the major component of the carrier’s bill, it is up to the transport layer to
group transport connections according to their destination and map each group onto the minimum number of
network connections.

(b). DOWNWARD MULTIPLEXING:


 If too many transport connections are mapped onto the one network connection, the performance will be
poor.
 If too few transport connections are mapped onto one network connection, the service will be expensive.
The possible solution is to have the transport layer open multiple connections and distribute the traffic among
them on round-robin basis, as indicated in the below figure:
With ‘k’ network connections open, the effective bandwidth is increased by a factor of ‘k’.

Congestion Control

Congestion is a situation in which too many sources over a network attempt to send data and the router buffers
start overflowing due to which loss of packets occurs. As a result, the retransmission of packets from the sources
increases the congestion further. In this situation, the Transport layer provides Congestion Control in different
ways. It uses open-loop congestion control to prevent congestion and closed-loop congestion control to remove
the congestion in a network once it has occurred. TCP provides AIMD (additive increase, multiplicative decrease)
and the leaky bucket technique for congestion control.
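The AIMD rule can be sketched as a one-step update of the congestion window (this is a simplification; real TCP also distinguishes slow start, fast retransmit, and fast recovery):

```python
def aimd_step(cwnd, loss_detected):
    """TCP's AIMD rule in one step: add one segment per RTT while no
    loss is seen (additive increase); halve the window when loss is
    detected (multiplicative decrease), with a floor of one segment."""
    return max(cwnd // 2, 1) if loss_detected else cwnd + 1
```

Repeated application produces the familiar sawtooth: the window climbs linearly until a loss signals congestion, then drops by half and climbs again.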


TRANSPORT PROTOCOLS - UDP


The Internet has two main protocols in the transport layer, a connectionless protocol and a connection-
oriented one. The protocols complement each other. The connectionless protocol is UDP. It does almost nothing
beyond sending packets between applications, letting applications build their own protocols on top as needed.
The connection-oriented protocol is TCP. It does almost everything. It makes connections and adds
reliability with retransmissions, along with flow control and congestion control, all on behalf of the applications
that use it. Since UDP is a transport layer protocol that typically runs in the operating system, and protocols that
use UDP typically run in user space, these uses might be considered applications.

INTRODUCTION TO UDP
 The Internet protocol suite supports a connectionless transport protocol called UDP (User Datagram Protocol).
UDP provides a way for applications to send encapsulated IP datagrams without having to establish a
connection.
 UDP transmits segments consisting of an 8-byte header followed by the payload. The two ports serve to
identify the end-points within the source and destination machines.
 When a UDP packet arrives, its payload is handed to the process attached to the destination port. This
attachment occurs when the BIND primitive is used. Without the port fields, the transport layer would not know what
to do with each incoming packet. With them, it delivers the embedded segment to the correct application.

 Source port, destination port: Identifies the end points within the source and destination machines.
 UDP length: Includes 8-byte header and the data
 UDP checksum: Includes the UDP header, the UDP data padded out to an even number of bytes if need be.
It is an optional field
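The 8-byte header layout above can be made concrete by packing it with Python's struct module (the port numbers and payload are illustrative; the checksum is left at 0, i.e. unused, since the field is optional):

```python
import struct

def build_udp_header(src_port, dst_port, payload, checksum=0):
    """Pack the 8-byte UDP header: source port, destination port,
    length (header + payload), and checksum (0 = not computed)."""
    length = 8 + len(payload)
    return struct.pack("!HHHH", src_port, dst_port, length, checksum)

header = build_udp_header(5000, 53, b"query")
print(len(header))                               # 8
src, dst, length, csum = struct.unpack("!HHHH", header)
print(src, dst, length, csum)                    # 5000 53 13 0
```

Unpacking the header back shows the length field counting the 8 header bytes plus the 5-byte payload.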

REMOTE PROCEDURE CALL


 In a certain sense, sending a message to a remote host and getting a reply back is like making a function call
in a programming language. The idea is to arrange for request-reply interactions on networks to be cast in the
form of procedure calls.
 For example, imagine a procedure named get_IP_address(host_name) that works by sending a UDP packet
to a DNS server and waiting for the reply, timing out and trying again if one is not forthcoming quickly enough.
In this way, all the details of networking can be hidden from the programmer.
 RPC is used to call remote programs using the procedural call. When a process on machine 1 calls a procedure
on machine 2, the calling process on 1 is suspended and execution of the called procedure takes place on 2.

Dr. M. Raja, KARE/CSE Page 145 of 155


212CSE3302 – Computer Networks

 Information can be transported from the caller to the callee in the parameters and can come back in the
procedure result. No message passing is visible to the application programmer. This technique is known as
RPC (Remote Procedure Call) and has become the basis for many networking applications.
 Traditionally, the calling procedure is known as the client and the called procedure is known as the server.
 In the simplest form, to call a remote procedure, the client program must be bound with a small library
procedure, called the client stub, that represents the server procedure in the client’s address space. Similarly,
the server is bound with a procedure called the server stub. These procedures hide the fact that the procedure
call from the client to the server is not local.

Step 1 is the client calling the client stub. This call is a local procedure call, with the parameters pushed onto the
stack in the normal way.
Step 2 is the client stub packing the parameters into a message and making a system call to send the message.
Packing the parameters is called marshaling.
Step 3 is the operating system sending the message from the client machine to the server machine.
Step 4 is the operating system passing the incoming packet to the server stub.
Step 5 is the server stub calling the server procedure with the unmarshaled parameters. The reply traces the same
path in the other direction.

The key item to note here is that the client procedure, written by the user, just makes a normal (i.e., local) procedure
call to the client stub, which has the same name as the server procedure. Since the client procedure and client stub
are in the same address space, the parameters are passed in the usual way.
Similarly, the server procedure is called by a procedure in its address space with the parameters it expects. To the
server procedure, nothing is unusual. In this way, instead of I/O being done on sockets, network communication
is done by faking a normal procedure call. With RPC, passing pointers is impossible because the client and server
are in different address spaces.
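The five steps can be sketched as a toy RPC in Python: json stands in for a real marshaling format, a direct function call stands in for the OS carrying the message across the network, and the procedure name and registry are invented for illustration.

```python
import json

# The "server procedure" the client wants to call remotely.
def add(a, b):
    return a + b

REGISTRY = {"add": add}          # procedures the server exposes

def client_stub(proc_name, *args):
    """Step 2: marshal the procedure name and parameters into a message."""
    request = json.dumps({"proc": proc_name, "args": args}).encode()
    reply = transport(request)   # steps 3-4: message crosses the network
    return json.loads(reply.decode())["result"]

def server_stub(request):
    """Step 5: unmarshal, call the real procedure, marshal the result."""
    msg = json.loads(request.decode())
    result = REGISTRY[msg["proc"]](*msg["args"])
    return json.dumps({"result": result}).encode()

def transport(request):
    # Stand-in for the operating systems sending the message
    # from the client machine to the server machine and back.
    return server_stub(request)

print(client_stub("add", 2, 3))   # → 5
```

Note that only built-in values cross the "network" here; as the text says, pointers could not be marshaled this way because the two stubs would live in different address spaces.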

TCP (TRANSMISSION CONTROL PROTOCOL)

It was specifically designed to provide a reliable end-to end byte stream over an unreliable network. It was
designed to adapt dynamically to properties of the inter network and to be robust in the face of many kinds of
failures.

Each machine supporting TCP has a TCP transport entity, which accepts user data streams from local processes,
breaks them up into pieces not exceeding 64 KB, and sends each piece as a separate IP datagram. When these
datagrams arrive at a machine, they are given to the TCP entity, which reconstructs the original byte streams. It is
up to TCP to time out and retransmit them as needed, and to reassemble datagrams into messages in the proper
sequence.
The different issues to be considered are:
1. The TCP Service Model
2. The TCP Protocol
3. The TCP Segment Header
4. The Connection Management
5. TCP Transmission Policy
6. TCP Congestion Control
7. TCP Timer Management.
The TCP Service Model
TCP service is obtained by having both the sender and receiver create end points called SOCKETS.
Each socket has a socket number (address) consisting of the IP address of the host and a 16-bit number local to
that host, called a PORT (a port is the TCP name for a TSAP).
To obtain TCP service, a connection must be explicitly established between a socket on the sending machine and
a socket on the receiving machine.
All TCP connections are full duplex and point to point; multicasting or broadcasting is not supported.
A TCP connection is a byte stream, not a message stream: message boundaries are not preserved end to end.
E.g., if 4 * 512 bytes of data are written by the sender, the receiver may read them as anything from one
2048-byte chunk to many smaller chunks.

Sockets:
A socket may be used for multiple connections at the same time. In other words, two or more connections may
terminate at the same socket. Connections are identified by the socket identifiers at both ends. Some well-known
ports are listed below:
Eg:
PORT-21 To establish a connection to a host to transfer a file using FTP
PORT-23 To establish a remote login session using TELNET

The TCP Protocol


A key feature of TCP, and one which dominates the protocol design, is that every byte on a TCP
connection has its own 32-bit sequence number.

 When the Internet began, the lines between routers were mostly 56-kbps leased lines, so a host blasting away
at full speed took over 1 week to cycle through the sequence numbers.
 The basic protocol used by TCP entities is the sliding window protocol.
 When a sender transmits a segment, it also starts a timer.
 When the segment arrives at the destination, the receiving TCP entity sends back a segment (with data if any
exist, otherwise without data) bearing an acknowledgement number equal to the next sequence number it
expects to receive.
 If the sender's timer goes off before the acknowledgement is received, the sender transmits the segment again.

THE TCP SEGMENT HEADER


Every segment begins with a fixed-format, 20-byte header. The fixed header may be followed by header
options. After the options, if any, up to 65,535 - 20 - 20 = 65,495 data bytes may follow, where the first 20 refer
to the IP header and the second to the TCP header. Segments without any data are legal and are commonly used
for acknowledgements and control messages.
A TCP segment's header can be anywhere from 20 to 60 bytes long. Up to 40 bytes are available for
the options field, which is located at the end of the TCP header. The header is 20 bytes if there is no options field;
otherwise, it can be up to 60 bytes.

Header Fields
Source port- It is a 16-bit field that holds the port address of the application sending the data.
Destination Port- It is a 16-bit field that holds the port address of the application receiving the data.
Sequence Number- It is a 32-bit field used to keep track of the bytes sent. Each byte in a TCP stream is uniquely
identified by its sequence number.
Acknowledgment number- It is a 32-bit field that contains the acknowledgment number or the byte number that
the receiver expects to receive next. It works as an acknowledgment for the previous data received successfully.


Header Length (HLEN)- The header length is a 4-bit field that specifies the length of the TCP header in 32-bit
words. It helps in knowing where the actual data begins.
Flags- There are six control flags or bits:
URG: It indicates an urgent pointer. If URG is set, then the data is processed urgently.
ACK: It represents the acknowledgment field in a segment. If the ACK is set to 0, the data packet does not
contain an acknowledgment.
RST: It Resets the connection. If RST is set, then it requests to restart a connection.
PSH: If this field is set, the receiving device is requested to push the data directly to the receiver without
buffering it.
      SYN: It initiates and establishes a connection between the hosts. If SYN is set, the device wants to establish
a connection; else, not.
FIN: It is used to terminate a connection. If FIN is 1, the device wants to terminate the connection; else, not.
Checksum - It is a 16-bit field used to detect errors in the header and data. The checksum is optional in UDP but
mandatory in TCP.
Window size - It is a 16-bit field. This field specifies the amount of data (in bytes) that the receiver can accept.
Urgent pointer - This field (valid only if the URG flag is set to 1) points to urgent data that must be processed as
soon as possible. Its value is added to the sequence number to get the last urgent byte's sequence number.
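A minimal sketch of the fixed 20-byte header, packed with Python's struct module; the flag bit positions follow the standard TCP layout (FIN=0x01, SYN=0x02, ..., URG=0x20), while the port, sequence, and window values are illustrative:

```python
import struct

SYN, ACK = 0x02, 0x10   # two of the six control-flag bit masks

def build_tcp_header(src_port, dst_port, seq, ack, flags,
                     window, checksum=0, urgent=0):
    """Pack the fixed 20-byte TCP header. HLEN is 5 (five 32-bit
    words = 20 bytes, no options); it shares a 16-bit word with
    the flag bits, so it is shifted into the upper 4 bits."""
    offset_flags = (5 << 12) | flags
    return struct.pack("!HHIIHHHH", src_port, dst_port, seq, ack,
                       offset_flags, window, checksum, urgent)

hdr = build_tcp_header(49152, 80, seq=2000, ack=0, flags=SYN, window=4096)
print(len(hdr))   # 20
```

Unpacking the same bytes with `struct.unpack("!HHIIHHHH", hdr)` recovers each field, which is essentially what a receiving TCP entity does.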

TCP 3-WAY HANDSHAKE PROCESS


TCP 3-way handshake process is used for establishing and terminating the connection between the client and
server.

Steps of a 3-Way Handshake for Establishing the Connection


The three steps involved in establishing a connection using the 3-way handshake process in TCP are as follows:
1. The client sends the SYN (synchronize) message to the server: When a client requests to connect to a
server, it sends the message to the server with the SYN flag set to 1. The message also includes:
 The sequence number (any random 32-bit number).
 The ACK (which is set to 0 in this case).
 The window size.
      The maximum segment size. For example, if the window size is 3000 bytes and the maximum segment
size is 300 bytes, the connection can send a maximum of 10 data segments (3000/300 = 10).
2. The server responds with the SYN and the ACK (synchronize-acknowledge) message to the client: After
receiving the synchronization request, the server sends the client an acknowledgment by changing
the ACK flag to '1'. The ACK's acknowledgment number is one higher than the sequence number received.
If the client sends a SYN with a sequence number of 2000, the server will send
the ACK using acknowledgment number = 2001. If the server wants to create the connection, it sets


the SYN flag to '1' and transmits it to the client. The SYN sequence number used here will be different
from the SYN used by the client. The server also informs the client of its window size and maximum
segment size. After this step is completed, the connection is established from the client to the server.
3. The client sends the ACK (acknowledge) message to the server: The client will set the ACK flag to '1'
after receiving the SYN from the server and transmits it with an acknowledgment number 1 greater than
the server's SYN sequence number. The SYN flag has been set to '0' in this case. The connection between
the server and the client is now formed after this phase is completed.
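The sequence/acknowledgment arithmetic of the three steps can be simulated in a few lines of Python; the initial sequence numbers are the illustrative values used above, and each side acknowledges the other's sequence number plus one because a SYN consumes one sequence number:

```python
def three_way_handshake(client_isn, server_isn):
    """Simulate only the seq/ack bookkeeping of the 3-way handshake."""
    syn     = {"flags": "SYN",     "seq": client_isn}          # step 1
    syn_ack = {"flags": "SYN+ACK", "seq": server_isn,          # step 2
               "ack": syn["seq"] + 1}
    ack     = {"flags": "ACK",     "seq": client_isn + 1,      # step 3
               "ack": syn_ack["seq"] + 1}
    return syn, syn_ack, ack

syn, syn_ack, ack = three_way_handshake(client_isn=2000, server_isn=5000)
print(syn_ack["ack"])   # 2001
print(ack["ack"])       # 5001
```

The same arithmetic, with FIN in place of SYN, governs the connection-termination handshake described next.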

Steps of a 3-Way Handshake for Terminating the Connection

Most implementations today allow three-way and four-way handshaking with a half-close option for connection
termination. Here we only mentioned the steps of three-way handshaking for connection termination. The three
steps involved in terminating a connection using the 3-way handshake process in TCP are as follows:
1. The client sends the FIN (finish) message to the server: When the client decides to disconnect from the
network, it transmits the message to the server with a random sequence number and sets the FIN flag to
'1'. ACK is set to 0 in this case.
2. The server responds with the FIN and the ACK (finish-acknowledge) message to the client: After
receiving the request, the server acknowledges the client's termination request by changing the ACK flag
to '1'. The ACK's acknowledgment number is one higher than the sequence number received. If the client
sends a FIN with a sequence number of 2000, the server will send the ACK using acknowledgment
number = 2001. If the server also decides to terminate the connection, it sets the FIN flag to '1' and
transmits it to the client. The FIN sequence number used here will be different from the FIN used by the
client. After this step is completed, the connection between the client to the server is disconnected.
3. The client sends the ACK (acknowledge) message to the server: The client will set the ACK flag to '1'
after receiving the FIN from the server and transmits it with an acknowledgment number 1 greater than
the server's FIN sequence number. The FIN flag is set to '0' in this case. After this step is completed, the
connection is also disconnected from the server to the client.


TCP Connection Management Modeling


The steps required to establish and release connections can be represented as a finite state machine with the 11
states listed in Fig. 4.13. In each state, certain events are legal. When a legal event happens, some action may be
taken. If some other event happens, an error is reported.

TCP Connection management from server’s point of view:


1. The server does a LISTEN and settles down to see who turns up.
2. When a SYN comes in, the server acknowledges it and goes to the SYN RCVD state.
3. When the server's SYN is itself acknowledged, the 3-way handshake is complete and the server goes to the
ESTABLISHED state. Data transfer can now occur.
4. When the client has had enough, it does a CLOSE, which causes a FIN to arrive at the server [the dashed box
marked passive close].
5. The server is then signaled.
6. When it, too, does a CLOSE, a FIN is sent to the client.
7. When the client's acknowledgement shows up, the server releases the connection and deletes the
connection record.


TCP Transmission Policy

1. In the above example, the receiver has a 4096-byte buffer.


2. If the sender transmits a 2048-byte segment that is correctly received, the receiver will acknowledge the
segment.
3. Now the receiver will advertise a window of 2048, as it now has only 2048 bytes of buffer space left.


4. Now the sender transmits another 2048 bytes, which are acknowledged, but the advertised window is '0'.
5. The sender must stop until the application process on the receiving host has removed some data from the buffer,
at which time TCP can advertise a larger window.
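The steps above amount to simple window bookkeeping, sketched below in Python with the 4096-byte buffer from the example (the class and method names are invented for illustration):

```python
class ReceiverBuffer:
    """Track the advertised window for a fixed-size receive buffer."""
    def __init__(self, size):
        self.size = size
        self.used = 0

    def receive(self, nbytes):
        self.used += nbytes           # segment stored in the buffer

    def consume(self, nbytes):
        self.used -= nbytes           # application reads data out

    @property
    def window(self):
        return self.size - self.used  # value advertised in each ACK

buf = ReceiverBuffer(4096)
buf.receive(2048); print(buf.window)   # 2048
buf.receive(2048); print(buf.window)   # 0: the sender must stop
buf.consume(2048); print(buf.window)   # 2048: a larger window again
```

When the window reaches 0 the sender is blocked until the application frees buffer space, exactly as in step 5.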

TCP CONGESTION CONTROL:

TCP tries to prevent congestion from occurring in the first place in the following way:
When a connection is established, a suitable window size is chosen and the receiver specifies a window based on
its buffer size. If the sender sticks to this window size, problems will not occur due to buffer overflow at the
receiving end, but they may still occur due to internal congestion within the network. Let us see how this problem
occurs.

In fig (a): We see a thick pipe leading to a small-capacity receiver. As long as the sender does not send more water
than the bucket can contain, no water will be lost.
In fig (b): The limiting factor is not the bucket capacity, but the internal carrying capacity of the network. If too
much water comes in too fast, it will back up and some will be lost.
 When a connection is established, the sender initializes the congestion window to the size of the maximum
segment in use on the connection.
 It then sends one maximum segment. If this segment is acknowledged before the timer goes off, it adds one
segment's worth of bytes to the congestion window to make it two maximum-size segments and sends two
segments.
 As each of these segments is acknowledged, the congestion window is increased by one max segment
size.
 When the congestion window is ‘n’ segments, if all ‘n’ are acknowledged on time, the congestion window
is increased by the byte count corresponding to ‘n’ segments.
 The congestion window keeps growing exponentially until either a time out occurs or the receiver’s
window is reached.


 The Internet congestion control algorithm uses a third parameter, the "threshold", in addition to the receiver
and congestion windows.
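The exponential growth described above (slow start) can be simulated in Python; the MSS value and the window sizes are illustrative assumptions:

```python
MSS = 1024  # maximum segment size in bytes (assumed value)

def slow_start(receiver_window, threshold=None):
    """Grow the congestion window by one MSS per acknowledged
    segment -- i.e. it doubles every round trip -- until the
    receiver's window or, if set, the threshold is reached."""
    limit = receiver_window if threshold is None else min(receiver_window,
                                                          threshold)
    cwnd, history = MSS, []
    while cwnd < limit:
        history.append(cwnd)
        cwnd = min(cwnd * 2, limit)   # all segments ACKed on time
    history.append(cwnd)
    return history

print(slow_start(receiver_window=16 * MSS))
# [1024, 2048, 4096, 8192, 16384]
```

Passing a `threshold` caps the growth earlier, which is where the third parameter mentioned above comes in; a timeout (not modeled here) would reset the window and halve the threshold.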

TCP TIMER MANAGEMENT:


TCP uses 3 kinds of timers:
1. Retransmission timer
2. Persistence timer
3. Keep-Alive timer.
Retransmission timer:
When a segment is sent, a timer is started. If the segment is acknowledged before the timer expires, the timer is
stopped. If on the other hand, the timer goes off before the acknowledgement comes in, the segment is
retransmitted and the timer is started again. The algorithm that constantly adjusts the time-out interval, based on
continuous measurements of network performance, was proposed by JACOBSON and works as follows:
 For each connection, TCP maintains a variable RTT, which is the best current estimate of the round trip time
to the destination in question.
 When a segment is sent, a timer is started, both to see how long the acknowledgement takes and to trigger a
retransmission if it takes too long.
 If the acknowledgement gets back before the timer expires, TCP measures how long the acknowledgement
took, say M.
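A sketch of Jacobson's estimator in Python: each measurement M updates the smoothed RTT, and the retransmission timeout is derived from it. The gains α = 7/8 and β = 3/4 are the classic values, and the mean-deviation term D (giving RTO = RTT + 4D) follows the standard refinement of the algorithm.

```python
def make_rto_estimator(alpha=7/8, beta=3/4):
    """Return an update function implementing Jacobson's algorithm:
    RTT is a smoothed average of the measurements M, D tracks the
    mean deviation, and the timeout is RTT + 4*D."""
    rtt, dev = None, 0.0

    def update(m):
        nonlocal rtt, dev
        if rtt is None:                      # first measurement
            rtt, dev = m, m / 2
        else:
            dev = beta * dev + (1 - beta) * abs(rtt - m)
            rtt = alpha * rtt + (1 - alpha) * m
        return rtt + 4 * dev                 # RTO for the next segment

    return update

update = make_rto_estimator()
for m in (100, 120, 110, 300):   # measured round trip times in ms
    rto = update(m)
print(round(rto, 1))
```

Note how the single 300 ms outlier inflates the deviation term and hence the timeout, which is exactly the behavior that makes the estimator robust to variable networks.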
Persistence timer:
 It is designed to prevent the following deadlock:
 The receiver sends an acknowledgement with a window size of '0', telling the sender to wait. Later, the receiver
updates the window, but the packet with the update is lost. Now both the sender and receiver are waiting for
each other to do something.
 When the persistence timer goes off, the sender transmits a probe to the receiver; the response to the probe
gives the window size.
 If it is still zero, the persistence timer is set again and the cycle repeats.
 If it is nonzero, data can now be sent.
Keep-Alive timer: When a connection has been idle for a long time, this timer may go off to cause one side to
check whether the other side is still there. If it fails to respond, the connection is terminated.
