UNIT-3 (Networking With TCP-IP Notes)

The document discusses several key aspects of TCP reliable data transfer: [1] TCP provides reliable data transfer between applications even though the underlying IP layer is unreliable. It uses mechanisms like checksums, acknowledgements, and retransmissions to ensure reliable delivery. [2] TCP establishes a connection using a three-way handshake to synchronize sequence numbers between the sender and receiver before transmitting data. [3] It uses a sliding window approach and acknowledgements to provide flow and congestion control and to retransmit lost or corrupted segments. Timers are used to trigger retransmissions if acknowledgements are not received within a timeout period. [4] TCP connections are terminated using a four-segment exchange of FIN and ACK segments, since each direction of the full-duplex connection is closed independently.

MADHAV INSTITUTE OF TECHNOLOGY & SCIENCE, GWALIOR

(A Govt. Aided UGC Autonomous Institute Affiliated to RGPV, Bhopal)


NAAC Accredited with A++ Grade

Networking With TCP/IP-150512


UNIT-3
Principle of TCP Reliable Data Transfer Protocol: -
Transport layer protocols are a central piece of layered architectures: they provide logical communication between application processes. Applications use this logical communication to hand data down to the network layer, and this transfer of data should be reliable and secure. The data is transferred in the form of packets, but the difficulty lies in transferring those packets reliably.

The problem of reliable transfer arises not only at the transport layer but also at the application layer and the link layer. It occurs whenever a reliable service runs on top of an unreliable one. For example, TCP (Transmission Control Protocol) is a reliable data transfer protocol implemented on top of an unreliable end-to-end network layer protocol, the Internet Protocol (IP).

Figure: Study of Reliable Data Transfer

In this model, we design the sender and receiver sides of a protocol over a reliable channel. For reliable data transfer, the sending side receives data from the layer above, breaks the message into segments, puts a header on each segment, and passes them down. The receiving side removes the header from each segment and reassembles the segments into the original message for the layer above.

Prof. Hemlata Arya Department of CSE Subject Code: -150512



With a reliable data transfer protocol, no transferred data bits are corrupted or lost, and all are delivered in the same sequence in which they were sent to the layer below. This is the service model TCP offers to the Internet applications that invoke it.

Figure: Study of Unreliable Data Transfer

Similarly, for an unreliable channel we design the sending and receiving sides. The sending side of the protocol is invoked from the layer above by a call to rdt_send(); it passes in the data to be delivered to the application layer at the receiving side (here rdt_send() is the sending-side function, where rdt stands for reliable data transfer protocol and _send() indicates the sending side).

On the receiving side, rdt_rcv() (the receiving-side function, where _rcv() indicates the receiving side) will be called when a packet arrives from the receiving side of the unreliable channel. When the rdt protocol wants to deliver data to the application layer, it does so by calling deliver_data() (the function that delivers data to the upper layer).

For the reliable data transfer protocol we consider only the case of unidirectional data transfer, that is, transfer of data from the sending side to the receiving side (only in one direction). The bidirectional case (full duplex, with data transferred on both sides) is conceptually more difficult. Although we consider only unidirectional data transfer, it is important to note that the sending and receiving sides of our protocol will still need to transmit packets in both directions, as shown in the figure above.


In order to exchange packets containing the data to be transferred, both the sending and receiving sides of rdt also need to exchange control packets in both directions (i.e., back and forth). Both sides of rdt send packets to the other side by a call to udt_send() (the function used for sending data over the channel, where udt stands for unreliable data transfer).
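The interfaces described above can be sketched as follows. This is a minimal rdt 1.0-style model over a perfect channel; the in-memory queue standing in for udt_send()'s channel and the dictionary packet format are illustrative assumptions, not part of any specification:

```python
from collections import deque

channel = deque()          # stands in for the channel between the two sides
delivered = []             # what the receiving application ends up with

def make_pkt(data):
    return {"payload": data}           # header fields omitted in this sketch

def udt_send(packet):
    channel.append(packet)             # hand the packet to the channel

def rdt_send(data):
    udt_send(make_pkt(data))           # called from the layer above (sender)

def deliver_data(data):
    delivered.append(data)             # hand data up to the application layer

def rdt_rcv(packet):
    deliver_data(packet["payload"])    # called when a packet arrives (receiver)

rdt_send("hello")
rdt_send("world")
while channel:
    rdt_rcv(channel.popleft())
print(delivered)  # -> ['hello', 'world']
```

Over a perfect channel this is all that is needed; the later mechanisms (checksums, ACKs, timers) exist precisely because the real channel below udt_send() can corrupt or lose packets.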

TCP Connection Establishment and Release: -


To make the transport services reliable, TCP hosts must establish a connection-oriented session with one another. Connection establishment is performed using the three-way handshake mechanism. A three-way handshake synchronizes both ends of a connection by enabling both sides to agree upon initial sequence numbers.

This mechanism also ensures that both sides are ready to transmit data and that each learns the other is available to communicate. This is essential so that packets are not transmitted or retransmitted during session establishment or after session termination. Each host randomly selects an initial sequence number used to track bytes within the stream it is sending and receiving.

The three-way handshake proceeds in the manner shown in the figure below −

The requesting end (Host A) sends a SYN segment specifying the server's port number that the client wants to connect to, along with its initial sequence number (x).


The server (Host B) responds with its own SYN segment, containing the server's initial sequence number (y). The server also acknowledges the client's SYN by ACKing the client's sequence number plus one (x + 1).

A SYN consumes one sequence number. The client acknowledges this SYN from the server by ACKing the server's sequence number plus one (SEQ = x + 1, ACK = y + 1). This is how a TCP connection is established.
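The three steps can be traced with a small sketch. The segment tuples below are an illustrative simplification carrying only the flags, sequence number, and acknowledgment number:

```python
import random

def three_way_handshake():
    x = random.randint(0, 2**32 - 1)      # client's initial sequence number
    y = random.randint(0, 2**32 - 1)      # server's initial sequence number

    syn = ("SYN", x, None)                         # step 1: client -> server
    syn_ack = ("SYN+ACK", y, (x + 1) % 2**32)      # step 2: server -> client
    ack = ("ACK", (x + 1) % 2**32, (y + 1) % 2**32)  # step 3: client -> server
    return syn, syn_ack, ack

syn, syn_ack, ack = three_way_handshake()
assert syn_ack[2] == (syn[1] + 1) % 2**32   # server acknowledged client's SYN
assert ack[2] == (syn_ack[1] + 1) % 2**32   # client acknowledged server's SYN
```

The modulo keeps the arithmetic within the 32-bit sequence-number space, since sequence numbers wrap around.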

Connection Termination Protocol (Connection Release)

While it takes three segments to establish a connection, it takes four segments to terminate one. Because a TCP connection is full-duplex (that is, data flows in each direction independently of the other direction), each direction must be shut down independently.

The termination procedure for each host is shown in the figure. The rule is that either end can send a FIN when it has finished sending data.

When a TCP receives a FIN, it should notify the application that the other end has terminated
that data flow direction. The sending of a FIN is usually the result of the application issuing a
close.

The receipt of a FIN means only that there will be no more data flowing in that direction; a TCP can still send data after receiving a FIN. The end that issues the close first (i.e., sends the first FIN) performs the active close. The other end (which receives this FIN) performs the passive close.
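The four-segment sequence can be sketched as a trace. This is an illustrative listing of the segments exchanged, not a full state machine; the starting sequence numbers are arbitrary:

```python
def four_way_teardown(m, n):
    """m, n: current sequence numbers of the active (A) and passive (B) closers."""
    return [
        ("A->B", "FIN", m, None),        # active close: A has finished sending
        ("B->A", "ACK", n, m + 1),       # B acknowledges A's FIN
        ("B->A", "FIN", n, m + 1),       # later, B finishes sending too
        ("A->B", "ACK", m + 1, n + 1),   # A acknowledges B's FIN
    ]

for direction, flag, seq, ack in four_way_teardown(100, 300):
    print(direction, flag, "seq =", seq, "ack =", ack)
```

Note that between B's ACK and B's FIN, data may still flow from B to A; that half-open interval is exactly why each direction is closed separately.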


Sliding window concept for error Control: -


The TCP protocol has methods for detecting corrupted segments, missing segments, out-of-order segments, and duplicated segments.

Error control in TCP is mainly done through the use of three simple techniques:

Checksum – Every segment contains a checksum field which is used to find corrupted
segments. If the segment is corrupted, then that segment is discarded by the destination TCP
and is considered lost.

Acknowledgement – TCP has another mechanism called acknowledgement to affirm that the
data segments have been delivered. Control segments that contain no data but have sequence
numbers will be acknowledged as well but ACK segments are not acknowledged.

Retransmission – When a segment is missing, delayed in delivery, or found corrupted when checked by the receiver, that segment is retransmitted. Segments are retransmitted only on two events: when the sender receives three duplicate acknowledgements (ACKs) or when a retransmission timer expires.

Retransmission after RTO: TCP maintains one retransmission time-out (RTO) timer for all sent but not-yet-acknowledged segments. When the timer expires, the earliest outstanding segment is retransmitted. No timer is set for acknowledgements. In TCP, the RTO value is dynamic and is updated using the round-trip time (RTT) of segments. RTT is the time needed for a segment to reach the receiver and for an acknowledgement to be received by the sender.

Retransmission after three duplicate ACK segments: the RTO method works well when the value of RTO is small. If it is large, more time is needed to learn whether a segment has been delivered. Sometimes one segment is lost and the receiver receives so many out-of-order segments that they cannot all be buffered. To handle this situation, the three-duplicate-acknowledgement method is used: the missing segment is retransmitted immediately, instead of retransmitting segments that were already delivered. This is called fast retransmission because it makes it possible to retransmit a lost segment quickly instead of waiting for the timer to expire.
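The duplicate-ACK rule can be sketched as a sender-side check. This is a simplified model assuming cumulative ACK numbers; a real TCP also tracks which segment starts at the duplicated ACK number:

```python
def fast_retransmit_trigger(acks):
    """Return the sequence number to retransmit after 3 duplicate ACKs, else None."""
    dup_count = 0
    last_ack = None
    for ack in acks:
        if ack == last_ack:
            dup_count += 1
            if dup_count == 3:          # third duplicate of the same ACK
                return ack              # retransmit the segment starting at `ack`
        else:
            last_ack = ack
            dup_count = 0
    return None

# The receiver keeps ACKing 2000 because the segment starting at 2000 is missing:
assert fast_retransmit_trigger([1000, 2000, 2000, 2000, 2000]) == 2000
# In-order ACKs never trigger a fast retransmit:
assert fast_retransmit_trigger([1000, 2000, 3000]) is None
```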


Congestion Control and TCP timers: -


*Congestion Control
What is congestion?

Congestion is a state occurring in the network layer when the message traffic is so heavy that it slows down network response time.

Effects of Congestion

As delay increases, performance decreases.

If delay increases, retransmission occurs, making situation worse.

Congestion control algorithms

Congestion Control is a mechanism that controls the entry of data packets into the network,
enabling a better use of a shared network infrastructure and avoiding congestive collapse.

Congestive-Avoidance Algorithms (CAA) are implemented at the TCP layer as the mechanism
to avoid congestive collapse in a network.

There are two congestion control algorithms, which are as follows:

1. Leaky Bucket Algorithm


➢ The leaky bucket algorithm discovers its use in the context of network traffic shaping
or rate-limiting.
➢ A leaky bucket execution and a token bucket execution are predominantly used for
traffic shaping algorithms.
➢ This algorithm is used to control the rate at which traffic is sent to the network and
shape the burst traffic to a steady traffic stream.
➢ A disadvantage of the leaky-bucket algorithm is inefficient use of available network
resources.
➢ Large amounts of network resources, such as bandwidth, may not be used effectively.


Let us consider an example to understand

Imagine a bucket with a small hole in the bottom. No matter at what rate water enters the bucket, the outflow is at a constant rate. When the bucket is full, additional water entering spills over the sides and is lost.

Figure:1
Similarly, each network interface contains a leaky bucket and the following steps are
involved in leaky bucket algorithm:

✓ When a host wants to send a packet, the packet is thrown into the bucket.
✓ The bucket leaks at a constant rate, meaning the network interface transmits packets at
a constant rate.
✓ Bursty traffic is converted to a uniform traffic by the leaky bucket.
✓ In practice the bucket is a finite queue that outputs at a finite rate.
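The four steps above can be sketched as a finite queue drained at a constant rate. The capacity and rate values below are illustrative:

```python
from collections import deque

class LeakyBucket:
    def __init__(self, capacity, rate):
        self.queue = deque()        # the bucket: a finite queue of packets
        self.capacity = capacity    # maximum packets the bucket can hold
        self.rate = rate            # packets transmitted per tick (constant)

    def arrive(self, packet):
        if len(self.queue) < self.capacity:
            self.queue.append(packet)   # packet thrown into the bucket
            return True
        return False                    # bucket full: packet spills and is lost

    def tick(self):
        sent = []
        for _ in range(self.rate):      # leak at a constant rate
            if self.queue:
                sent.append(self.queue.popleft())
        return sent

bucket = LeakyBucket(capacity=3, rate=1)
accepted = [bucket.arrive(p) for p in ["p1", "p2", "p3", "p4"]]  # bursty arrival
print(accepted)            # the 4th packet spills: [True, True, True, False]
print(bucket.tick())       # one packet per tick: ['p1']
```

Note how the burst of four arrivals is smoothed into one departure per tick, which is exactly the shaping behaviour, and also the stated drawback: the output never speeds up even when capacity is available.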

2. Token bucket Algorithm


➢ The leaky bucket algorithm has a rigid output pattern at an average rate, independent
of the bursty traffic.
➢ In some applications, when large bursts arrive, the output should be allowed to speed
up. This calls for a more flexible algorithm, preferably one that never loses
information. The token bucket algorithm therefore finds its use in network traffic
shaping or rate limiting.


➢ It is a control algorithm that indicates when traffic may be sent, based on the presence
of tokens in the bucket.
➢ The bucket contains tokens. Each token permits a packet of a predetermined size; a
token is removed from the bucket for each packet transmitted.
➢ When tokens are present, a flow is allowed to transmit traffic.
➢ No token means the flow cannot send its packets. Hence, a flow can transmit traffic up
to its peak burst rate as long as there are enough tokens in the bucket.

Need of token bucket Algorithm: -

The leaky bucket algorithm enforces output pattern at the average rate, no matter how bursty
the traffic is. So, in order to deal with the bursty traffic we need a flexible algorithm so that the
data is not lost. One such algorithm is token bucket algorithm.

Steps of this algorithm can be described as follows:

➢ At regular intervals, tokens are thrown into the bucket.
➢ The bucket has a maximum capacity.
➢ If there is a ready packet, a token is removed from the bucket, and the packet is sent.
➢ If there is no token in the bucket, the packet cannot be sent.

Let’s understand with an example,

In the figure above we see a bucket holding three tokens, with five packets waiting to be transmitted. For a packet to be transmitted, it must capture and destroy one token. In figure (B) we see that three of the five packets have gotten through, but the other two are stuck waiting for more tokens to be generated.

Ways in which the token bucket is superior to the leaky bucket: the leaky bucket algorithm controls the rate at which packets are introduced into the network, but it is very conservative in nature. Some flexibility is introduced in the token bucket algorithm. In the token bucket algorithm, tokens are generated at each tick (up to a certain limit). For an incoming packet to be transmitted, it must capture a token. Hence bursty packets can be transmitted as long as tokens are available, which introduces some flexibility into the system.


Formula: M * s = C + ρ * s, where s is the burst length (time), M the maximum output rate, ρ the token arrival rate, and C the capacity of the token bucket in bytes. Solving for the burst length gives s = C / (M − ρ).
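A worked instance of the formula, with illustrative numbers (a 250 KB bucket, a 2 MB/s token arrival rate, and a 25 MB/s maximum output rate; these values are assumptions chosen for the example):

```python
# M * s = C + rho * s  =>  s = C / (M - rho)
C = 250 * 1000           # bucket capacity in bytes (250 KB)
rho = 2 * 1000 * 1000    # token arrival rate in bytes/s (2 MB/s)
M = 25 * 1000 * 1000     # maximum output rate in bytes/s (25 MB/s)

s = C / (M - rho)        # burst length in seconds
print(round(s * 1000, 2))   # -> 10.87 (milliseconds)
```

So the host can emit at the full 25 MB/s for only about 10.9 ms before the bucket empties and the output drops back to the token rate ρ.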

Let’s understand with an example,

Figure:2


Congestion Control techniques: -


Congestion control refers to the techniques used to control or prevent congestion. Congestion
control techniques can be broadly classified into two categories:

Open Loop Congestion Control


Open loop congestion control policies are applied to prevent congestion before it happens.
The congestion control is handled either by the source or the destination.

Policies adopted by open loop congestion control –

Retransmission Policy:

It is the policy by which retransmission of packets is handled. If the sender feels that a sent packet is lost or corrupted, the packet needs to be retransmitted. Such retransmission may increase the congestion in the network.

To prevent this, retransmission timers must be designed both to prevent congestion and to optimize efficiency.

Window Policy:

The type of window at the sender’s side may also affect congestion. In a Go-Back-N window, several packets are re-sent even though some of them may have been received successfully at the receiver side. This duplication may increase the congestion in the network and make it worse.

Therefore, a Selective Repeat window should be adopted, as it resends only the specific packet that may have been lost.


Discarding Policy:

A good discarding policy for routers is to prevent congestion by partially discarding corrupted or less sensitive packets while maintaining the quality of the message.

In the case of audio file transmission, for example, routers can discard less sensitive packets to prevent congestion while maintaining the quality of the audio file.

Acknowledgment Policy:

Since acknowledgements are also part of the load on the network, the acknowledgment policy imposed by the receiver may also affect congestion. Several approaches can be used to prevent congestion related to acknowledgments.

The receiver should send an acknowledgement for N packets rather than for every single packet, and should send an acknowledgment only if it has a packet to send or a timer expires.

Admission Policy:

In the admission policy, a mechanism is used to prevent congestion before it occurs. Switches in a flow should first check the resource requirements of a network flow before forwarding it. If there is a chance of congestion, or congestion already exists in the network, the router should refuse to establish the virtual circuit connection, to prevent further congestion.

All the above policies are adopted to prevent congestion before it happens in the network.

Closed Loop Congestion Control


Closed loop congestion control techniques are used to treat or alleviate congestion after it
happens. Several techniques are used by different protocols; some of them are:

1. Backpressure:

Backpressure is a technique in which a congested node stops receiving packets from its upstream node. This may cause the upstream node or nodes to become congested in turn and to reject data from the nodes above them. Backpressure is a node-to-node congestion control technique that


propagates in the opposite direction of the data flow. The backpressure technique can be applied only to virtual circuits, where each node has information about its upstream node.

In the diagram above, the 3rd node is congested and stops receiving packets; as a result, the 2nd node may get congested because its output data flow slows down. Similarly, the 1st node may get congested and inform the source to slow down.

2. Choke Packet Technique:

The choke packet technique is applicable to both virtual circuit networks and datagram subnets. A choke packet is a packet sent by a node to the source to inform it of congestion. Each router monitors its resources and the utilization of each of its output lines. Whenever the utilization exceeds a threshold value set by the administrator, the router sends a choke packet directly to the source, giving it feedback to reduce the traffic. The intermediate nodes through which the packets have traveled are not warned about the congestion.

3. Implicit Signaling:

In implicit signaling, there is no communication between the congested nodes and the source. The source guesses that there is congestion somewhere in the network. For example, when a sender sends

several packets and no acknowledgment arrives for a while, one assumption is that the network is congested.

4. Explicit Signaling:

In explicit signaling, if a node experiences congestion, it can explicitly send a packet to the source or destination to inform it of the congestion. The difference between the choke packet technique and explicit signaling is that in explicit signaling the signal is included in the packets that carry data, rather than in a separate packet as in the choke packet technique.

Explicit signaling can occur in either forward or backward direction.

Forward Signaling: In forward signaling, a signal is sent in the direction of the congestion.
The destination is warned about congestion. The receiver in this case adopts policies to prevent
further congestion.

Backward Signaling: In backward signaling, a signal is sent in the opposite direction of the
congestion. The source is warned about congestion and it needs to slow down.

*TCP Timers
TCP uses several timers to ensure that excessive delays are not encountered during
communications. Several of these timers are elegant, handling problems that are not
immediately obvious at first analysis. Each of the timers used by TCP is examined in the
following sections, which reveal its role in ensuring data is properly sent from one connection
to another.

TCP implementation uses four timers –

*Retransmission Timer – To retransmit lost segments, TCP uses a retransmission timeout
(RTO). When TCP sends a segment, the timer starts; it stops when the acknowledgment is
received. If the timer expires, a timeout occurs and the segment is retransmitted. The RTO is
on the order of one RTT; to calculate the retransmission timeout we first need to calculate
the RTT (round-trip time).


RTT is of three types –

1. Measured RTT (RTTm) – The measured round-trip time for a segment is the time
required for the segment to reach the destination and be acknowledged, although the
acknowledgement may include other segments.
2. Smoothed RTT (RTTs) – The weighted average of RTTm. RTTm is likely to
change, and its fluctuation is so high that a single measurement cannot be used to
calculate the RTO.
Initially -> no value
After the first measurement -> RTTs = RTTm
After each measurement -> RTTs = (1 - t) * RTTs + t * RTTm
Note: t = 1/8 (default if not given)
3. Deviated RTT (RTTd) – Most implementations do not use RTTs alone, so the RTT
deviation is also calculated to find the RTO.

Initially -> no value
After the first measurement -> RTTd = RTTm / 2
After each measurement -> RTTd = (1 - k) * RTTd + k * |RTTm - RTTs|
Note: k = 1/4 (default if not given)

The RTO is then obtained as RTO = RTTs + 4 * RTTd.
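The update rules above can be sketched directly, using the default weights t = 1/8 and k = 1/4 (the sample RTT values in the demo are illustrative):

```python
class RtoEstimator:
    def __init__(self, t=1/8, k=1/4):
        self.t, self.k = t, k
        self.rtts = None    # smoothed RTT (no value initially)
        self.rttd = None    # RTT deviation (no value initially)

    def update(self, rttm):
        """Feed one measured RTT; return the resulting RTO = RTTs + 4 * RTTd."""
        if self.rtts is None:               # first measurement
            self.rtts = rttm
            self.rttd = rttm / 2
        else:
            self.rtts = (1 - self.t) * self.rtts + self.t * rttm
            self.rttd = (1 - self.k) * self.rttd + self.k * abs(rttm - self.rtts)
        return self.rtts + 4 * self.rttd

est = RtoEstimator()
print(est.update(1.5))   # first sample: RTTs = 1.5, RTTd = 0.75 -> RTO = 4.5
print(est.update(2.5))   # RTTs = 1.625, RTTd = 0.78125 -> RTO = 4.75
```

Because RTTd grows when measurements fluctuate, the RTO automatically widens on jittery paths and tightens on stable ones.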

*Persistent Timer – To deal with a zero-window-size deadlock situation, TCP uses a


persistence timer. When the sending TCP receives an acknowledgment with a window size of
zero, it starts a persistence timer. When the persistence timer goes off, the sending TCP sends
a special segment called a probe. This segment contains only 1 byte of new data. It has a
sequence number, but its sequence number is never acknowledged; it is even ignored in
calculating the sequence number for the rest of the data. The probe causes the receiving TCP
to resend the acknowledgment which was lost.

*Keep Alive Timer – A keepalive timer is used to prevent a long idle connection between two
TCPs. Suppose a client opens a TCP connection to a server, transfers some data, and then
becomes silent, perhaps because the client has crashed. In this case, the connection remains
open forever. So a keepalive timer is used: each time the server hears from the client, it resets
this timer. The timeout is usually 2 hours. If the server does not hear from the client for 2
hours, it sends a probe segment. If there is no response after 10 probes, each 75 seconds
apart, it assumes that the client is down and terminates the connection.


*Time Wait Timer – This timer is used during TCP connection termination. It starts after
sending the last ACK (for the second FIN), just before closing the connection.

After a TCP connection is closed, it is possible for datagrams that are still making their way
through the network to attempt to access the closed port. The quiet timer is intended to prevent
the just-closed port from reopening again quickly and receiving these last datagrams.

The quiet timer is usually set to twice the maximum segment lifetime (the same value as the
Time-To-Live field in an IP header), ensuring that all segments still heading for the port have
been discarded.

Multiplexing & Demultiplexing: -


Multiplexing and demultiplexing services are provided in almost every protocol architecture
ever designed. UDP and TCP perform the demultiplexing and multiplexing jobs by including
two special fields in their segment headers: the source port number field and the destination
port number field.

Multiplexing –

Gathering data from multiple application processes of the sender, enveloping that data with a
header, and sending them as a whole to the intended receiver is called multiplexing.

Demultiplexing –

Delivering received segments at the receiver side to the correct app layer processes is called
demultiplexing.

Figure – Abstract view of multiplexing and demultiplexing


Multiplexing and demultiplexing are the services facilitated by the transport layer of the OSI
model.

Figure – Transport layer- junction for multiplexing and demultiplexing

There are two types of multiplexing and demultiplexing:


➢ Connectionless Multiplexing and Demultiplexing
➢ Connection-Oriented Multiplexing and Demultiplexing

How Multiplexing and Demultiplexing is done –

For sending data from an application on the sender side to an application on the destination
side, the sender must know the IP address of the destination and the port number of the
destination application to which it wants to transfer the data. A block diagram is shown
below:


Figure – Transfer of packet between applications of sender and receiver

Let us consider two messaging apps that are widely used nowadays, Hike and WhatsApp. Suppose A is the sender and B is the receiver, and both have these applications installed on their systems (say, smartphones). Suppose A wants to send messages to B on both WhatsApp and Hike. To do so, A must mention the IP address of B and the destination port number of WhatsApp while sending a message through the WhatsApp application. Similarly, for Hike, A must mention the IP address of B and the destination port number of Hike while sending the message.

Now the messages from both apps are wrapped with appropriate headers (source IP address, destination IP address, source port number, destination port number) and sent onto the network to the receiver. This process is called multiplexing. At the destination, each received segment is unwrapped and the constituent messages (from the Hike and WhatsApp applications) are handed to the appropriate application by looking at the destination port number. This process is called demultiplexing. Similarly, B can also transfer messages to A.
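Demultiplexing by destination port can be sketched as a simple lookup table. The port numbers below are illustrative placeholders, not the real ports used by these applications:

```python
# Hypothetical port bindings on B's phone (illustrative values only)
port_table = {
    5222: "WhatsApp",
    6444: "Hike",
}

def demultiplex(segment):
    """Deliver a (dest_port, payload) segment to the app bound to that port."""
    dest_port, payload = segment
    app = port_table.get(dest_port)
    if app is None:
        return None                  # no process bound to this port: dropped
    return (app, payload)

print(demultiplex((5222, "hello")))   # -> ('WhatsApp', 'hello')
print(demultiplex((6444, "hi")))      # -> ('Hike', 'hi')
```

TCP's connection-oriented demultiplexing actually keys on the full 4-tuple (source IP, source port, destination IP, destination port), but the destination-port lookup shown here is the core idea.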


Figure – Message transfer using WhatsApp and hike messaging application

Calculation of TCP Checksum and Pseudo Header: -


When we receive data from the application, it is broken into smaller parts, since the whole of the application's data cannot be sent through the network to the receiving host in one piece.

The transport-layer protocol we use here is TCP. After the application-layer data is broken into smaller parts, each part forms the body of a TCP segment.

The TCP header usually varies from 20 Bytes (with no bits of option fields being used) to 60
Bytes (with all bits of options field being used).

It has fields like Source and Destination Port addresses, urgent pointer, Checksum, etc.

In this article, we are only concerned about the Checksum field of the TCP.

The Checksum of the TCP is calculated by taking into account the TCP Header, TCP body and
Pseudo IP header.

The main ambiguity that arises is: how can a checksum be calculated over the IP header, when IP comes into the picture only in the layer below the transport layer?


In simple terms, we are in the transport layer, while the IP packet is created in the network layer.

How, then, can we know the IP header from the transport layer? Any guess or estimate could well be wrong, and there would be no point in calculating a checksum over a field that is wrong to begin with.

The error-checking capability of TCP/UDP in the transport layer therefore takes help from the network layer for proper error detection.

But the important concept to note here is that we do not actually use the IP header; rather, we use a part of it.

To overcome these problems and increase error-checking capability, we use a pseudo IP header.

Pseudo IP header:
The pseudo-header is not the IP header; rather, it is built from a part of the IP header. We do not use the IP header directly because many of its fields change continuously as the packet moves through the network. Thus, only the fields of the IP header that do not change as the IP packet moves through the network are taken into account.

The fields of the pseudo IP header are:

➢ Source IP address
➢ Destination IP address
➢ Fixed 8 bits of zeros (reserved)
➢ Protocol (stating the protocol in use, e.g., 6 for TCP, 17 for UDP)
➢ TCP/UDP segment length

So, the total size of the pseudo-header (12 bytes) = source IP (32 bits) + destination IP (32 bits) + fixed 8 bits (8 bits) + protocol (8 bits) + TCP/UDP segment length (16 bits).
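The 12-byte layout above can be sketched in Python (a minimal illustration; the function name and example addresses are our own, not part of any standard API):

```python
import socket
import struct

def make_pseudo_header(src_ip: str, dst_ip: str, protocol: int, seg_len: int) -> bytes:
    """Build the 12-byte pseudo-header: source IP (32 bits), destination IP
    (32 bits), 8 zero bits, protocol (8 bits), segment length (16 bits)."""
    return (socket.inet_aton(src_ip)                      # 32-bit source address
            + socket.inet_aton(dst_ip)                    # 32-bit destination address
            + struct.pack('!BBH', 0, protocol, seg_len))  # zeros, protocol, length

ph = make_pseudo_header('192.0.2.1', '192.0.2.2', 6, 40)  # 6 = TCP
print(len(ph))  # 12
```

Note that this structure is only assembled in memory for the checksum computation; it has no on-the-wire representation of its own.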


An important point to note is that this pseudo-header is created in the Transport layer only for the calculation; after the calculation is done, the pseudo-header is discarded. The checksum itself is computed by the usual Internet checksum method.

So, this pseudo-header is never transported across the network; only the actual IP header, which is formed in the Network layer, is transmitted.

So, the TCP checksum covers:

1. Pseudo IP header
2. TCP header
3. TCP body

After the calculation of the checksum using the above 3 fields, the checksum result is placed
in the checksum field of the TCP header.
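The "usual checksum method" referred to above is the Internet one's-complement checksum. A minimal Python sketch (the helper name is our own; real stacks compute this in optimized C):

```python
def internet_checksum(data: bytes) -> int:
    """One's-complement checksum over 16-bit words (RFC 1071 style).
    The caller passes pseudo-header + TCP header (checksum field zeroed)
    + TCP body concatenated as bytes."""
    if len(data) % 2:
        data += b'\x00'                        # pad odd-length data with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]  # accumulate big-endian 16-bit words
    while total >> 16:                         # fold carries back into the low 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF                     # one's complement of the folded sum

# Example from RFC 1071 (words 0x0001 0xf203 0xf4f5 0xf6f7):
print(hex(internet_checksum(bytes.fromhex('0001f203f4f5f6f7'))))  # 0x220d
```

The sender zeroes the checksum field, runs this computation, and writes the result into the checksum field of the TCP header.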

Since the pseudo-header is discarded and never transported to the destination host, how does the destination check whether the data was received correctly? The pseudo-header is created once again in the Transport layer of the destination host, using fields from the received IP header; the checksum is then recomputed there by the usual method, and the result confirms whether the data received is correct or not.


Why is the IP header error-checked twice?

Parts of the IP header are checked twice: first in the Transport layer (through the pseudo-header) and a second time in the Network layer (through the IP header checksum). This double check ensures that errors in the IP header, in particular misdelivered packets, can be detected with proper accuracy.

User Datagram Protocol (UDP): -


User Datagram Protocol (UDP) is a Transport-layer protocol. UDP is part of the Internet Protocol suite, referred to as the UDP/IP suite. Unlike TCP, it is an unreliable, connectionless protocol, so there is no need to establish a connection prior to data transfer. UDP helps establish low-latency, loss-tolerating communication over the network, and it enables process-to-process communication.

Though Transmission Control Protocol (TCP) is the dominant Transport-layer protocol used by most Internet services, providing assured delivery, reliability, and much more, all these services cost additional overhead and latency. Here, UDP comes into the picture. Real-time services like computer gaming, voice or video communication, and live conferences need UDP. Since high performance is needed, UDP permits packets to be dropped rather than waiting for delayed packets. UDP also does no error recovery (no retransmissions), which saves bandwidth.

User Datagram Protocol (UDP) is more efficient in terms of both latency and bandwidth.

UDP Header –
The UDP header is a simple, fixed 8-byte header, while the TCP header may vary from 20 to 60 bytes. The first 8 bytes contain all the necessary header information, and the remaining part consists of data. The UDP port number fields are each 16 bits long, so the range of port numbers is 0 to 65535; port number 0 is reserved. Port numbers help distinguish different user requests or processes.


➢ Source Port: a 2-byte field used to identify the port number of the source.
➢ Destination Port: a 2-byte field used to identify the destination port of the packet.
➢ Length: a 16-bit field giving the total length of the UDP datagram, including the header and the data.
➢ Checksum: a 2-byte field. It is the 16-bit one's complement of the one's-complement sum of the UDP header, the pseudo-header of information from the IP header, and the data, padded with zero octets at the end (if necessary) to make a multiple of two octets.
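The fixed 8-byte layout of these four 16-bit fields can be packed with Python's struct module (a minimal sketch; the function name and port values are illustrative):

```python
import struct

def make_udp_header(src_port: int, dst_port: int, payload_len: int,
                    checksum: int = 0) -> bytes:
    """Pack the fixed 8-byte UDP header: source port, destination port,
    length (header + data), checksum -- four big-endian 16-bit fields."""
    length = 8 + payload_len          # the Length field covers header and data
    return struct.pack('!HHHH', src_port, dst_port, length, checksum)

hdr = make_udp_header(5000, 53, payload_len=32)
print(len(hdr))  # 8
```

The same format string (`'!HHHH'`) with struct.unpack recovers the four fields from a received header.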

Applications of UDP:
• Used for simple request-response communication when the size of data is less and
hence there is lesser concern about flow and error control.
• It is a suitable protocol for multicast and broadcast transmission.
• UDP is used for some routing update protocols like RIP (Routing Information
Protocol).
• Normally used for real-time applications which cannot tolerate uneven delays between
sections of a received message.
• UDP is widely used in online gaming, where low latency and high-speed
communication is essential for a good gaming experience. Game servers often send
small, frequent packets of data to clients, and UDP is well suited for this type of
communication as it is fast and lightweight.
• Streaming media applications, such as IPTV, online radio, and video conferencing, use
UDP to transmit real-time audio and video data. The loss of some packets can be
tolerated in these applications, as the data is continuously flowing and does not require
retransmission.
• VoIP (Voice over Internet Protocol) services, such as Skype and WhatsApp, use UDP
for real-time voice communication. The delay in voice communication can be
noticeable if packets are delayed due to congestion control, so UDP is used to ensure
fast and efficient data transmission.


• DNS (Domain Name System) also uses UDP for its query/response messages. DNS
queries are typically small and require a quick response time, making UDP a suitable
protocol for this application.
• DHCP (Dynamic Host Configuration Protocol) uses UDP to dynamically assign IP
addresses to devices on a network. DHCP messages are typically small, and the delay
caused by packet loss or retransmission is generally not critical for this application.

The following implementations use UDP as their transport-layer protocol:

• NTP (Network Time Protocol)
• DNS (Domain Name System)
• BOOTP, DHCP
• NNP (Network News Protocol)
• Quote of the Day protocol
• TFTP, RTSP, RIP
The application layer can get some tasks done through UDP:
• Trace Route
• Record Route
• Timestamp

On the receiving side, UDP takes a datagram from the Network layer, detaches its header, and hands the data to the user process, so it works fast. In fact, UDP would be a null protocol if the checksum field were removed.

UDP is preferred when:
➢ the demand on computer resources must be kept low;
➢ multicast or broadcast is used for the transfer;
➢ real-time packets are transmitted, mainly in multimedia applications.
Advantages of UDP:
1. Speed: UDP is faster than TCP because it does not have the overhead of establishing a
connection and ensuring reliable data delivery.

2. Lower latency: Since there is no connection establishment, there is lower latency and faster
response time.

3. Simplicity: UDP has a simpler protocol design than TCP, making it easier to implement and
manage.

4. Broadcast support: UDP supports broadcasting to multiple recipients, making it useful for
applications such as video streaming and online gaming.


5. Smaller header: UDP has a smaller header (8 bytes, versus 20 to 60 bytes for TCP), which reduces per-packet overhead and can improve overall network performance.
Disadvantages of UDP:
1. No reliability: UDP does not guarantee delivery of packets or order of delivery, which can
lead to missing or duplicate data.

2. No congestion control: UDP does not have congestion control, which means that it can send
packets at a rate that can cause network congestion.

3. No flow control: UDP does not have flow control, which means that it can overwhelm the
receiver with packets that it cannot handle.

4. Vulnerable to attacks: UDP is vulnerable to denial-of-service attacks, where an attacker can flood a network with UDP packets, overwhelming the network and causing it to crash.

5. Limited use cases: UDP is not suitable for applications that require reliable data delivery,
such as email or file transfers, and is better suited for applications that can tolerate some data
loss, such as video streaming or online gaming.

UDP PSEUDO HEADER:

➢ The purpose of using a pseudo-header is to verify that the UDP packet has reached its correct destination.
➢ The correct destination consists of a specific machine and a specific protocol port number within that machine.

UDP pseudo-header details:
• The UDP header itself specifies only the protocol port numbers. Thus, to verify the destination, UDP on the sending machine computes a checksum that covers the destination IP address as well as the UDP packet.
• At the ultimate destination, the UDP software verifies the checksum using the destination IP address obtained from the header of the IP packet that carried the UDP message.
• If the checksum agrees, then it must be true that the packet has reached both the intended destination host and the correct protocol port within that host.
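The sender and receiver roles described above can be sketched end to end in Python (addresses, ports, and helper names are illustrative; real stacks do this inside the kernel):

```python
import socket
import struct

def ones_sum(data: bytes) -> int:
    """Fold a byte string into a 16-bit one's-complement sum."""
    if len(data) % 2:
        data += b'\x00'
    s = 0
    for i in range(0, len(data), 2):
        s += (data[i] << 8) | data[i + 1]
    while s >> 16:
        s = (s & 0xFFFF) + (s >> 16)
    return s

def pseudo(src_ip: str, dst_ip: str, udp_len: int) -> bytes:
    """12-byte pseudo-header for UDP (protocol number 17)."""
    return (socket.inet_aton(src_ip) + socket.inet_aton(dst_ip)
            + struct.pack('!BBH', 0, 17, udp_len))

# Sender: checksum over pseudo-header + UDP header (checksum = 0) + data.
payload = b'hi'
udp_len = 8 + len(payload)
hdr0 = struct.pack('!HHHH', 4000, 53, udp_len, 0)
cksum = ~ones_sum(pseudo('10.0.0.1', '10.0.0.2', udp_len) + hdr0 + payload) & 0xFFFF
seg = struct.pack('!HHHH', 4000, 53, udp_len, cksum) + payload

# Receiver: rebuild the pseudo-header from the IP header it received and
# re-sum everything, checksum field included; a valid segment sums to 0xFFFF.
def verify_udp(src_ip: str, dst_ip: str, segment: bytes) -> bool:
    return ones_sum(pseudo(src_ip, dst_ip, len(segment)) + segment) == 0xFFFF

print(verify_udp('10.0.0.1', '10.0.0.2', seg))  # True: correct destination
print(verify_udp('10.0.0.1', '10.0.0.3', seg))  # False: misdelivery is caught
```

Because the destination IP address is folded into the sum, a packet delivered to the wrong host fails the check even though the UDP header itself carries no addresses.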
