Chapter 5
The transport layer then passes the segment to the network layer at the sending
end system, where the segment is encapsulated within a network-layer packet (a
datagram) and sent to the destination. On the receiving side, the network layer
extracts the transport-layer segment from the datagram and passes the segment
up to the transport layer. The transport layer then processes the received
segment, making the data in the segment available to the receiving application.
The transport layer provides services such as:
• Error Handling
• Flow Control
• Multiplexing
• Connection Set-up and Release
• Congestion Handling
• Segmentation and Reassembly
• Addressing
The ultimate goal of the transport layer is to provide efficient, reliable, and
cost-effective data transmission service to its users, normally processes in
the application layer.
The software and/or hardware within the transport layer that does the
work is called the transport entity.
It provides logical communication between application processes running on
different hosts.
Addressing:
Processes are addressed by port numbers; well-known services use fixed port
numbers, e.g., SMTP = 25 and DNS = 53.
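Where the operating system's services database includes these entries, Python's
standard socket module can look the mappings up; a small illustration:

import socket

# Look up well-known port assignments in the system's services database
print(socket.getservbyname("smtp", "tcp"))    # 25
print(socket.getservbyname("domain", "udp"))  # 53 (DNS)
print(socket.getservbyname("http", "tcp"))    # 80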
UDP Header (8 bytes):
As shown in the figure, the UDP segment structure contains five fields: a
2-byte source port number, a 2-byte destination port number, a length field
specifying the total length of the UDP segment, a checksum for error detection,
and an application data field for the message being transmitted.
The UDP header itself thus contains four fields, each of 2 bytes, as shown in
the figure.
1. Source port number: contains the port number of the source process that is
sending the message.
2. Destination port number: contains the port number of the destination process
that is to receive the message.
3. Length: specifies the number of bytes in the UDP segment (header plus data).
4. Checksum: used by the receiver to check whether errors have been introduced
into the segment in transit.
5. Application data field: the application data occupies the data field of the
UDP segment. For a streaming audio application, for example, audio samples fill
the data field.
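As a rough illustration of this layout, the Python sketch below unpacks the
four 2-byte header fields from a raw UDP segment; the sample bytes are
fabricated for the example.

import struct

def parse_udp_header(segment: bytes):
    # "!HHHH" = network byte order, four unsigned 16-bit fields
    src_port, dst_port, length, checksum = struct.unpack("!HHHH", segment[:8])
    data = segment[8:length]   # application data follows the 8-byte header
    return src_port, dst_port, length, checksum, data

# Fabricated example: 4 bytes of data behind an 8-byte header
segment = struct.pack("!HHHH", 53, 5000, 8 + 4, 0) + b"data"
print(parse_udp_header(segment))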
Advantages of UDP:
• It uses small packets with a small header (8 bytes). The lower overhead means
UDP needs less time and less memory to process each packet.
Disadvantages of UDP:
• It provides no error recovery: if UDP detects an error in a received packet
(via the checksum), it silently drops the packet rather than retransmitting it.
Even though TCP is the dominant transport protocol, its overhead makes it
slower; UDP is therefore used where faster transmission is required and a
little packet loss is tolerable. UDP is generally used in the following
situations:
1. For simple request-response communication, where the amount of data is small
and there is therefore less concern about flow and error control.
2. For DNS queries. The host sends its query to the DNS server over UDP; if the
query reaches the server, a reply is sent back. If the host receives no reply
within a certain time interval, it assumes the query packet was lost and either
sends the query to another name server or informs the invoking application that
it cannot get a reply. DNS would be much slower if it used TCP instead of UDP.
(A sketch of this timeout-and-retry pattern follows this list.)
3. When the network layer runs RIP, which requires periodic updating of the
routing table. Even if an update packet is lost it does not matter: RIP sends
updates periodically, so a new update packet will follow after some interval
and the earlier loss makes no difference.
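A minimal Python sketch of the timeout-and-retry pattern described in the DNS
example; the server address, port, and request bytes are placeholders, not a
real DNS encoding.

import socket

def udp_query(server, port, request, timeout=2.0, retries=3):
    # Send a request over UDP and wait for a reply, retrying on timeout
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        for _ in range(retries):
            sock.sendto(request, (server, port))
            try:
                reply, _addr = sock.recvfrom(4096)
                return reply              # a reply arrived in time
            except socket.timeout:
                continue                  # assume the packet was lost; try again
        return None                       # give up and tell the caller
    finally:
        sock.close()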
TCP Header:
1. Source port:
A 16-bit field that contains the port number of the source process
that is sending the message.
2. Destination Port:
A 16-bit field that contains the port number of the destination process
that is to receive the message.
3. Sequence number:
A 32-bit number identifying the current position of the first data byte
in the segment within the entire byte stream for the TCP connection.
4. Acknowledgment number:
A 32-bit number identifying the next data byte the sender expects
from the receiver. Therefore, the number will be one greater than the most
recently received data byte.
5. Header length:
A 4-bit field that specifies the total TCP header length in 32-bit
words. Without options the TCP header is always 20 bytes in length. The largest
the TCP header can be is 60 bytes.
6. Reserved/Unused:
A 6-bit field reserved for future use; it is set to zero.
7. Flags:
The flag field contains 6 bits. The ACK bit indicates that the
value carried in the acknowledgment field is valid; that is, the segment contains
an acknowledgment for a segment that has been successfully received. The RST,
SYN, and FIN bits are used for connection setup and teardown. The PSH bit indicates
that the receiver should pass the data to the upper layer immediately. Finally, the
URG bit indicates that there is data in this segment that the sending-side
upper-layer entity has marked as “urgent.”
8. Window:
A 16-bit integer is used by TCP for flow control in the form of a data
transmission window size. This number tells the sender how much data the
receiver is willing to accept.
9. Checksum:
A 16-bit field computed over the TCP header, the data, and a pseudo-header of
IP fields; the receiver uses it to detect errors in the received segment.
10. Urgent pointer:
A 16-bit field, used together with the URG flag, that indicates the position of
the last byte of urgent data in the segment.
11. Options:
A variable-length, optional field used, for example, when the sender and
receiver negotiate the maximum segment size (MSS).
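To make the layout concrete, here is a rough Python sketch that unpacks the
20-byte fixed part of a TCP header; it assumes a segment with no options, and
the sample values are fabricated.

import struct

def parse_tcp_header(segment: bytes):
    # "!HHIIHHHH" = ports, sequence, ack, offset/flags, window, checksum, urgent
    (src_port, dst_port, seq, ack,
     offset_flags, window, checksum, urgent) = struct.unpack("!HHIIHHHH", segment[:20])
    header_len = (offset_flags >> 12) * 4   # 4-bit data offset, in 32-bit words
    flags = offset_flags & 0x3F             # low 6 bits: URG, ACK, PSH, RST, SYN, FIN
    return {"src": src_port, "dst": dst_port, "seq": seq, "ack": ack,
            "hlen": header_len, "flags": flags, "win": window}

# Fabricated header: data offset 5 (20 bytes), ACK and PSH flags set
hdr = struct.pack("!HHIIHHHH", 80, 5000, 1000, 2000, (5 << 12) | 0x18, 65535, 0, 0)
print(parse_tcp_header(hdr))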
Connection Establishment:
To establish a connection, the client sends a SYN segment carrying its initial
sequence number, the server replies with a SYN segment that acknowledges the
client's number and carries the server's own initial sequence number, and the
client completes the exchange with an ACK. The minimum number of packets
required for this exchange is three; hence, this is called TCP's three-way
handshake.
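In the sockets API the handshake is performed by connect(); a minimal sketch,
with an arbitrary example host and port:

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(("example.com", 80))   # blocks until SYN, SYN+ACK, ACK complete
sock.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
print(sock.recv(200))               # first bytes of the server's reply
sock.close()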
Data Transmission:
Once the connection is established, the two sides exchange data as a byte
stream; each segment carries the sequence number of its first data byte, and
the receiver returns cumulative acknowledgments.
Connection Termination:
1. One application calls close first, and we say that this end performs the
active close. This end's TCP sends a FIN segment, which means it is
finished sending data.
2. The other end that receives the FIN performs the passive close. The
received FIN is acknowledged by TCP. The receipt of the FIN is also passed
to the application as an end-of-file (after any data that may have already
been queued for the application to receive), since the receipt of the FIN
means the application will not receive any additional data on the
connection.
3. Sometime later, the application that received the end-of-file will close its
socket. This causes its TCP to send a FIN.
4. The TCP on the system that receives this final FIN (the end that did the
active close) acknowledges the FIN.
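The steps above map onto the sockets API roughly as follows: shutdown()
performs the active close by sending a FIN, and a zero-length recv() reports
the peer's FIN as end-of-file. The host and request bytes are placeholder
examples.

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(("example.com", 80))
sock.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
sock.shutdown(socket.SHUT_WR)   # active close: send FIN, finished sending data
while True:
    chunk = sock.recv(4096)     # keep reading data queued before the FIN
    if not chunk:               # b"" signals the peer's FIN (end-of-file)
        break
sock.close()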
TCP Operation:
TCP operation covers three phases: connection establishment, data transfer, and
connection release.
• During connection establishment, Host 1 acknowledges Host 2's choice of
initial sequence number in the first data packet.
Connection Release:
• Asymmetric release can cause data loss if one side sends data that is not
received before the disconnect.
• With symmetric release, Host 1 sends a disconnect request (DR) and starts a
timer.
• Host 2 replies with a DR and starts a timer, just in case its reply is lost.
• Host 1 ACKs the DR from Host 2 and releases the connection.
• If the final ACK is lost, the timer will take care of the disconnection.
Each process that wants to communicate with another process identifies itself to the TCP/IP
protocol suite by one or more ports.
Usually a service is associated with a port (e.g. http on port 80).
Sockets:
A socket is one endpoint of a communication channel, identified by the pair
(IP address, port number); a TCP connection is identified by the sockets at its
two ends.
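A minimal sketch of a server claiming a port by binding a socket to it; port
8080 is an arbitrary example.

import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 8080))   # the (IP address, port) pair identifies the socket
server.listen()
conn, addr = server.accept()     # addr is the client's (IP address, port) pair
print("connection from", addr)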
Flow Control:
The receiver advertises a window (carried in the TCP header's window field)
that limits how much unacknowledged data the sender may have outstanding, so a
fast sender cannot overwhelm a slow receiver.
Reliable Delivery:
Sequence numbers, cumulative acknowledgments, and retransmission on timeout
ensure that the byte stream arrives complete and in order.
Connection-less Demultiplexing:
UDP directs an arriving segment to the receiving socket using only the
destination IP address and destination port number, regardless of the segment's
source.
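A small self-contained sketch of connectionless demultiplexing: two UDP sockets
bound to different ports each receive only the datagram addressed to their own
port (the loopback addresses and ports are arbitrary examples).

import socket

a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
a.bind(("127.0.0.1", 9001))
b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
b.bind(("127.0.0.1", 9002))

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"to 9001", ("127.0.0.1", 9001))
tx.sendto(b"to 9002", ("127.0.0.1", 9002))

print(a.recvfrom(100))   # only the datagram addressed to port 9001
print(b.recvfrom(100))   # only the datagram addressed to port 9002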
Too many packets present in the network cause packet delay and loss, which
degrades performance. This situation is called congestion.
Congestion control refers to the mechanisms and techniques used to control
congestion and keep the load below the network's capacity.
Effects of Congestion:
As the load approaches the network's capacity, queuing delay rises sharply;
beyond capacity, queues overflow, packets are dropped, and throughput falls.
Open loop congestion control policies are applied to prevent congestion before it
happens. The congestion control is handled either by the source or the
destination. Policies adopted by open loop congestion control include:
1. Retransmission Policy:
This policy governs how lost or corrupted packets are retransmitted.
If the sender believes that a sent packet is lost or corrupted, the packet
needs to be retransmitted. Retransmission in general may increase congestion in
the network, so to prevent congestion the retransmission policy and the
retransmission timers must be designed for efficiency.
Closed loop congestion control mechanisms, by contrast, try to alleviate
congestion after it happens. Several mechanisms are used:
1. Backpressure:
A congested node stops receiving data from its immediate upstream node or
nodes; those nodes may in turn become congested and reject data from their own
upstream nodes, so the warning propagates back toward the source.
2. Choke Packet:
A choke packet is a packet sent by a node to the source to inform it of
congestion. Note the difference between the backpressure and choke
packet methods. In backpressure, the warning is from one node to its
upstream node, although the warning may eventually reach the source
station. In the choke packet method, the warning is from the router, which
has encountered congestion, to the source station directly. The
intermediate nodes through which the packet has traveled are not warned.
Figure shows the idea of a choke packet.
3. Implicit Signaling:
In implicit signaling, there is no communication between the congested
node or nodes and the source. The source guesses that there is congestion
somewhere in the network from other symptoms; for example, if acknowledgments
stop arriving for a while, the source assumes the network is congested and
slows down.
4. Explicit Signaling:
The node that experiences congestion can explicitly send a signal to the
source or destination. The explicit signaling method, however, is different
from the choke packet method. In the choke packet method, a separate
packet is used for this purpose; in the explicit signaling method, the signal is
included in the packets that carry data. Explicit signaling can occur in either
the forward or the backward direction.
Backward Signaling- A bit can be set in a packet moving in the direction
opposite to the congestion. This bit can warn the source that there is
congestion and that it needs to slow down to avoid the discarding of
packets.
Forward Signaling- A bit can be set in a packet moving in the direction of
the congestion. This bit can warn the destination that there is congestion.
The receiver in this case can use policies, such as slowing down the
acknowledgments, to alleviate the congestion.
The leaky bucket and token bucket algorithms described next are also called
traffic shaping algorithms, as they help regulate data transmission and reduce
congestion.
Leaky Bucket:
This algorithm is used to control the rate at which traffic is sent to the
network and to shape bursty traffic into a steady stream. Consider a bucket
with a small hole at the bottom: whatever the rate of water pouring into the
bucket, the rate at which water comes out of the small hole is constant.
• When the host has to send a packet, the packet is thrown into the bucket.
• The bucket leaks at a constant rate, meaning the network interface transmits
packets at a constant rate.
• Bursty traffic is converted into uniform traffic by the leaky bucket.
• If a packet arrives when the bucket is full, the packet must be discarded.
• A FIFO queue is used for holding the packets.
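A toy packet-level simulation of the leaky bucket, assuming discrete time ticks
and counting packets rather than bytes.

from collections import deque

class LeakyBucket:
    def __init__(self, capacity, rate):
        self.queue = deque()       # FIFO queue holding waiting packets
        self.capacity = capacity   # bucket size: max packets queued
        self.rate = rate           # packets transmitted per tick (the leak)

    def arrive(self, packet):
        if len(self.queue) >= self.capacity:
            return False           # bucket full: the packet is discarded
        self.queue.append(packet)
        return True

    def tick(self):
        # Once per time unit, leak at a constant rate regardless of input
        return [self.queue.popleft() for _ in range(min(self.rate, len(self.queue)))]

# A burst of 10 packets drains at a steady 2 packets per tick;
# packets 8 and 9 overflow the bucket and are dropped
bucket = LeakyBucket(capacity=8, rate=2)
for i in range(10):
    bucket.arrive(i)
while bucket.queue:
    print(bucket.tick())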
The leaky bucket algorithm described above enforces a rigid pattern on the
output stream, irrespective of the pattern of the input. For many applications
it is better to allow the output to speed up somewhat when a large burst
arrives than to lose the data. The token bucket algorithm provides such a
solution. In this algorithm, the bucket holds tokens generated at regular
intervals. The main steps of this algorithm can be described as follows:
1. Tokens are generated at regular intervals and placed in the bucket, up to
the bucket's capacity; extra tokens are discarded.
2. To transmit a packet, the host must capture and destroy one token.
3. If the bucket is empty, the packet must wait until a token is generated.
4. During idle periods tokens accumulate, so a burst of up to a bucketful of
packets can be sent at once while the long-term rate stays fixed by the token
generation rate.
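A minimal token bucket sketch, assuming one token per packet and tokens
replenished continuously from wall-clock time.

import time

class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = rate             # tokens generated per second
        self.capacity = capacity     # maximum tokens the bucket can hold
        self.tokens = capacity       # start with a full bucket
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Add tokens for the elapsed time, capped at the bucket capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1         # one token is destroyed per packet sent
            return True
        return False                 # bucket empty: the packet must wait

# Average rate 5 packets/s, but a burst of 10 is allowed from a full bucket
bucket = TokenBucket(rate=5, capacity=10)
print(sum(bucket.allow() for _ in range(12)), "of 12 sent immediately")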