CN Unit 4
MODULE IV: Transport Layer: Connection-Oriented and Connectionless Protocols - Process-to-Process Delivery, UDP and TCP protocols, SCTP. Congestion Control - Data Traffic, Congestion, Congestion Control, QoS, Integrated Services, Differentiated Services, QoS in Switched Networks.
The transport layer is responsible for process-to-process delivery. The User Datagram Protocol (UDP) is called a connectionless, unreliable transport protocol. It does not add anything to the services of IP except to provide process-to-process communication instead of host-to-host communication.
UDP was developed by David P. Reed in 1980. It is a connectionless and unreliable protocol: it does not establish a connection between the sender and receiver before the data transfer, and the receiver does not send any acknowledgment of the received data. The sender simply transmits the data directly. In UDP, the data packet is called a datagram. UDP does not guarantee that data will reach its destination, nor does it require that data reach the receiver in the same order in which the sender sent it.
TCP stands for Transmission Control Protocol. It was introduced in 1974. It is a connection-oriented and reliable protocol: it establishes a connection between the source and destination devices before starting the communication, and it detects whether the destination device has received the data sent from the source device. If the received data is lost or corrupted, the sender transmits it again. TCP is highly reliable because it uses handshake and traffic-control mechanisms. In TCP, data is received in the same sequence in which the sender sends it. We use TCP-based services in our daily life, such as HTTP, HTTPS, Telnet, FTP, SMTP, etc.
TCP vs UDP:
- TCP establishes a connection between the devices before transmitting the data; UDP sends the data directly to the destination device.
- TCP is a connection-oriented protocol; UDP is a connection-less protocol.
- TCP is a reliable protocol; UDP is an unreliable protocol.
- TCP has a sequence number for each byte; UDP has no sequence numbers.
User Datagram Protocol
UDP stands for User Datagram Protocol. UDP was developed by David P. Reed in 1980. It is a connectionless and unreliable protocol: it does not establish a connection between the sender and receiver before the data transfer, it sends the data directly, and the receiver does not send any acknowledgment of the received data. In UDP, the data packet is called a datagram.
UDP does not guarantee that user data will reach its destination, and it is not necessary that the data reach the receiver in the same sequence in which the sender sent it.
Importance of UDP
1. The UDP protocol is used to transfer data where speed matters more than accuracy and reliability.
2. UDP is used when the data flows in only one direction.
3. It is also used for streaming applications, for example, YouTube and online gaming.
4. It provides a faster data transfer speed than the TCP protocol.
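To make the connectionless behavior concrete, here is a minimal sketch of UDP communication using Python's standard socket module (the host and port values are placeholders chosen for illustration). Note that there is no connection setup and no acknowledgment: sendto() hands the datagram directly to the network.

```python
import socket

def udp_receiver(host="127.0.0.1", port=9999):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # SOCK_DGRAM = UDP
    sock.bind((host, port))
    data, addr = sock.recvfrom(4096)     # one datagram; no connection, no ACK sent
    print(f"received {len(data)} bytes from {addr}")
    sock.close()

def udp_sender(host="127.0.0.1", port=9999):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(b"hello", (host, port))  # sent directly; delivery not guaranteed
    sock.close()
```

The UDP header that carries each datagram contains the following fields: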
Source Port Number: The size of the source port is 16 bits. It is used to identify the
process of the sender.
Destination Port Number: The size of the destination port is 16 bits. It is used to
identify the process of the receiver.
Total Length: The size of the total length field is 16 bits. It defines the total length of the user datagram, that is, the header plus the data.
Checksum: The size of the checksum field is 16 bits. It is used to detect errors over the entire user datagram.
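As a small illustration of the 8-byte header just described, the sketch below packs the four 16-bit fields with Python's struct module in network byte order. The checksum is left as zero purely for brevity (in UDP over IPv4, a zero checksum means "not computed"); real stacks compute it over a pseudo-header as well, and the helper name is hypothetical.

```python
import struct

def build_udp_header(src_port: int, dst_port: int, payload: bytes) -> bytes:
    length = 8 + len(payload)          # total length = header (8 bytes) + data
    checksum = 0                       # 0 = checksum not computed (IPv4 only)
    return struct.pack("!HHHH", src_port, dst_port, length, checksum)

header = build_udp_header(12345, 53, b"query")
print(struct.unpack("!HHHH", header))  # (12345, 53, 13, 0)
```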
Advantages of UDP
1. You can easily broadcast and multicast transmissions through UDP.
2. It is faster than TCP.
3. It uses a small header (8 bytes).
4. It takes less memory than other protocols.
5. It can transmit data packets without any connection setup.
Disadvantages of UDP
1. It is connectionless and unreliable: delivery of a datagram is not guaranteed.
2. The receiver sends no acknowledgment, so lost data is not retransmitted.
3. Datagrams may arrive out of order, and UDP provides no congestion control.

The following table contrasts some TCP features that UDP lacks:

Feature                         UDP     TCP
Allows half-closed connection   N/A     Yes
Congestion control              No      Yes
Selective acknowledgements      No      Optional
Transmission Control Protocol: TCP stands for Transmission Control Protocol. It was introduced in 1974. It is a connection-oriented and reliable protocol. It establishes a connection between the source and destination devices before starting the communication, and it detects whether the destination device has received the data sent from the source device. If the received data is lost or corrupted, the sender transmits it again. TCP is highly reliable because it uses handshake and traffic-control mechanisms. In the TCP protocol, the receiver receives the data in the same sequence in which the sender sends it. We use TCP-based services in our daily life, such as HTTP, HTTPS, Telnet, FTP, SMTP, etc.
Importance of TCP
The TCP protocol is used to transfer data where accuracy and reliability matter more than speed.
Both TCP and UDP can detect errors, but only TCP can recover from an error, because it controls the traffic and flow of data.
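The contrast with UDP shows up directly at the socket level. The sketch below (Python standard library; host and port values are placeholders) illustrates that TCP requires connect()/accept() before any data moves; the kernel's TCP implementation then handles acknowledgment, ordering, and retransmission transparently.

```python
import socket

def tcp_server(host="127.0.0.1", port=8888):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # SOCK_STREAM = TCP
    srv.bind((host, port))
    srv.listen(1)
    conn, addr = srv.accept()          # three-way handshake completes here
    data = conn.recv(4096)             # bytes arrive in the order they were sent
    conn.sendall(b"ok")                # delivery is acknowledged and retried
    conn.close()
    srv.close()

def tcp_client(host="127.0.0.1", port=8888):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect((host, port))         # triggers SYN / SYN-ACK / ACK
    sock.sendall(b"hello")
    print(sock.recv(4096))
    sock.close()
```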
In TCP, the data packet is called a segment. The size of the header in the segment is 20 to 60 bytes. The segment format of TCP is shown in the figure below.
Source Port Address: The size of the source port is 16 bits. It is used to define the
port address of the application that sends the segment.
Destination Port Address: The size of the destination port is 16 bits. The destination
port is used to define the port address of the application that receives the segment.
Sequence Number: The size of the sequence number field is 32 bits. It defines the number assigned to the first byte of data contained in the segment.
Acknowledgment Number: The size of the acknowledgment number field is 32 bits. It defines the byte number that the receiver expects to receive next from the other party.
Header Length: The size of the header length field is 4 bits. It indicates the length of the TCP header in 4-byte words. Since the header length lies between 20 and 60 bytes, the value of this field lies between 5 (5 × 4 = 20) and 15 (15 × 4 = 60).
Reserved: The size of this field is 6 bits. It is reserved for future use.
Control Flags: The size of this field is 6 bits. It defines six different control bits or flags (URG, ACK, PSH, RST, SYN, and FIN), as shown in the figure.
Window Size: The size of the window field is 16 bits. It advertises the number of bytes the sender of the segment is willing to receive, which tells the other party how much data it may send.
Checksum: The size of the checksum field is 16 bits. The checksum field is used for
error control. It is mandatory in TCP.
Urgent Pointer: The size of the urgent pointer field is 16 bits, which is only required
when the URG flag is set. It is used for urgent data in the segment.
Options and Padding: The size of the options and padding field varies from 0 to 40 bytes.
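A sketch of how the fixed 20-byte part of this segment header maps onto bytes, using Python's struct module (field names follow the text above; the sample values are made up for illustration):

```python
import struct

def parse_tcp_header(segment: bytes) -> dict:
    (src_port, dst_port, seq, ack,
     offset_reserved, flags, window,
     checksum, urgent) = struct.unpack("!HHIIBBHHH", segment[:20])
    header_len = (offset_reserved >> 4) * 4   # field value 5..15 -> 20..60 bytes
    return {"src": src_port, "dst": dst_port, "seq": seq, "ack": ack,
            "header_len": header_len, "flags": flags,
            "window": window, "checksum": checksum, "urgent": urgent}

# Build a sample header: data offset 5 (20-byte header), SYN flag (0x02) set.
sample = struct.pack("!HHIIBBHHH", 50000, 80, 4321, 0, 5 << 4, 0x02, 65535, 0, 0)
print(parse_tcp_header(sample))
```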
Advantages of TCP
1. Retransmission: In TCP, when a segment is lost or corrupted, the protocol sends that data again after a specific time.
2. TCP can recover from errors because it performs flow control and congestion control.
3. TCP can easily detect errors.
4. In the TCP protocol, the receiver receives the data in the same sequence in which the sender sends it.
Disadvantages of TCP
1. The data transfer speed of the TCP protocol is lower than that of the UDP protocol.
2. TCP protocol cannot broadcast and multicast the message.
SYN – Used to initiate and establish a connection. It also helps you synchronize sequence numbers between devices.
ACK – Helps to confirm to the other side that it has received the SYN.
SYN-ACK – A SYN message from the local device together with an ACK of the earlier packet.
TCP traffic begins with a three-way handshake. In this TCP handshake process, a client needs to
initiate the conversation by requesting a communication session with the Server:
Figure: 3-Way Handshake Diagram
Step 1: In the first step, the client establishes a connection with the server. It sends a segment with the SYN flag set, informing the server that the client wants to start communication and what its initial sequence number is.
Step 2: In this step, the server responds to the client request with the SYN and ACK flags set. The ACK signifies the response to the segment the server received, and the SYN signifies the sequence number with which the server will start its own segments.
Step 3: In this final step, the client acknowledges the response of the server, and both sides establish a stable connection, after which the actual data transfer begins.
Real-world Example
Here is a simple example of the three-way handshake process, which consists of three steps:
Host X begins the connection by sending a TCP SYN packet to its destination host. The packet contains a random sequence number (for example, 4321) that marks the beginning of the sequence numbers for the data that Host X will transmit.
After that, the server receives the packet and responds with its own sequence number. Its response also includes an acknowledgment number, which is Host X's sequence number incremented by 1 (here, 4322).
Host X responds to the server with an acknowledgment number, which is the server's sequence number incremented by 1.
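The sequence/acknowledgment arithmetic in this example can be written out as a toy simulation (not a real network exchange; 4321 is the number used in the text, and the server's initial sequence number is chosen at random, as real stacks do):

```python
import random

client_isn = 4321                          # Host X's initial sequence number
server_isn = random.randint(0, 2**32 - 1)  # the server picks its own ISN

syn     = {"flags": "SYN",     "seq": client_isn}
syn_ack = {"flags": "SYN-ACK", "seq": server_isn,
           "ack": syn["seq"] + 1}          # 4322: Host X's ISN incremented by 1
ack     = {"flags": "ACK",     "seq": client_isn + 1,
           "ack": syn_ack["seq"] + 1}      # server's ISN incremented by 1

for pkt in (syn, syn_ack, ack):
    print(pkt)
```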
After the data transmission process is over, TCP automatically terminates the connection
between two separate endpoints.
The following diagram shows how a reliable connection is established using a 3-way handshake. Such a connection supports communication between a web browser on the client side and a server whenever a user navigates the Internet.
TCP Connection Termination
First, the client requests the server to terminate the established connection by sending a FIN.
After receiving the client's request, the server sends a FIN and an ACK back to the client.
After receiving the FIN + ACK from the server, the client confirms by sending an ACK to the server.
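From an application's point of view, this FIN/ACK exchange is driven by shutdown() and close(). The sketch below (a function assuming sock is an already-connected TCP socket) also shows the "half-closed" state from the comparison table earlier: after sending our FIN we can no longer send, but the peer may keep sending until it closes its own side.

```python
import socket

def terminate_gracefully(sock: socket.socket) -> None:
    sock.shutdown(socket.SHUT_WR)   # kernel sends FIN: we are done sending
    while True:
        data = sock.recv(4096)      # peer may still send (half-closed state)
        if not data:                # empty read: the peer's FIN has arrived
            break
    sock.close()                    # the kernel ACKs the peer's FIN for us
```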
SCTP
Stream Control Transmission Protocol: SCTP stands for Stream Control Transmission Protocol. SCTP was developed by the Internet Engineering Task Force (IETF). It is a reliable, message-oriented protocol of the transport layer. It combines the best features of TCP and UDP. It is designed for specific applications, such as multimedia.
SCTP Services
- It offers a full-duplex service like TCP, where data can flow in both directions at the same time.
- It uses the acknowledgment (ACK) mechanism to confirm data delivery.
The SCTP packet contains a general header and a group of blocks called chunks. There are two types of chunks in an SCTP packet: control chunks and data chunks. Control chunks maintain the association, while data chunks carry the user data. Control chunks always come in the packet before the data chunks.
Source Port Address: The size of the source port address is 16 bits. It defines the port
number of the process that is sending the packet.
Destination Port address: The size of the destination port address is 16 bits. It
defines the port number of the process that is receiving the packet.
Verification Tag: The size of the verification tag is 32 bits. This field is used to verify that the packet comes from the correct sender.
Checksum: The size of the checksum field is 32 bits; unlike the 16-bit checksum of TCP and UDP, SCTP uses a 32-bit CRC.
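The 12-byte general header described above can be sketched with struct as well; the checksum is left as zero here for brevity, whereas a real SCTP stack computes a 32-bit CRC over the whole packet (the helper name and sample values are illustrative):

```python
import struct

def build_sctp_common_header(src_port: int, dst_port: int,
                             verification_tag: int) -> bytes:
    checksum = 0                     # real stacks fill in a 32-bit CRC here
    return struct.pack("!HHII", src_port, dst_port, verification_tag, checksum)

hdr = build_sctp_common_header(5000, 5001, 0xDEADBEEF)
print(struct.unpack("!HHII", hdr))   # (5000, 5001, 3735928559, 0)
```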
Congestion
Congestion: If the load on the network is greater than the capacity of the network, the situation is called congestion.
Congestion control: It refers to the mechanisms used to control congestion and keep the traffic load below the capacity of the network. Congestion control is divided into two categories, open-loop and closed-loop, as shown in the figure below.
Congestion control refers to the techniques used to control or prevent congestion. Congestion
control techniques can be broadly classified into two categories:
Open-Loop Congestion Control: In open-loop congestion control, policies are applied to prevent congestion before it happens. The following policies belong to this category.
1. Retransmission Policy:
This policy governs the retransmission of packets. If the sender feels that a sent packet is lost or corrupted, the packet needs to be retransmitted. Such retransmission may increase congestion in the network.
To prevent this, retransmission timers must be designed both to prevent congestion and to optimize efficiency.
2. Window Policy:
The type of window used at the sender's side may also affect congestion. In a Go-Back-N window, several packets are re-sent even though some of them may have been received successfully at the receiver side. This duplication may increase congestion in the network and make it worse.
Therefore, the Selective Repeat window should be adopted, as it resends only the specific packets that may have been lost.
3. Discarding Policy:
A good discarding policy lets routers prevent congestion by discarding corrupted or less sensitive packets while still maintaining the quality of the message.
In the case of audio transmission, for example, routers can discard the less sensitive packets to prevent congestion while maintaining the quality of the audio.
4. Acknowledgment Policy:
Since acknowledgments are also part of the load on the network, the acknowledgment policy imposed by the receiver may also affect congestion. Several approaches can be used to prevent acknowledgment-related congestion:
the receiver can send one acknowledgment for N packets rather than an acknowledgment for every single packet, or it can send an acknowledgment only when it has a packet to send or a timer expires.
5. Admission Policy:
An admission mechanism can also prevent congestion. Switches should first check the resource requirements of a flow before transmitting it further. If there is congestion in the network, or a risk of future congestion, the router should deny establishing a virtual-circuit connection to prevent further congestion.
All the above policies are adopted to prevent congestion before it happens in the network.
Closed-Loop Congestion Control: Closed-loop mechanisms try to alleviate congestion after it happens. Several techniques are used:
1. Backpressure:
Backpressure is a technique in which a congested node stops receiving packets from its upstream node. This may cause the upstream node or nodes to become congested and to reject data from their own upstream nodes. Backpressure is a node-to-node congestion control technique that propagates in the direction opposite to the data flow. The backpressure technique can be applied only to virtual-circuit networks, where each node knows the upstream node it receives data from.
In the diagram above, the third node is congested and stops receiving packets; as a result, the second node may become congested because its output flow slows down. Similarly, the first node may become congested and inform the source to slow down.
2. Choke Packet Technique:
In the choke packet technique, a congested node sends a special packet, called a choke packet, directly back to the source to inform it about the congestion. Unlike backpressure, the intermediate nodes through which the packet has travelled are not warned.
3. Implicit Signaling:
In implicit signaling, there is no communication between the congested node or nodes and the source. The source guesses that there is congestion somewhere in the network. For example, when a sender sends several packets and no acknowledgment arrives for a while, one assumption is that the network is congested.
4. Explicit Signaling:
In explicit signaling, if a node experiences congestion, it can explicitly send a packet to the source or destination to inform it about the congestion. The difference between the choke packet technique and explicit signaling is that in explicit signaling the signal is included in the packets that carry data, rather than in a separate packet as in the choke packet technique.
Explicit signaling can occur in either forward or backward direction.
Forward Signaling: In forward signaling, a signal is sent in the direction of the congestion. The destination is warned about the congestion, and the receiver in this case adopts policies to prevent further congestion.
Backward Signaling: In backward signaling, a signal is sent in the direction opposite to the congestion. The source is warned about the congestion, and it needs to slow down.
There are several techniques that businesses can use to guarantee the high performance of their
most critical applications. These include:
Prioritization of delay-sensitive VoIP traffic via routers and switches: Many enterprise
networks can become overly congested, which sees routers and switches start dropping packets
as they come in and out faster than they can be processed. As a result, streaming applications
suffer. Prioritization enables traffic to be classified and receive different priorities depending on
its type and destination. This is particularly useful in a situation of high congestion, as packets
with higher priority can be sent ahead of other traffic.
Resource reservation: The Resource Reservation Protocol (RSVP) is a transport layer protocol
that reserves resources across a network and can be used to deliver specific levels of QoS for
application data streams. Resource reservation enables businesses to divide network resources by
traffic of different types and origins, define limits, and guarantee bandwidth.
Queuing: Queuing is the process of creating policies that provide preferential treatment to
certain data streams over others. Queues are high-performance memory buffers in routers and
switches, in which packets passing through are held in dedicated memory areas. When a packet
is assigned higher priority, it is moved to a dedicated queue that pushes data at a faster rate,
which reduces the chances of it being dropped. For example, businesses can assign a policy to
give voice traffic priority over the majority of network bandwidth. The routing or switching
device will then move this traffic’s packets and frames to the front of the queue and immediately
transmit them.
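As a toy model of this queuing idea (the priorities and packet names are invented for illustration; real routers implement this in hardware with multiple physical queues), a priority queue dequeues voice ahead of bulk traffic:

```python
import heapq

queue, counter = [], 0

def enqueue(packet: str, priority: int) -> None:
    global counter
    heapq.heappush(queue, (priority, counter, packet))  # lower value = served first
    counter += 1                         # counter keeps FIFO order within a class

enqueue("bulk-data-1", priority=5)
enqueue("voice-1", priority=0)           # voice jumps ahead of the bulk traffic
enqueue("bulk-data-2", priority=5)

while queue:
    _, _, packet = heapq.heappop(queue)
    print(packet)                        # voice-1, bulk-data-1, bulk-data-2
```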
Traffic marking: When applications that require priority over other bandwidth on a network
have been identified, the traffic needs to be marked. This is possible through processes like Class
of Service (CoS), which marks a data stream in the Layer 2 frame header, and Differentiated
Services Code Point (DSCP), which marks a data stream in the Layer 3 packet header.
Best Practices
In addition to these techniques, there are also several best practices that organizations should
keep in mind when determining their QoS requirements.
1. Ensure that maximum bandwidth limits at the source interface and security policy are not set too
low to prevent excessive packet discard.
2. Consider the ratio at which packets are distributed between available queues and which queues
are used by which services. This can affect latency levels, queue distribution, and packet
assignment.
3. Only place bandwidth guarantees on specific services. This will avoid the possibility of all traffic
using the same queue in high-volume situations.
4. Configure prioritization for all traffic through either type of service-based priority or security
policy priority, not both. This will simplify analysis and troubleshooting.
5. Try to minimize the complexity of QoS configuration to ensure high performance.
6. To get accurate testing results, use the User Datagram Protocol (UDP), and do not oversubscribe
bandwidth throughput.
Advantages of QoS
The deployment of QoS is crucial for businesses that want to ensure the availability of their
business-critical applications. It is vital for delivering differentiated bandwidth and ensuring data
transmission takes place without interrupting traffic flow or causing packet losses. Major
advantages of deploying QoS include:
1. Bandwidth: The speed of a link. QoS can tell a router how to use bandwidth. For example,
assigning a certain amount of bandwidth to different queues for different traffic types.
2. Delay: The time it takes for a packet to go from its source to its end destination. This can
often be affected by queuing delay, which occurs during times of congestion and a packet
waits in a queue before being transmitted. QoS enables organizations to avoid this by creating
a priority queue for certain types of traffic.
3. Loss: The amount of data lost as a result of packet loss, which typically occurs due to network
congestion. QoS enables organizations to decide which packets to drop in this event.
4. Jitter: The irregular speed of packets on a network as a result of congestion, which can result
in packets arriving late and out of sequence. This can cause distortion or gaps in audio and
video being delivered.
Furthermore, bandwidth management measures and controls traffic flow on the network
infrastructure to ensure it does not exceed capacity and prevent congestion. This includes
using traffic shaping, a rate-limiting technique that optimizes or guarantees performance and
increases usable bandwidth, and scheduling algorithms, which offer several methods for
providing bandwidth to specific traffic flows.
When too many packets are present in the network, they cause packet delay and packet loss, which degrades the performance of the system. This situation is called congestion.
The network layer and the transport layer share the responsibility for handling congestion. One of the most effective ways to control congestion is to reduce the load that the transport layer places on the network. To achieve this, the network and transport layers have to work together.
One congestion control technique, the leaky bucket (described in detail below), can be understood through a water-bucket analogy:
Step 1 − Imagine a bucket with a small hole in the bottom, into which water is poured at a variable rate but from which it leaks at a constant rate.
Step 2 − So (as long as water is present in the bucket), the rate at which the water leaks does not depend on the rate at which the water is poured into the bucket.
Step 3 − If the bucket is full, additional water entering the bucket spills over the sides and is lost.
Step 4 − The same concept applies to packets in the network. Consider that data is coming from the source at variable speeds. Suppose that a source sends data at 10 Mbps for 4 seconds, then no data for 3 seconds, and then data at a rate of 8 Mbps for 2 seconds. Thus, in a time span of 9 seconds, 56 Mb of data has been transmitted. This is why a leaky bucket algorithm is used: with it, the data flow would instead be a steady 8 Mbps for 7 seconds, and thus a constant flow is maintained.
What is congestion?
A state occurring in the network layer when the message traffic is so heavy that it slows down the network response time.
Effects of Congestion
As delay increases, performance decreases.
If delay increases, retransmission occurs, making the situation worse.
Congestion control algorithms
Leaky Bucket Algorithm
Let us consider an example to understand it.
Imagine a bucket with a small hole in the bottom. No matter at what rate water enters the bucket, the outflow is at a constant rate. When the bucket is full, additional water entering it spills over the sides and is lost.
Similarly, each network interface contains a leaky bucket and the following steps are involved
in leaky bucket algorithm:
1. When a host wants to send a packet, the packet is thrown into the bucket.
2. The bucket leaks at a constant rate, meaning the network interface transmits packets at a
constant rate.
3. Bursty traffic is converted to a uniform traffic by the leaky bucket.
4. In practice the bucket is a finite queue that outputs at a finite rate.
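A minimal sketch of these steps in Python (the packet counts and rates are illustrative; a real implementation would typically leak bytes per unit time rather than whole packets per tick):

```python
from collections import deque

class LeakyBucket:
    def __init__(self, capacity: int, out_rate: int):
        self.queue = deque()              # the finite queue of step 4
        self.capacity = capacity          # bucket size, in packets
        self.out_rate = out_rate          # constant output rate (packets/tick)

    def arrive(self, packet) -> bool:
        if len(self.queue) >= self.capacity:
            return False                  # bucket full: packet spills and is lost
        self.queue.append(packet)         # step 1: packet thrown into the bucket
        return True

    def tick(self) -> list:
        sent = []
        for _ in range(min(self.out_rate, len(self.queue))):
            sent.append(self.queue.popleft())   # step 2: constant-rate leak
        return sent

bucket = LeakyBucket(capacity=5, out_rate=2)
for i in range(8):                        # a burst of 8 packets arrives at once
    bucket.arrive(i)                      # packets 5..7 are dropped
print(bucket.tick())                      # [0, 1]: smooth 2-per-tick output
```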
Token bucket Algorithm
Need for the token bucket algorithm:
The leaky bucket algorithm enforces an output pattern at the average rate, no matter how bursty the traffic is. So, in order to deal with bursty traffic, we need a flexible algorithm that does not lose data. One such algorithm is the token bucket algorithm.
Steps of this algorithm can be described as follows:
1. At regular intervals, tokens are thrown into the bucket.
2. The bucket has a maximum capacity.
3. If there is a ready packet, a token is removed from the bucket, and the packet is sent.
4. If there is no token in the bucket, the packet cannot be sent.
Let's understand with an example.
In figure (A) we see a bucket holding three tokens, with five packets waiting to be transmitted. For a packet to be transmitted, it must capture and destroy one token. In figure (B) we see that three of the five packets have gotten through, but the other two are stuck waiting for more tokens to be generated.
Ways in which the token bucket is superior to the leaky bucket:
The leaky bucket algorithm controls the rate at which packets are introduced into the network, but it is very conservative in nature. Some flexibility is introduced in the token bucket algorithm. In the token bucket algorithm, tokens are generated at each tick (up to a certain limit). For an incoming packet to be transmitted, it must capture a token. Hence bursts of packets can be transmitted as long as tokens are available, which introduces some flexibility into the system.
Formula: M × s = C + ρ × s, which gives s = C / (M − ρ)
where
s – burst length (in seconds)
M – maximum output rate
ρ – token arrival rate
C – capacity of the token bucket (in bytes)
Let's understand with an example: if the bucket capacity C = 8 Mb, the token arrival rate ρ = 2 Mbps, and the maximum output rate M = 10 Mbps, then the maximum burst length is s = C / (M − ρ) = 8 / (10 − 2) = 1 second.
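A minimal sketch of the token bucket steps above (the capacity and rates are illustrative numbers, not prescribed values; it reproduces the three-tokens/five-packets scenario from figures A and B):

```python
class TokenBucket:
    def __init__(self, capacity: int, token_rate: int):
        self.capacity = capacity       # C: maximum tokens the bucket can hold
        self.token_rate = token_rate   # rho: tokens added per tick
        self.tokens = capacity         # start full, so an initial burst is allowed

    def tick(self, ready_packets: int) -> int:
        self.tokens = min(self.capacity, self.tokens + self.token_rate)
        sent = min(ready_packets, self.tokens)   # one token consumed per packet
        self.tokens -= sent
        return sent

tb = TokenBucket(capacity=3, token_rate=1)
print(tb.tick(5))   # 3: the three available tokens let a burst of 3 through
print(tb.tick(2))   # 1: the remaining packets wait for new tokens (cf. figure B)
```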
IntServ was defined in IETF RFC 1633, which proposed the Resource Reservation Protocol (RSVP) as a working protocol for signaling in the IntServ architecture. This protocol assumes that resources are reserved for every flow requiring QoS at every router hop in the path between receiver and transmitter, using end-to-end signaling.
The IntServ model for the IP QoS architecture defines three classes of service based on applications' delay requirements (from highest performance to lowest):
· Guaranteed service class – provides a delay-bounded service for applications with hard real-time requirements.
· Controlled-load service class – provides a service equivalent to best effort in an unloaded network, for applications that can tolerate some delay.
· Best-effort service class – similar to that which the Internet currently offers, which is further partitioned into three categories: interactive burst (for example, Web traffic), interactive bulk (for example, FTP), and asynchronous (for example, e-mail).
In the Differentiated Services (DiffServ) model, two key functions are performed at the edge of the network:
Traffic conditioning – Ensures that the traffic entering the DiffServ domain conforms to the agreed traffic profile.
Packet classification – Categorizes the packet within a specific group using the traffic descriptor.
Functionality
Integrated services involve prior reservation of resources before the requisite quality of service is achieved. On the other hand, differentiated services mark the packets with a priority and send them into the network without prior reservation. Thus, functionality is the main difference between integrated services and differentiated services.
Scalability
Moreover, integrated services are not scalable, while differentiated services are scalable.
Setup
Also, another difference between integrated services and differentiated services is that integrated services involve per-flow setup, while differentiated services involve long-term setup.
Service scope
Furthermore, integrated services have an end-to-end service scope, whereas differentiated services have a per-domain service scope.
Conclusion
In brief, integrated and differentiated services are two QoS models. The main difference between them is that integrated services involve prior reservation of resources before the required quality of service is achieved, while differentiated services mark the packets with a priority and send them into the network without requiring prior reservation.