
MALLA REDDY ENGINEERING COLLEGE (AUTONOMOUS)

Maisammaguda, Dhulapally (post via kompally), Secunderabad – 500100


Department of Computer Science and Engineering

4th MODULE(CN)

MODULE IV: Transport Layer Connection Oriented and Connectionless Protocols -Process to
Process Delivery, UDP and TCP protocols, SCTP. Congestion Control - Data Traffic,
Congestion, Congestion Control, QoS, Integrated Services, Differentiated Services, QoS in
Switched Networks.

The transport layer is responsible for process-to-process delivery. The User Datagram
Protocol (UDP) is called a connectionless, unreliable transport protocol. It does not add
anything to the services of IP except to provide process-to-process communication instead of
host-to-host communication.

Process-to-Process Delivery: A transport-layer protocol's first task is to perform process-to-process delivery. A process is an application-layer entity that uses the services of the transport layer. Two processes typically communicate through a client/server relationship.
Client/Server Paradigm
There are many ways to achieve process-to-process communication, and the most common is the client/server paradigm. The process on the local host is called the client; the process on the remote host that provides the requested service is called the server. Both processes usually carry the same name (a Daytime client talks to a Daytime server, for example). The combination of an IP address and a port number is called a socket address; together they identify both a host and a process on that host.
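As a quick illustration, here is a minimal sketch using Python's standard socket module (the address and port number are arbitrary example values, not part of the original text) showing that a socket address is simply the pair of an IP address and a port number:

    import socket

    # A socket address is the combination (IP address, port number):
    # the IP address selects the host, the port number selects the process.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # a UDP socket
    sock.bind(("0.0.0.0", 53000))   # claim local port 53000 (example value)
    print("local socket address:", sock.getsockname())
    sock.close()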
Multiplexing and Demultiplexing
Multiplexing: At the sender site, several processes may need to send packets at the same time. The transport layer accepts messages from all of them, adds a header containing the port numbers, and hands the resulting segments to the single network-layer service; in effect it combines traffic from many processes into one stream.
Demultiplexing: At the receiver site, the transport layer inspects the destination port number of each arriving segment and delivers its data to the correct process, separating the combined stream back out to many processes.
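The following hypothetical Python sketch (the segment layout is simplified to a (port, data) pair purely for illustration) shows the idea of demultiplexing by destination port:

    from collections import defaultdict

    queues = defaultdict(list)        # destination port -> a process's receive queue

    def demultiplex(segment):
        dest_port, payload = segment  # simplified segment: (port number, data)
        queues[dest_port].append(payload)

    # Processes "listening" on ports 53 and 80 each receive only their own data.
    demultiplex((53, b"dns query"))
    demultiplex((80, b"http request"))
    print(queues[53], queues[80])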

UDP (User Datagram Protocol)

UDP was designed by David P. Reed in 1980. It is a connectionless and unreliable protocol: when data transfer occurs, no connection is established between the sender and receiver, and the receiver sends no acknowledgment of the data it receives; the data is simply sent directly. In UDP, the data packet is called a datagram. UDP does not guarantee that the data will reach its destination, nor does it require that the data reach the receiver in the same order in which the sender sent it.

Transmission Control Protocol

TCP stands for Transmission Control Protocol. It was introduced in 1974. It is a connection-oriented and reliable protocol: it establishes a connection between the source and destination devices before communication starts, and it detects whether the destination has received the data sent from the source. If data is lost or corrupted in transit, TCP sends it again. TCP is highly reliable because it uses a handshake and flow-control mechanisms. With TCP, data is delivered to the receiver in the same sequence in which the sender sent it. Many services we use daily run over TCP, such as HTTP, HTTPS, Telnet, FTP, and SMTP.

Difference between UDP and TCP


UDP                                              TCP
UDP stands for User Datagram Protocol.           TCP stands for Transmission Control Protocol.
UDP sends the data directly to the               TCP establishes a connection between the
destination device.                              devices before transmitting the data.
It is a connectionless protocol.                 It is a connection-oriented protocol.
UDP is faster than the TCP protocol.             TCP is slower than the UDP protocol.
It is an unreliable protocol.                    It is a reliable protocol.
It does not keep a sequence number for           It has a sequence number for each byte.
each byte.
User Datagram Protocol
User Datagram Protocol: UDP stands for User Datagram Protocol. UDP was designed by David P. Reed in 1980. It is a connectionless and unreliable protocol: when data transfer occurs, it does not establish a connection between the sender and receiver; it sends the data directly, and the receiver sends no acknowledgment of the data it receives. In UDP, the data packet is called a datagram.
UDP gives no guarantee that user data will reach its destination, and the data need not reach the receiver in the same sequence in which the sender sent it.
Importance of UDP

 The UDP protocol is used to transfer data where speed matters more than accuracy and reliability.
 UDP suits traffic that flows in only one direction, since no acknowledgments travel back.
 It is also used for streaming applications, for example, YouTube and online gaming.
 It provides a faster data transfer speed than the TCP protocol.

User Datagram Protocol Format


The UDP format is very simple. The size of the UDP header is 8 bytes (64 bits). The format of UDP is shown below in the figure.

It has four fields shown below:

 Source Port Number: The size of the source port is 16 bits. It is used to identify the
process of the sender.
 Destination Port Number: The size of the destination port is 16 bits. It is used to
identify the process of the receiver.

 Total length: The size of the total length field is 16 bits. It defines the total length of the user datagram, header plus data.

UDP length = IP datagram length − IP header length

 Checksum: The size of the checksum field is 16 bits. It is used to detect errors over the entire user datagram (header and data).
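Since all four header fields are fixed-size 16-bit quantities, a UDP header can be built and parsed with a few lines of Python; the sketch below uses example port numbers and sets the checksum to 0 (which, over IPv4, means no checksum was computed):

    import struct

    src_port, dst_port = 50000, 53      # example values
    payload = b"example"
    total_length = 8 + len(payload)     # 8-byte header plus data
    checksum = 0                        # 0 = checksum not computed (IPv4)

    # "!HHHH" = network byte order, four unsigned 16-bit fields (8 bytes).
    header = struct.pack("!HHHH", src_port, dst_port, total_length, checksum)
    datagram = header + payload

    # Parsing simply reverses the packing.
    s, d, length, c = struct.unpack("!HHHH", datagram[:8])
    print(s, d, length, c, datagram[8:])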

Advantages of UDP

1. You can easily use broadcast and multicast transmission with UDP.
2. It is faster than TCP.
3. Its header is small (8 bytes).
4. It takes less memory than other protocols.
5. It suits transfers of small amounts of data, where connection setup would add needless overhead.

Disadvantages of UDP

1. It is an unreliable protocol for transmission.
2. It provides no way to know whether the data has been received or not.
3. The handshake method is not used in UDP.
4. It does not control congestion.
5. A drawback when routers are involved is that once a transmission failure occurs, the routers do not retransmit the datagram.

UDP is used in the following applications.

1. Domain Name System (DNS).
2. Simple Network Management Protocol (SNMP).
3. Routing Information Protocol (RIP).
4. Trivial File Transfer Protocol (TFTP).
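The connectionless, no-acknowledgment behaviour described above is easy to see with Python's socket module; in this sketch (localhost and port 9999 are arbitrary choices) the sender transmits with no handshake, and no acknowledgment ever travels back:

    import socket

    receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    receiver.bind(("127.0.0.1", 9999))    # the receiver's socket address

    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sender.sendto(b"hello", ("127.0.0.1", 9999))  # no connection establishment

    data, addr = receiver.recvfrom(1024)  # addr is the sender's socket address
    print(data, "received from", addr)    # no acknowledgment is sent back

    sender.close()
    receiver.close()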

Services of UDP and TCP

Services                          UDP    TCP
Sequenced data delivery           No     Yes
Multi-streaming                   No     No
Multi-homing                      No     No
Connection-oriented               No     Yes
Connectionless                    Yes    No
Allows half-closed connection     N/A    Yes
Congestion control                No     Yes
Selective acknowledgements        No     Optional

Transmission Control Protocol

Transmission Control Protocol: TCP stands for Transmission Control Protocol. It was introduced in 1974. It is a connection-oriented and reliable protocol: it establishes a connection between the source and destination devices before communication starts, and it detects whether the destination has received the data sent from the source. If the received data is corrupted or lost, TCP sends the data again. TCP is highly reliable because it uses a handshake and flow-control mechanisms. With TCP, the receiver receives the data in the same sequence in which the sender sends it. Many services we use daily run over TCP, such as HTTP, HTTPS, Telnet, FTP, and SMTP.

Importance of TCP

 The TCP protocol is used to transfer data where you need accuracy and reliability rather than speed.
 Both TCP and UDP can check for errors, but only TCP can recover from them, because it controls the traffic and flow of data and retransmits what was lost.

TCP Segment Format



In TCP, the data packet is called a segment. The size of the header in the segment is 20 to 60 bytes. The segment format of TCP is shown in the figure below.

 Source Port Address: The size of the source port is 16 bits. It is used to define the
port address of the application that sends the segment.
 Destination Port Address: The size of the destination port is 16 bits. The destination
port is used to define the port address of the application that receives the segment.
 Sequence Number: The size of the sequence number is 32 bits. It defines the number assigned to the first byte of data contained in the segment.
 Acknowledgment number: The size of the acknowledgment number is 32 bits. It defines the byte number that the sender of the segment expects to receive next.
 Header Length: The size of the header length field is 4 bits. It indicates the length of the TCP header in 4-byte words. Since the header length lies between 20 and 60 bytes, the value of this field lies between 5 (5 × 4 = 20) and 15 (15 × 4 = 60).
 Reserved: The size of this field is 6 bits. It is reserved for future use.
 Control Flag: The size of this field is 6 bits. It defines the six different control bits or
flags, as shown in the figure.

 Window Size: The size of the window field is 16 bits. It defines the size of the window, in bytes, that the sender of this segment is willing to accept (its receive window).
 Checksum: The size of the checksum field is 16 bits. The checksum field is used for
error control. It is mandatory in TCP.
 Urgent Pointer: The size of the urgent pointer field is 16 bits, which is only required
when the URG flag is set. It is used for urgent data in the segment.
 Options and Padding: The size of the options and padding field varies from 0 to 40 bytes.
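To make the field layout concrete, the sketch below packs and unpacks the fixed 20-byte part of a TCP header in Python (the field values are made-up examples; a real header would come from a captured segment):

    import struct

    # "!HHIIBBHHH": ports (16+16), sequence (32), acknowledgment (32),
    # header-length byte, flags byte, window (16), checksum (16), urgent (16).
    raw = struct.pack("!HHIIBBHHH",
                      50000, 80,      # source and destination ports
                      4321, 0,        # sequence and acknowledgment numbers
                      5 << 4,         # header length 5 words, in the high nibble
                      0b000010,       # control flags: only SYN set
                      65535, 0, 0)    # window size, checksum, urgent pointer

    (src, dst, seq, ack, hl_byte, flags,
     window, checksum, urgent) = struct.unpack("!HHIIBBHHH", raw)
    header_len = (hl_byte >> 4) * 4   # field counts 4-byte words: 5 x 4 = 20
    print(src, dst, seq, ack, header_len, bin(flags), window)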

Advantages of TCP

1. Retransmission: When a segment is lost or corrupted, TCP sends that data again after a specific time (the retransmission timeout).
2. TCP can recover from errors because it performs flow control and congestion control.
3. TCP can easily detect errors.
4. With TCP, the receiver receives the data in the same sequence in which the sender sends it.

Disadvantages of TCP

1. The data transfer speed of TCP is lower than that of UDP.
2. TCP cannot broadcast or multicast messages.

TCP message types


Message    Description
SYN        Used to initiate and establish a connection. It also helps synchronize sequence numbers between devices.
ACK        Confirms to the other side that it has received the SYN.
SYN-ACK    A SYN message from the local device combined with an ACK of the earlier packet.
FIN        Used to terminate a connection.

TCP Three-Way Handshake Process

TCP traffic begins with a three-way handshake. In this TCP handshake process, a client needs to
initiate the conversation by requesting a communication session with the Server:

Figure: 3-way handshake diagram

 Step 1: In the first step, the client establishes a connection with the server. It sends a segment with the SYN flag set, informing the server that it wants to start communication and with what sequence number it will begin.
 Step 2: In this step, the server responds to the client's request with a segment that has the SYN and ACK flags set. The ACK signifies the response to the segment received, and the SYN signifies the sequence number with which the server will start its own segments.
 Step 3: In this final step, the client acknowledges the server's response, and both sides have a stable connection over which the actual data transfer can begin.

Real-world Example

Here is a simple example of the three-way handshake process, which consists of three steps:

 Host X begins the connection by sending a TCP SYN packet to the destination host. The packet contains a random initial sequence number (for example, 4321) that marks the beginning of the sequence numbers for the data Host X will transmit.
 The server receives the packet and responds with its own sequence number. Its response also includes an acknowledgment number, which is Host X's sequence number incremented by 1 (here, 4322).
 Host X responds to the server with an acknowledgment number that is the server's sequence number incremented by 1.
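The sequence/acknowledgment arithmetic of those three steps can be written out directly; this small Python sketch uses the example ISN 4321 from the text and an assumed, hypothetical server ISN of 8700:

    client_isn = 4321             # Host X's initial sequence number (from the example)
    server_isn = 8700             # server's ISN: an assumed example value

    syn       = {"SYN": 1, "seq": client_isn}
    syn_ack   = {"SYN": 1, "ACK": 1, "seq": server_isn, "ack": client_isn + 1}
    final_ack = {"ACK": 1, "seq": client_isn + 1, "ack": server_isn + 1}

    assert syn_ack["ack"] == 4322  # matches the value in the example above
    print(syn, syn_ack, final_ack, sep="\n")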

After the data transmission process is over, TCP automatically terminates the connection
between two separate endpoints.

3-Way Handshake Connection Establishment Process

The following diagram shows how a reliable connection is established using a 3-way handshake. This exchange supports communication between a web browser on the client side and a server whenever a user browses the Internet.

3-Way Handshake Closing Connection Process

To close a 3-way handshake connection,

 First, the client requests the server to terminate the established connection by sending
FIN.
 After receiving the client request, the server sends back the FIN and ACK request to the
client.
 After receiving the FIN + ACK from the server, the client confirms by sending an ACK
to the server.

SCTP

Stream Control Transmission Protocol: SCTP stands for Stream Control Transmission
Protocol. SCTP was developed by the Internet Engineering Task Force (IETF). It is a reliable, message-oriented transport-layer protocol that combines the best features of TCP and UDP. It is designed for specific applications, such as multimedia.
SCTP Services

1. It provides process-to-process communication, like UDP and TCP.
2. It allows a multi-stream service in every connection, which is called an association. If one stream becomes blocked, the other streams can still deliver their data.
3. It offers a full-duplex service like TCP, where data can flow in both directions at the same time.
4. It uses the acknowledgment (ACK) mechanism to confirm data delivery.
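On operating systems that support SCTP (for example, Linux with the SCTP kernel module loaded), Python's standard socket module can open an SCTP association directly; this is only a sketch and will print an error on systems without SCTP support:

    import socket

    try:
        sock = socket.socket(socket.AF_INET,
                             socket.SOCK_STREAM,   # one-to-one style association
                             socket.IPPROTO_SCTP)
        print("SCTP socket created:", sock)
        sock.close()
    except (AttributeError, OSError) as err:
        print("SCTP is not available on this system:", err)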

SCTP Packet Format

The SCTP packet contains a general header and a group of blocks called chunks. There are two types of chunks in an SCTP packet: control chunks and data chunks. Control chunks maintain the association, and data chunks carry the user data. Control chunks always appear in the packet before the data chunks.

The general-header format of the SCTP is shown below.



In the general header, there are four fields.

 Source Port Address: The size of the source port address is 16 bits. It defines the port
number of the process that is sending the packet.
 Destination Port address: The size of the destination port address is 16 bits. It
defines the port number of the process that is receiving the packet.
 Verification Tag: The size of the verification tag is 32 bits. This field is used to verify that the packet comes from the correct sender and belongs to the current association.
 Checksum: The size of the checksum is 32 bits. It is used to detect errors over the entire packet.
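Because the general header is just two 16-bit ports followed by two 32-bit fields (12 bytes in all), it can be packed and unpacked in Python much like the UDP and TCP headers above; the values here are arbitrary examples:

    import struct

    # "!HHII" = two unsigned 16-bit fields plus two unsigned 32-bit fields.
    header = struct.pack("!HHII", 5000, 80, 0x1A2B3C4D, 0)

    src_port, dst_port, verification_tag, checksum = struct.unpack("!HHII", header)
    print(src_port, dst_port, hex(verification_tag), checksum)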

Comparison of SCTP, TCP, and UDP


Services                           UDP    TCP    SCTP
Sequenced data delivery            No     Yes    Yes
Multi-streaming                    No     No     Yes
Multi-homing                       No     No     Yes
Connection-oriented                No     Yes    Yes
Connectionless                     Yes    No     No
Allows half-closed connection      N/A    Yes    No
Congestion control                 No     Yes    Yes
Partially reliable data transfer   No     No     Optional

Congestion

Congestion: If the load on the network is greater than the capacity of the network, the situation is called congestion.
Congestion control: This refers to the mechanisms and techniques used to control congestion and keep the traffic load below the capacity of the network. Congestion control techniques can be broadly classified into two categories, open-loop and closed-loop, as shown in the figure below.

Open Loop Congestion Control


Open loop congestion control policies are applied to prevent congestion before it happens. The
congestion control is handled either by the source or the destination.
Policies adopted by open loop congestion control –

1. Retransmission Policy :
This policy governs how packets are retransmitted. If the sender believes that a sent packet is lost or corrupted, the packet needs to be retransmitted, and this retransmission may itself increase congestion in the network.
To prevent this, retransmission timers must be designed both to avoid aggravating congestion and to keep efficiency high.

2. Window Policy :
The type of window used at the sender's side may also affect congestion. In a Go-Back-N window, several packets are resent even though some of them may already have been received successfully at the receiver side. This duplication can increase congestion in the network and make it worse.
Therefore, the Selective Repeat window should be adopted, as it resends only the specific packets that may have been lost.

3. Discarding Policy :
A good discarding policy lets routers prevent congestion while preserving the integrity of a transmission: the routers may discard corrupted or less sensitive packets yet still maintain the quality of the message.
In the case of audio transmission, for example, routers can discard less sensitive packets to prevent congestion while maintaining the quality of the audio.

4. Acknowledgment Policy :
Since acknowledgments are also part of the load on the network, the acknowledgment policy imposed by the receiver may affect congestion. Several approaches can be used to reduce acknowledgment traffic: the receiver can acknowledge N packets at a time rather than acknowledging each packet individually, or send an acknowledgment only when it has a packet to send anyway or when a timer expires.

5. Admission Policy :
An admission policy is a mechanism for preventing congestion in advance. Switches in a flow's path should first check the resource requirements of the flow before admitting it to the network. If there is congestion in the network, or a risk of congestion, the router should refuse to establish the virtual-circuit connection.
All the above policies are adopted to prevent congestion before it happens in the network.

Closed Loop Congestion Control


Closed loop congestion control techniques are used to treat or alleviate congestion after it
happens. Several techniques are used by different protocols; some of them are:

1. Backpressure :
Backpressure is a technique in which a congested node stops receiving packets from upstream
node. This may cause the upstream node or nodes to become congested and reject receiving
data from above nodes. Backpressure is a node-to-node congestion control technique that
propagate in the opposite direction of data flow. The backpressure technique can be applied
only to virtual circuit where each node has information of its above upstream node.

In the diagram above, the third node is congested and stops receiving packets; as a result, the second node may become congested because its output data flow slows down. Similarly, the first node may become congested and inform the source to slow down.

2. Choke Packet Technique :


The choke packet technique is applicable to both virtual-circuit networks and datagram subnets. A choke packet is a packet sent by a node directly to the source to inform it of congestion. Each router monitors its resources and the utilization of each of its output lines. Whenever the utilization exceeds a threshold value set by the administrator, the router sends a choke packet to the source as feedback telling it to reduce the traffic. The intermediate nodes through which the packets have traveled are not warned about the congestion.

3. Implicit Signaling :
In implicit signaling, there is no communication between the congested node or nodes and the source; the source infers that there is congestion somewhere in the network. For example, when a sender transmits several packets and receives no acknowledgment for a while, it may assume that the network is congested.

4. Explicit Signaling :
In explicit signaling, a node that experiences congestion explicitly sends a signal to the source or the destination to inform it of the congestion. The difference from the choke packet technique is that here the signal is included in the packets that carry data, rather than in a separate packet created for the purpose.
Explicit signaling can occur in either the forward or the backward direction.
 Forward Signaling : A signal is sent in the direction of the congestion, so the destination is warned. The receiver can then adopt policies that prevent further congestion.
 Backward Signaling : A signal is sent in the direction opposite to the congestion, so the source is warned about the congestion and knows it needs to slow down.

What Techniques and Best Practices Are Involved in QoS?


Techniques

There are several techniques that businesses can use to guarantee the high performance of their
most critical applications. These include:

 Prioritization of delay-sensitive VoIP traffic via routers and switches: Enterprise networks can become overly congested, at which point routers and switches start dropping packets because they arrive faster than they can be processed; streaming applications suffer as a result. Prioritization enables traffic to be classified and to receive different priorities depending on its type and destination. This is particularly useful under heavy congestion, as packets with higher priority can be sent ahead of other traffic.
 Resource reservation: The Resource Reservation Protocol (RSVP) is a transport layer protocol
that reserves resources across a network and can be used to deliver specific levels of QoS for
application data streams. Resource reservation enables businesses to divide network resources by
traffic of different types and origins, define limits, and guarantee bandwidth.
 Queuing: Queuing is the process of creating policies that provide preferential treatment to
certain data streams over others. Queues are high-performance memory buffers in routers and
switches, in which packets passing through are held in dedicated memory areas. When a packet
is assigned higher priority, it is moved to a dedicated queue that pushes data at a faster rate,
which reduces the chances of it being dropped. For example, businesses can assign a policy to
give voice traffic priority over the majority of network bandwidth. The routing or switching
device will then move this traffic’s packets and frames to the front of the queue and immediately
transmit them.
 Traffic marking: When applications that require priority over other bandwidth on a network
have been identified, the traffic needs to be marked. This is possible through processes like Class
of Service (CoS), which marks a data stream in the Layer 2 frame header, and Differentiated
Services Code Point (DSCP), which marks a data stream in the Layer 3 packet header.
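As an illustration of traffic marking from an application, the Python sketch below sets a DSCP value on a UDP socket via the IP TOS byte (DSCP 46, "Expedited Forwarding", is commonly used for voice; the destination address and port are arbitrary example values, and some operating systems may override or ignore the value):

    import socket

    DSCP_EF = 46                      # Expedited Forwarding class
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # The 6-bit DSCP value sits in the upper bits of the 8-bit TOS field.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)
    sock.sendto(b"voice frame", ("127.0.0.1", 5004))  # example destination
    sock.close()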
Best Practices

In addition to these techniques, there are also several best practices that organizations should
keep in mind when determining their QoS requirements.

1. Ensure that maximum bandwidth limits at the source interface and security policy are not set too
low to prevent excessive packet discard.
2. Consider the ratio at which packets are distributed between available queues and which queues
are used by which services. This can affect latency levels, queue distribution, and packet
assignment.
3. Only place bandwidth guarantees on specific services. This will avoid the possibility of all traffic
using the same queue in high-volume situations.
4. Configure prioritization for all traffic through either Type of Service (ToS)-based priority or security-policy priority, not both. This will simplify analysis and troubleshooting.
5. Try to minimize the complexity of QoS configuration to ensure high performance.

6. To get accurate testing results, use the User Datagram Protocol (UDP), and do not oversubscribe
bandwidth throughput.

Advantages of QoS
The deployment of QoS is crucial for businesses that want to ensure the availability of their
business-critical applications. It is vital for delivering differentiated bandwidth and ensuring data
transmission takes place without interrupting traffic flow or causing packet losses. Major
advantages of deploying QoS include:

1. Unlimited application prioritization: QoS guarantees that businesses’ most mission-critical applications will always have priority and the necessary resources to achieve high performance.
2. Better resource management: QoS enables administrators to better manage the organization’s
internet resources. This also reduces costs and the need for investments in link expansions.
3. Enhanced user experience: The end goal of QoS is to guarantee the high performance of
critical applications, which boils down to delivering optimal user experience. Employees enjoy
high performance on their high-bandwidth applications, which enables them to be more effective
and get their job done more quickly.
4. Point-to-point traffic management: Managing a network is vital however traffic is delivered,
be it end to end, node to node, or point to point. The latter enables organizations to deliver
customer packets in order from one point to the next over the internet without suffering any
packet loss.
5. Packet loss prevention: Packet loss can occur when packets of data are dropped in transit
between networks. This can often be caused by a failure or inefficiency, network congestion, a
faulty router, loose connection, or poor signal. QoS avoids the potential of packet loss by
prioritizing bandwidth of high-performance applications.
6. Latency reduction: Latency is the time it takes for a network request to go from the sender to
the receiver and for the receiver to process it. This is typically affected by routers taking longer
to analyze information and storage delays caused by intermediate switches and bridges. QoS
enables organizations to reduce latency, or speed up the process of a network request, by
prioritizing their critical application.

Types of Network Traffic


Understanding how QoS network software works is reliant on defining the various types of
traffic that it measures. These are:

1. Bandwidth: The speed of a link. QoS can tell a router how to use bandwidth. For example,
assigning a certain amount of bandwidth to different queues for different traffic types.

2. Delay: The time it takes for a packet to go from its source to its end destination. This can
often be affected by queuing delay, which occurs during times of congestion and a packet
waits in a queue before being transmitted. QoS enables organizations to avoid this by creating
a priority queue for certain types of traffic.
3. Loss: The amount of data lost as a result of packet loss, which typically occurs due to network
congestion. QoS enables organizations to decide which packets to drop in this event.
4. Jitter: The irregular speed of packets on a network as a result of congestion, which can result
in packets arriving late and out of sequence. This can cause distortion or gaps in audio and
video being delivered.

Getting Started with QoS


Implementing QoS begins with an enterprise identifying the types of traffic that are important
to them, use high volumes of bandwidth, and/or are sensitive to latency or packet loss.
This helps the organization understand the needs and importance of each traffic type on its
network and design an overall approach. For example, some organizations may only need to
configure bandwidth limits for specific services, whereas others may need to fully configure
interface and security policy bandwidth limits for all their services, as well as prioritize
queuing critical services relative to traffic rate.
The organization can then deploy policies that classify traffic and ensure the availability and
consistency of its most important applications. Traffic can be classified by port or internet
protocol (IP), or through a more sophisticated approach such as by application or user.
Bandwidth management and queuing tools are then assigned roles to handle traffic flow
specifically based on the classification they received when they entered the network. This
allows for packets within traffic flows to be stored until the network is ready to process them.
Priority queuing can also be used to ensure the necessary availability and minimal latency of
network performance for important applications and traffic. This is so that the network’s most
important activities are not starved of bandwidth by those of lesser priority.

Furthermore, bandwidth management measures and controls traffic flow on the network
infrastructure to ensure it does not exceed capacity and prevent congestion. This includes
using traffic shaping, a rate-limiting technique that optimizes or guarantees performance and
increases usable bandwidth, and scheduling algorithms, which offer several methods for
providing bandwidth to specific traffic flows.

Types of Congestion Control Algorithms

When too many packets are present in the network, packets are delayed and lost, which degrades the performance of the system. This situation is called congestion.
The network layer and the transport layer share the responsibility for handling congestion. One of the most effective ways to control congestion is to reduce the load that the transport layer places on the network; to achieve this, the network and transport layers have to work together.

With too much traffic, performance drops sharply.


Types of Congestion Control Algorithms
There are two types of Congestion control algorithms, which are explained below −
Leaky Bucket Algorithm
It mainly controls the total amount and the rate of the traffic sent to the network.
Step 1 − Imagine a bucket with a small hole at the bottom. The rate at which water is poured into the bucket is not constant and can vary, but the water leaks from the bucket at a constant rate.

Step 2 − So, as long as there is water in the bucket, the rate at which it leaks out does not depend on the rate at which water is poured in.
Step 3 − If the bucket is full, any additional water entering it spills over the sides and is lost.
Step 4 − The same concept is applied to packets in the network. Consider data arriving from a source at variable speeds: suppose the source sends at 10 Mbps for 4 seconds, then nothing for 3 seconds, then 8 Mbps for 2 seconds. In a span of 9 seconds, 56 Mb have been transmitted. With a leaky bucket, the same 56 Mb would instead flow out smoothly over the whole 9 seconds (at roughly 6.2 Mbps), so a constant flow is maintained.
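A minimal Python sketch of this behaviour (sizes and rates are abstract units, chosen only for illustration) queues bursty arrivals up to the bucket capacity and drains them at a constant rate per tick:

    from collections import deque

    class LeakyBucket:
        def __init__(self, capacity, leak_rate):
            self.capacity = capacity     # maximum data the bucket can hold
            self.leak_rate = leak_rate   # constant outflow per clock tick
            self.queue = deque()
            self.level = 0

        def arrive(self, size):
            if self.level + size <= self.capacity:
                self.queue.append(size)
                self.level += size
                return True
            return False                 # bucket full: the packet is lost

        def tick(self):
            budget = self.leak_rate      # outflow is constant, whatever the input rate
            while self.queue and self.queue[0] <= budget:
                budget -= self.queue[0]
                self.level -= self.queue.popleft()

    bucket = LeakyBucket(capacity=10, leak_rate=2)
    for _ in range(5):                   # a burst of five size-2 packets arrives at once
        bucket.arrive(2)
    for t in range(5):                   # the burst drains smoothly, 2 units per tick
        bucket.tick()
        print("after tick", t, "data queued:", bucket.level)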

Token bucket algorithm


The token bucket algorithm overcomes the problems we face with the leaky bucket algorithm. The leaky bucket's main limitation is that it cannot accommodate bursty data: it allows only an average (constant) rate of flow, and it does not take the idle time of the host into account.
Step 1 − For example, if a host is idle for 12 seconds and then wants to send data at a very high speed for the next 12 seconds, a leaky bucket spreads the transmission over the whole 24 seconds, maintaining only the average data rate.
Step 2 − The host therefore gains no advantage from having sat idle for 12 seconds. To fix this, we adopt the token bucket algorithm.
Step 3 − The token bucket algorithm is a modification of the leaky bucket in which the bucket holds tokens.
Step 4 − Tokens are generated at every clock tick, and for every packet to be transmitted the system must remove one or more tokens from the bucket. The token bucket algorithm thus allows idle hosts to accumulate credit for the future in the form of tokens.
For example, if a system generates 10 tokens per clock tick and the host is idle for 10 ticks, the bucket will contain 100 tokens. If the host then wants to send burst data, it can consume all 100 tokens at once to send 100 cells or bytes. Thus, the host can send burst data for as long as the bucket is not empty.
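The same idea as a minimal Python sketch (abstract units again; one token is spent per cell or byte sent) shows how idle time turns into credit for a later burst:

    class TokenBucket:
        def __init__(self, rate, capacity):
            self.rate = rate             # tokens generated per clock tick
            self.capacity = capacity     # bucket never holds more than this
            self.tokens = 0

        def tick(self):
            self.tokens = min(self.capacity, self.tokens + self.rate)

        def send(self, size):
            if size <= self.tokens:      # enough credit: spend tokens, transmit
                self.tokens -= size
                return True
            return False                 # otherwise the packet must wait

    bucket = TokenBucket(rate=10, capacity=1000)
    for _ in range(10):                  # host idles for 10 ticks...
        bucket.tick()
    print(bucket.send(100))              # ...then sends a 100-unit burst: True
    print(bucket.send(1))                # bucket now empty: False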

What is congestion?
A state occurring in the network layer when the message traffic is so heavy that it slows down network response time.

Effects of Congestion
 As delay increases, performance decreases.
 If delay increases, retransmission occurs, making the situation worse.
Congestion control algorithms
 Leaky Bucket Algorithm
Let us consider an example to understand
Imagine a bucket with a small hole in the bottom. No matter at what rate water enters the bucket, the outflow is at a constant rate. When the bucket is full, additional water entering it spills over the sides and is lost.

Similarly, each network interface contains a leaky bucket and the following steps are involved
in leaky bucket algorithm:
1. When host wants to send packet, packet is thrown into the bucket.
2. The bucket leaks at a constant rate, meaning the network interface transmits packets at a
constant rate.
3. Bursty traffic is converted to a uniform traffic by the leaky bucket.
4. In practice the bucket is a finite queue that outputs at a finite rate.
 Token bucket Algorithm
Need of token bucket Algorithm:-
The leaky bucket algorithm enforces output pattern at the average rate, no matter how bursty
the traffic is. So in order to deal with the bursty traffic we need a flexible algorithm so that the
data is not lost. One such algorithm is token bucket algorithm.
Steps of this algorithm can be described as follows:
1. At regular intervals, tokens are thrown into the bucket.
2. The bucket has a maximum capacity.
3. If there is a ready packet, a token is removed from the bucket, and the packet is sent.
4. If there is no token in the bucket, the packet cannot be sent.
Let’s understand with an example,
In figure (A) we see a bucket holding three tokens, with five packets waiting to be
transmitted. For a packet to be transmitted, it must capture and destroy one token. In figure

(B) we see that three of the five packets have gotten through, but the other two are stuck waiting for more tokens to be generated.
Ways in which token bucket is superior to leaky bucket:
The leaky bucket algorithm controls the rate at which packets are introduced into the network, but it is very conservative. The token bucket algorithm introduces some flexibility: tokens are generated at each tick (up to a certain limit), and for an incoming packet to be transmitted it must capture a token; transmission then takes place at the same rate. Hence bursts of packets can be transmitted whenever tokens are available, which adds a degree of flexibility to the system.
Formula: M × s = C + ρ × s
where
s – burst time (seconds)
M – maximum output rate
ρ – token arrival rate
C – capacity of the token bucket (bytes)
Let’s understand with an example,
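For instance (the numbers here are assumed example values), take C = 250 KB, ρ = 50 KB/s and M = 100 KB/s. Solving M × s = C + ρ × s for s gives

    s = C / (M − ρ) = 250 / (100 − 50) = 5 seconds

so the host can transmit at the full output rate M for at most 5 seconds before the bucket's accumulated credit is exhausted.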

What is Integrated Services


Integrated services (IntServ) refer to an architecture that ensures Quality of Service (QoS) on a network. These services allow the receiver to watch and listen to video and sound without interruption. Every router in the network implements integrated services, and every application that requires some kind of guarantee has to make an individual reservation.

Furthermore, the integrated-services architecture is implemented through four components: a signalling protocol, an admission-control routine, a classifier, and a packet scheduler. These services require an explicit signalling mechanism to convey information to the routers so that they can provide the requested resources.

IntServ was defined in IETF RFC 1633, which proposed the resource reservation protocol
(RSVP) as a working protocol for signaling in the IntServ architecture. This protocol assumes
that resources are reserved for every flow requiring QoS at every router hop in the path between
receiver and transmitter using end-to-end signaling.

The IntServ model for IP QoS architecture defines three classes of service based on applications' delay requirements (from highest performance to lowest):

· Guaranteed-service class - with bandwidth, bounded delay, and no-loss guarantees;

· Controlled-load service class - approximating best-effort service in a lightly loaded


network, which provides for a form of statistical delay service agreement (nominal delay)
that will not be violated more often than in an unloaded network;

· Best-effort service class - similar to that which the Internet currently offers, which is
further partitioned into three categories:

· interactive burst (e.g., Web),


· interactive bulk (e.g., FTP) and
· asynchronous (e.g., e-mail)

What is Differentiated Services


Differentiated services refer to a multiple service model that can satisfy many requirements. In
other words, it supports multiple mission-critical applications. Moreover, these services help to
minimize the burden of the network devices and also support the scaling of the network. Some
major differentiated services are as follows.

Traffic conditioning – Ensures that the traffic entering the DiffServ domain conforms to the agreed traffic profile.
Packet classification – Categorizes the packet within a specific group using the traffic descriptor.

Packet marking – Marks a packet, based on its classification, so that routers can identify its class.

Congestion management – Achieves queuing and traffic scheduling.

Congestion avoidance – Monitors traffic loads to minimize congestion; it involves packet dropping.

Difference Between Integrated Services and Differentiated Services


Definition
Integrated services refer to an architecture that specifies the elements to guarantee Quality of
Service (QoS) on a network while differentiated Services is a computer networking architecture
that specifies a simple and scalable mechanism for classifying and managing network traffic and
providing QoS on modern IP networks.

Functionality
Integrated services involve prior reservation of resources before achieving the requisite quality of service. Differentiated services, on the other hand, mark packets with a priority and send them to the network without prior reservation. This difference in functionality is the main difference between integrated services and differentiated services.

Scalability
Moreover, integrated services are not scalable, while differentiated services are scalable.

Setup
Also, another difference between integrated services and differentiated services is that integrated services involve a per-flow setup, while differentiated services involve a long-term setup.

Service scope

Furthermore, integrated services involve end to end service scope, whereas differentiated
services involve domain service scope.
Conclusion
In brief, integrated and differentiated services are two QoS architectures. The main difference between them is that integrated services involve prior reservation of resources before achieving the required quality of service, while differentiated services mark packets with a priority and send them into the network without requiring prior reservation.
