Unit4_CongestionControl

The document discusses congestion control and quality of service (QoS) in data traffic, focusing on mechanisms to prevent and manage network congestion. It outlines various congestion control techniques, including open-loop and closed-loop methods, as well as TCP congestion control strategies like slow start and congestion avoidance. Additionally, it covers QoS improvement techniques such as scheduling, traffic shaping, admission control, and resource reservation.


Congestion Control and

Quality of Service
DATA TRAFFIC

The main focus of congestion control and quality of service is data traffic. In congestion control we try to avoid traffic congestion. In quality of service, we try to create an appropriate environment for the traffic. So, before talking about congestion control and quality of service, we discuss the data traffic itself.

Topics discussed in this section:


Traffic Descriptor
Traffic Profiles
Fig. 1 Traffic descriptors (data rate versus time)

Average data rate: the sustained transmission rate the network allows, computed as the total bits sent divided by the elapsed time.
Peak data rate: the maximum data rate allowed for short bursts. Traffic exceeding this value may be dropped or delayed by network mechanisms (such as traffic shapers or policers).
Maximum burst size: how long a sender is allowed to transmit at the peak data rate, above the average. It permits temporary bursts, shown by the hump in the graph.
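The three descriptors can be checked against a measured trace. The sketch below is illustrative only (the function name and parameters are not from any standard API): it verifies that per-tick rate samples respect an average rate, a peak rate, and a maximum burst length.

```python
def check_profile(samples, avg_rate, peak_rate, max_burst):
    """Check per-tick rate samples (bits/s) against a traffic profile.

    avg_rate / peak_rate are in bits per second; max_burst is the
    longest run of consecutive ticks allowed above avg_rate.
    (Illustrative names, not a standard API.)
    """
    if sum(samples) / len(samples) > avg_rate:
        return False                      # sustained rate exceeded
    burst = 0
    for rate in samples:
        if rate > peak_rate:
            return False                  # instantaneous cap exceeded
        burst = burst + 1 if rate > avg_rate else 0
        if burst > max_burst:
            return False                  # burst lasted too long
    return True

# A two-tick burst at 4 Mbps fits a 2 Mbps average / 5 Mbps peak profile:
print(check_profile([1e6, 4e6, 4e6, 1e6, 1e6, 1e6], 2e6, 5e6, 2))  # True
```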
Fig. 2 Three traffic profiles
CONGESTION

Congestion in a network may occur if the load on the network (the number of packets sent to the network) is greater than the capacity of the network (the number of packets a network can handle). Congestion control refers to the mechanisms and techniques that control the congestion and keep the load below the capacity.
Congestion Control
◼ When too many packets are present in (a part of) the subnet,
performance degrades. This situation is called congestion.
◼ As traffic increases too far, the routers are no longer able to cope
and they begin losing packets.
◼ At very high traffic, performance collapses completely and almost
no packets are delivered.
◼ Causes of congestion:
◼ Slow processors.
◼ A high stream of packets sent from one of the senders.
◼ Insufficient memory.
◼ Low-bandwidth lines.
◼ Then what is congestion control?
◼ Congestion control has to do with making sure the subnet is able to
carry the offered traffic.
◼ Congestion control and flow control are often confused: congestion
control is a global issue of keeping the subnet below capacity, while
flow control regulates traffic between one sender and one receiver.
Both help reduce congestion.
CONGESTION CONTROL

Congestion control refers to techniques and mechanisms that can either prevent congestion, before it happens, or remove congestion, after it has happened. In general, we can divide congestion control mechanisms into two broad categories: open-loop congestion control (prevention) and closed-loop congestion control (removal).

Topics discussed in this section:


Open-Loop Congestion Control
Closed-Loop Congestion Control
Fig. 3 Congestion control categories
Open Loop Congestion Control
1. Retransmission Policy:
Retransmission may increase congestion in the network. To prevent this, retransmission timers must be tuned so that they avoid aggravating congestion while still keeping retransmission efficient.

2. Window Policy:
The type of window used at the sender's side may also affect congestion. In a Go-Back-N window, several packets are resent even though some of them may already have been received successfully at the receiver. This duplication adds load and can make congestion worse.
Therefore, a Selective-Repeat window should be adopted, as it resends only the specific packets that may have been lost.

3. Discarding Policy:
A good discarding policy lets routers prevent congestion by discarding corrupted or less-sensitive packets while still preserving the quality of the message.
In audio transmission, for example, routers can discard less-sensitive packets to relieve congestion without noticeably degrading the quality of the audio.
4. Acknowledgment Policy:
The receiver should acknowledge N packets at a time rather than acknowledging every single packet.

It should send an acknowledgment only when it also has a packet to send or when a timer expires, since acknowledgments are themselves part of the network load.

5. Admission Policy:
Switches in a flow should first check the resource requirements of a flow before admitting it to the network.

If there is congestion, or a risk of congestion, a router should refuse to establish a virtual-circuit connection to prevent further congestion.
Closed-Loop Congestion Control (congestion-removal policies)
◼ Backpressure
◼ Choke Packets
◼ Implicit Signaling
◼ Explicit Signaling
1. Backpressure:
◼ Backpressure is a technique in which a congested node stops receiving packets
from its upstream node.
◼ This may cause the upstream node or nodes to become congested in turn and
reject data from the nodes above them.
◼ Backpressure is a node-to-node congestion control technique that propagates in
the direction opposite to the data flow.
◼ It can be applied only to virtual-circuit networks, where each node knows its
upstream neighbour.

Fig. 4 Backpressure method for alleviating congestion
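The node-to-node propagation above can be sketched as a chain of bounded queues; the node names and capacity are illustrative assumptions, not part of any protocol.

```python
from collections import deque

CAPACITY = 3  # per-node queue limit (illustrative)

def step(nodes):
    """One forwarding step along a chain of queues with backpressure:
    a node forwards downstream only if the next queue has room, so a
    full queue blocks its upstream neighbour, node by node."""
    for i in range(len(nodes) - 2, -1, -1):       # downstream nodes first
        if nodes[i] and len(nodes[i + 1]) < CAPACITY:
            nodes[i + 1].append(nodes[i].popleft())

# The last node is congested (full), so pressure reaches back to the source:
nodes = [deque([1, 2, 3]), deque([4, 5, 6]), deque([7, 8, 9])]
step(nodes)
print([len(q) for q in nodes])  # [3, 3, 3] -- nothing moves; the source is throttled
```

Once the congested node drains a packet, the freed slot propagates upstream on the next step, which is exactly the reverse-direction signal the figure shows.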


2. Choke Packets:
◼ The choke-packet technique is applicable to both virtual-circuit and datagram
subnets.
◼ A choke packet is a packet sent by a node directly to the source to inform it of
congestion.
◼ Each router monitors its resources and the utilization of each of its output lines.
Whenever utilization exceeds a threshold set by the administrator, the router
sends a choke packet straight to the source as feedback to reduce its traffic.
◼ The intermediate nodes through which the packets have traveled are not warned
about the congestion.

Fig. 5 Choke packet
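The router-side decision reduces to a threshold check per output line. The sketch below is a minimal illustration; the data layout, line names, and the 0.8 threshold are assumptions, not from any router implementation.

```python
def monitor(utilization, threshold=0.8):
    """Return the sources that should receive a choke packet: any
    source whose traffic uses an output line whose utilization is
    above the administrator-set threshold. (Illustrative sketch.)"""
    choked = []
    for line, (util, sources) in utilization.items():
        if util > threshold:
            choked.extend(sources)   # warn the sources directly, not intermediate nodes
    return choked

# Line "eth1" is 90% utilized, so its sources get choke packets:
lines = {"eth0": (0.4, ["A"]), "eth1": (0.9, ["B", "C"])}
print(monitor(lines))  # ['B', 'C']
```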


3. Implicit Signaling
◼ In implicit signaling, there is no communication between the congested node and
the source.
◼ The source infers that there is congestion somewhere in the network.
◼ For example, when a sender transmits several packets and receives no
acknowledgment for a while, it may assume that the network is congested.

4. Explicit Signaling
In explicit signaling, a node that experiences congestion explicitly sends a signal to the source or the destination. The difference from the choke-packet technique is that the signal is carried in packets that also carry data, rather than in a separate packet. Explicit signaling can occur in either the forward or the backward direction.
• Forward Signaling: a signal is sent in the direction of the congestion. The destination is warned, and the receiver adopts policies to prevent further congestion.

• Backward Signaling: a signal is sent in the direction opposite to the congestion. The source is warned about the congestion and must slow down.
Congestion Control in TCP

To better understand the concept of congestion control, let us take TCP congestion control as an example. It has three phases:

1. Slow Start: Exponential Increase
2. Congestion Avoidance: Additive Increase
3. Fast Recovery
Slow Start: Exponential Increase
The slow-start algorithm is based on the idea that the size of the congestion window (cwnd)
starts with one maximum segment size (MSS), but it increases exponentially each time an
acknowledgment arrives. The MSS is a value negotiated during the connection establishment,
using an option of the same name.
Fig. 6 Slow start, exponential increase
Note

In the slow-start algorithm, the size of


the congestion window increases
exponentially until it reaches a
threshold.
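The exponential growth of cwnd can be sketched in a few lines. This is a simplified model (cwnd in units of MSS, one doubling per RTT), not a full TCP implementation.

```python
def slow_start(mss, ssthresh, rounds):
    """cwnd growth during TCP slow start: the window starts at one MSS
    and, because each ACKed segment adds one MSS, it doubles every RTT
    until it reaches the slow-start threshold (ssthresh)."""
    cwnd, history = mss, []
    for _ in range(rounds):
        history.append(cwnd)
        cwnd = min(cwnd * 2, ssthresh)   # exponential increase, capped at ssthresh
    return history

print(slow_start(mss=1, ssthresh=16, rounds=6))  # [1, 2, 4, 8, 16, 16]
```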
Congestion avoidance, additive increase
TCP defines another algorithm called congestion avoidance, which increases the cwnd
additively instead of exponentially. When the size of the congestion window reaches the
slow-start threshold in the case where cwnd = i, the slow-start phase stops and the additive
phase begins. In this algorithm, each time the whole “window” of segments is acknowledged,
the size of the congestion window is increased by one. A window is the number of segments
transmitted during RTT.

Fig. 7 Congestion avoidance, additive increase


Note

In the congestion avoidance algorithm,


the size of the congestion window
increases additively until
congestion is detected.
Note

An implementation reacts to congestion


detection in one of the following ways:
❏ If detection is by time-out, a new slow
start phase starts.
❏ If detection is by three ACKs, a new
congestion avoidance phase starts.
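The two reactions in the note can be sketched as a single function. This is a simplified Reno-style model (cwnd in units of MSS); real implementations add details such as the fast-recovery inflation of the window.

```python
def react(cwnd, ssthresh, event, mss=1):
    """TCP's reaction to congestion detection (simplified Reno-style):
    a timeout restarts slow start from one MSS, while three duplicate
    ACKs start a new congestion-avoidance phase from half the window."""
    ssthresh = max(cwnd // 2, 2 * mss)   # threshold is halved either way
    if event == "timeout":
        cwnd = mss                        # new slow-start phase
    elif event == "three_dup_acks":
        cwnd = ssthresh                   # new congestion-avoidance phase
    return cwnd, ssthresh

print(react(cwnd=20, ssthresh=16, event="timeout"))         # (1, 10)
print(react(cwnd=20, ssthresh=16, event="three_dup_acks"))  # (10, 10)
```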
Fig. 8 Congestion example
QUALITY OF SERVICE

Quality of service (QoS) is an internetworking issue that has been discussed more than defined. We can informally define quality of service as something a flow seeks to attain.

Topics discussed in this section:


Flow Characteristics
Fig. 9 Flow characteristics
TECHNIQUES TO IMPROVE QoS

We briefly discuss four common methods: scheduling, traffic shaping, admission control, and resource reservation.

Topics discussed in this section:


Scheduling
Traffic Shaping
Resource Reservation
Admission Control
Scheduling
Scheduling is the process of determining the order in which packets are
transmitted from queues in a network device (like a router). Good scheduling
improves performance, fairness, and efficiency.

Scheduling approaches: FIFO queuing, priority queuing, and weighted fair queuing.
Fig. 10 FIFO queue
Fig. 11 Priority queuing
Fig. 12 Weighted fair queuing
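Weighted fair queuing can be approximated per round: each queue sends up to its weight's worth of packets, so higher-weight flows get a proportionally larger share of the link. The sketch below uses packet counts rather than bytes, which is an assumption that simplifies real WFQ.

```python
from collections import deque

def weighted_round(queues, weights):
    """One round of weighted fair queuing (packet-count approximation):
    queue i may send up to weights[i] packets per round, so bandwidth
    is shared in proportion to the weights."""
    sent = []
    for q, w in zip(queues, weights):
        for _ in range(w):
            if q:
                sent.append(q.popleft())
    return sent

q1 = deque(["a1", "a2", "a3"])   # flow with weight 2
q2 = deque(["b1", "b2", "b3"])   # flow with weight 1
print(weighted_round([q1, q2], [2, 1]))  # ['a1', 'a2', 'b1']
```

Setting all weights to 1 reduces this to plain round-robin, and serving one queue exhaustively before the next models priority queuing.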
Traffic Shaping
Traffic shaping controls the rate at which packets are sent into the network. It smooths out bursty traffic, avoids congestion, and maintains a steady flow.

Traffic-shaping approaches: the leaky bucket and the token bucket.
Fig. 13 Leaky bucket
Fig. 14 Leaky bucket implementation
Note

A leaky bucket algorithm shapes bursty


traffic into fixed-rate traffic by averaging
the data rate. It may drop the packets if
the bucket is full.
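The note above can be sketched per tick: arrivals fill the bucket, the bucket drains at a fixed rate, and overflow is dropped. The tick-based model and parameter names are illustrative assumptions.

```python
def leaky_bucket(arrivals, rate, capacity):
    """Leaky-bucket shaper: arrivals (packets per tick) fill the bucket,
    which drains at a fixed `rate` per tick; anything beyond `capacity`
    is dropped. The output is the constant drain, so bursts are
    smoothed into fixed-rate traffic."""
    level, out, dropped = 0, [], 0
    for a in arrivals:
        level += a
        if level > capacity:
            dropped += level - capacity   # bucket overflow: packets lost
            level = capacity
        sent = min(level, rate)
        out.append(sent)                  # fixed-rate output
        level -= sent
    return out, dropped

# A burst of 5 is smoothed to at most 2 per tick; 1 packet overflows:
out, dropped = leaky_bucket([5, 0, 0, 3, 0], rate=2, capacity=4)
print(out, dropped)  # [2, 2, 0, 2, 1] 1
```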
Note

The token bucket allows bursty traffic at


a regulated maximum rate.
Fig. 15 Token bucket
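The contrast with the leaky bucket can be seen in a matching sketch: tokens accumulate while the sender is idle, so a later burst can go out at once, up to the number of saved tokens. The tick-based model is an illustrative assumption; unsent packets are simply left to the caller rather than queued.

```python
def token_bucket(arrivals, rate, capacity):
    """Token-bucket shaper: tokens accumulate at `rate` per tick, up to
    `capacity`; each tick may send at most as many packets as there are
    tokens, so idle time is banked and bursts are allowed at a
    regulated maximum."""
    tokens, out = 0, []
    for a in arrivals:
        tokens = min(tokens + rate, capacity)  # bank tokens while idle
        sent = min(a, tokens)                  # burst limited by saved tokens
        tokens -= sent
        out.append(sent)
    return out

# Three idle ticks bank tokens, then a burst of 5 gets 4 out at once:
print(token_bucket([0, 0, 0, 5, 5], rate=1, capacity=4))  # [0, 0, 0, 4, 1]
```

Compare with the leaky bucket, whose output never exceeds the fixed drain rate regardless of how long the sender was idle.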
Resource Reservation and
Admission Control
◼ Buffers, CPU time, and bandwidth are resources that can be
reserved for a particular flow for a particular time to
maintain the QoS.
◼ The mechanism routers use to accept or reject flows based on
their flow specifications is called admission control.
