Congestion Control and Quality of Service

24-1 DATA TRAFFIC
24-2 CONGESTION
Congestion Control Introduction:
Congestion control and flow control are often confused, but both help reduce congestion.
Congestion control is a global issue: it involves every router and host within the subnet.
Flow control is point-to-point in scope: it involves just the sender and the receiver.
General Principles of Congestion Control
Three-step approach to apply congestion control:
1. Monitor the system to detect when and where congestion occurs.
2. Pass this information to places where action can be taken.
3. Adjust system operation to correct the problem.
Figure 24.5 Congestion control categories
Open-loop algorithms are further divided into ones that act at the source versus ones that act at the destination.
Closed-loop algorithms are also divided into two subcategories:
• Explicit feedback
• Implicit feedback
Retransmission Policy
Retransmission is sometimes unavoidable: if the sender believes that a sent packet is lost or corrupted, the packet needs to be retransmitted.
Retransmission in general may increase congestion in the network. However, a good retransmission policy can prevent congestion.
The retransmission policy and the retransmission timers must be designed to optimize efficiency and at the same time prevent congestion. For example, the retransmission policy used by TCP (explained later) is designed to prevent or alleviate congestion.
Acknowledgment Policy
The acknowledgment policy imposed by the receiver may also affect congestion.
• If the receiver does not acknowledge every packet it receives, it may slow down the sender and help prevent congestion.
• Several approaches are used in this case:
  • A receiver may send an acknowledgment only if it has a packet to be sent or a special timer expires.
  • A receiver may decide to acknowledge only N packets at a time. Acknowledgments are themselves part of the load in a network, so sending fewer acknowledgments imposes less load on the network.
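The "acknowledge only N packets at a time" idea can be sketched as follows; this is a minimal illustration (the function name and cumulative-ACK assumption are mine, not from the text):

```python
def ack_numbers(received_seqs, n=4):
    """Hypothetical helper: return the sequence numbers a receiver would
    acknowledge if it sends one cumulative ACK per n packets received.
    The other n-1 packets generate no ACK traffic at all."""
    return [seq for i, seq in enumerate(received_seqs, start=1) if i % n == 0]

# Ten packets arrive but only two ACKs leave the receiver,
# reducing the ACK load on the network by a factor of n.
print(ack_numbers(list(range(1, 11))))  # [4, 8]
```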
Discarding Policy / Load Shedding
When buffers become full, routers simply discard packets.
Which packet is chosen as the victim depends on the application and on the error strategy used in the data link layer.
For a file transfer, for example, a router cannot discard older packets, since this would cause a gap in the received data.
For real-time voice or video, it is probably better to throw away old data and keep new packets.
One option is to have the application mark packets with a discard priority.
Warning Bit or Backpressure:
The DECNET architecture (Digital Equipment Corporation's network architecture for connecting minicomputers) signaled the warning state by setting a special bit in the packet's header. The source then cut back on traffic.
The source monitored the fraction of acknowledgments with the bit set and adjusted its transmission rate accordingly.
As long as warning bits continued to flow in, the source continued to decrease its transmission rate. When they slowed to a trickle, it increased its transmission rate.
Disadvantage: since every router along the path could set the warning bit, the source increased its traffic only when no router at all was in trouble.
Figure 24.6 Backpressure method for alleviating congestion
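The warning-bit rate adjustment described above can be sketched as a simple policy function; the constants (decrease factor, increase step, threshold) are illustrative assumptions, not values from DECNET:

```python
def adjust_rate(rate, warned_fraction, decrease_factor=0.875,
                increase_step=1.0, threshold=0.5):
    """Hypothetical DECNET-style policy: multiplicative decrease while a
    significant fraction of ACKs carry the warning bit set, additive
    increase once the warnings slow to a trickle."""
    if warned_fraction >= threshold:
        return rate * decrease_factor   # warnings still flowing in: cut back
    return rate + increase_step         # warnings rare: probe upward

# While congested routers keep setting the bit, the rate shrinks;
# once the bits stop, the rate creeps back up.
print(adjust_rate(100.0, warned_fraction=1.0))  # 87.5
print(adjust_rate(100.0, warned_fraction=0.0))  # 101.0
```

The multiplicative-decrease / additive-increase shape mirrors how such feedback schemes typically converge without oscillating wildly.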
Choke Packets
A more direct way of telling the source to slow down.
A choke packet is a control packet generated at a congested node and transmitted to the source to restrict traffic flow.
The source, on receiving the choke packet, must reduce its transmission rate by a certain percentage.
An example of a choke packet is the ICMP Source Quench packet.
Figure 24.7 Choke packet
Hop-by-Hop Choke Packets
This technique is an improvement over the choke packet method. At high speeds or over long distances, sending a choke packet back to the source does not help much, because by the time the choke packet reaches the source, many more packets destined for the same destination will already have left it.
To remedy this, hop-by-hop choke packets are used: the choke packet takes effect at every hop it passes through. Each hop reduces its transmission even before the choke packet arrives at the source, giving quicker relief at the point of congestion.
24-6 TECHNIQUES TO IMPROVE QoS
Scheduling techniques:
• FIFO queuing
• Priority queuing
• Weighted fair queuing
• RED
Random Early Discard (RED)
This is a proactive approach in which the
router discards one or more packets before
the buffer becomes completely full.
Each time a packet arrives, the RED
algorithm computes the average queue
length, avg.
If avg is lower than some lower threshold,
congestion is assumed to be minimal or
non-existent and the packet is queued.
RED, cont.
If avg is greater than some upper threshold, congestion is assumed to be serious and the packet is discarded.
If avg is between the two thresholds, this might indicate the onset of congestion; the packet is then discarded with a probability that grows as avg approaches the upper threshold.
Traffic Shaping
Another method of congestion control is to
“shape” the traffic before it enters the
network.
Traffic shaping controls the rate at which packets are sent (not just how many). It is used in ATM and Integrated Services networks.
At connection set-up time, the sender and
carrier negotiate a traffic pattern (shape).
Two traffic shaping algorithms are:
Leaky Bucket
Token Bucket
The Leaky Bucket Algorithm
(a) A leaky bucket with water. (b) A leaky bucket with packets.
Leaky Bucket Algorithm, cont.
The leaky bucket enforces a constant output (average) rate regardless of the burstiness of the input. It does nothing when the input is idle.
The host injects one packet per clock tick onto the network. This results in a uniform flow of packets, smoothing out bursts and reducing congestion.
When packets are all the same size (as with ATM cells), one packet per tick is fine. For variable-length packets, though, it is better to allow a fixed number of bytes per tick. E.g., 1024 bytes per tick allows one 1024-byte packet, two 512-byte packets, or four 256-byte packets per tick.
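The byte-counting variant can be sketched with a simple per-tick budget; this is a minimal illustration (the queue of packet sizes and the function name are my assumptions):

```python
from collections import deque

def leaky_bucket_tick(queue, bytes_per_tick=1024):
    """One clock tick of the byte-counting leaky bucket: drain packets
    from the head of the queue while they fit in the per-tick byte
    budget; return the sizes of the packets sent this tick."""
    budget = bytes_per_tick
    sent = []
    while queue and queue[0] <= budget:
        size = queue.popleft()   # packet leaves the bucket
        budget -= size
        sent.append(size)
    return sent

# A burst of four packets arrives at once but leaves spread over ticks.
q = deque([512, 512, 256, 1024])
print(leaky_bucket_tick(q))  # [512, 512]  -> exactly 1024 bytes
print(leaky_bucket_tick(q))  # [256]       -> the 1024-byte packet must wait
print(leaky_bucket_tick(q))  # [1024]
```

Note the budget resets every tick and unused bytes are not carried over, which is what keeps the output rate constant.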
Figure 24.20 Leaky bucket implementation
Token Bucket Algorithm
In contrast to the leaky bucket, the token bucket algorithm allows the output rate to vary, depending on the size of the burst.
In the token bucket algorithm, the bucket holds tokens. To transmit a packet, the host must capture and destroy one token.
Tokens are generated by a clock at the rate of one token every t seconds.
Idle hosts can capture and save up tokens (up to the maximum size of the bucket) in order to send larger bursts later.
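The token bucket can be sketched as follows; the class interface and the capacity value are illustrative assumptions, not from the text:

```python
class TokenBucket:
    """One-token-per-packet token bucket: tokens accrue at a fixed clock
    rate, up to the bucket capacity, and a packet may be sent only by
    capturing (removing) one token."""

    def __init__(self, capacity, tokens_per_tick=1):
        self.capacity = capacity          # maximum tokens the bucket holds
        self.rate = tokens_per_tick       # tokens added per clock tick
        self.tokens = 0

    def tick(self):
        # The clock adds tokens; an idle host accumulates them,
        # but never beyond the bucket capacity.
        self.tokens = min(self.capacity, self.tokens + self.rate)

    def try_send(self):
        # Transmitting a packet captures and destroys one token.
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# An idle host saves tokens for 5 ticks, then sends a burst of 5
# packets back to back; further sends must wait for new tokens.
tb = TokenBucket(capacity=8)
for _ in range(5):
    tb.tick()
print(sum(tb.try_send() for _ in range(10)))  # 5
```

This is exactly where the token bucket differs from the leaky bucket: saved tokens let the output briefly exceed the average rate, with the capacity bounding the burst size.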
Figure 24.21 Token bucket