
Telecommunication Networks

Dr. Bhagirath Sahu


Assistant Professor, JIIT, Noida
CONGESTION CONTROL

Reference:

B. A. Forouzan, Data Communications and Networking, 4th Edition, TMH.
Chapter 24 (pages 761 – 773)
DATA TRAFFIC
 The main focus of congestion control and quality of service is data
traffic.
 In congestion control we try to avoid traffic congestion.
 In quality of service, we try to create an appropriate environment for
the traffic.
 So, before talking about congestion control and quality of service, we
discuss the data traffic itself.

Topics discussed in this section:

 Traffic Descriptor
 Traffic Profiles
Traffic Descriptor
Traffic descriptors are quantitative values that represent a data flow.

Average Data Rate


The average data rate is the number of bits sent during a period of time, divided by
the number of seconds in that period.

Peak Data Rate


The peak data rate defines the maximum data rate of the traffic.

Maximum Burst Size


The maximum burst size normally refers to the maximum length of time the traffic is
generated at the peak rate.
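These descriptors can be computed directly from a packet trace. The sketch below is a minimal illustration (the trace format and the one-second bucketing for the peak rate are assumptions, and the maximum burst size is omitted for brevity):

```python
# Sketch: computing traffic descriptors from a packet trace.
# Each trace entry is (timestamp in seconds, packet size in bits); the
# one-second bucketing used for the peak rate is an illustrative assumption.

def traffic_descriptors(trace):
    if not trace:
        return 0.0, 0.0
    duration = (max(t for t, _ in trace) - min(t for t, _ in trace)) or 1.0
    total_bits = sum(bits for _, bits in trace)

    # Average data rate: total bits divided by the measurement period.
    average_rate = total_bits / duration

    # Peak data rate: highest rate observed in any one-second window.
    buckets = {}
    for t, bits in trace:
        buckets[int(t)] = buckets.get(int(t), 0) + bits
    peak_rate = max(buckets.values())

    return average_rate, peak_rate

trace = [(0.0, 8000), (0.2, 8000), (0.4, 8000), (2.0, 8000), (4.0, 8000)]
avg, peak = traffic_descriptors(trace)
print(f"average rate = {avg:.0f} b/s, peak rate = {peak} b/s")
```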
Traffic Profiles

Figure: Three traffic profiles


Traffic Profiles
Constant Bit Rate
A constant-bit-rate (CBR), or a fixed-rate, traffic model has a data rate that
does not change.
Variable Bit Rate
In the variable-bit-rate (VBR) category, the rate of the data flow changes
over time, with the changes smooth rather than sudden and sharp.
Bursty
In the bursty data category, the data rate changes suddenly in a very
short time.
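As a rough illustration, the three profiles can be written down as rate-versus-time samples (the numbers below are invented):

```python
# Sketch: illustrative rate samples (kbps, one sample per second) for the
# three traffic profiles; the numbers are invented for illustration only.
cbr    = [100] * 10                                          # constant bit rate
vbr    = [100, 110, 120, 130, 125, 118, 110, 105, 100, 95]   # smooth changes
bursty = [10, 10, 10, 400, 400, 10, 10, 10, 380, 10]         # sudden spikes

for name, profile in [("CBR", cbr), ("VBR", vbr), ("Bursty", bursty)]:
    print(f"{name:7s} avg={sum(profile)/len(profile):6.1f} kbps  peak={max(profile)} kbps")
```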
CONGESTION
 Congestion in a network may occur if the load on the network (the
number of packets sent to the network) is greater than the capacity
of the network (the number of packets a network can handle).

 Congestion control refers to the mechanisms and techniques to


control the congestion and keep the load below the capacity.
CONGESTION ON A NETWORK
Queues in a router

Congestion in a network or internetwork occurs because routers and switches have queues (buffers) that hold the packets before and after processing.
A router has an input queue and an output queue for each interface. When a packet
arrives at the incoming interface, it undergoes three steps before departing:

1. The packet is put at the end of the input queue while waiting to be checked.
2. The processing module of the router removes the packet from the input queue
once it reaches the front of the queue and uses its routing table and the
destination address to find the route.
3. The packet is put in the appropriate output queue and waits its turn to be sent.
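The three steps can be pictured with a small queue model. The sketch below is a simplified illustration (single input queue, a made-up routing table), not actual router code:

```python
from collections import deque

# Simplified sketch of the three steps a packet goes through in a router:
# input queue -> route lookup -> output queue.  The routing table and
# packet format are invented for illustration.
routing_table = {"10.0.1.0/24": "if1", "10.0.2.0/24": "if2"}

input_queue = deque()
output_queues = {"if1": deque(), "if2": deque()}

def arrive(packet):
    # Step 1: the packet waits at the end of the input queue.
    input_queue.append(packet)

def process_one():
    if not input_queue:
        return
    # Step 2: remove the packet from the front and consult the routing table.
    packet = input_queue.popleft()
    out_if = routing_table[packet["dst_prefix"]]
    # Step 3: the packet waits in the appropriate output queue for its turn.
    output_queues[out_if].append(packet)

arrive({"dst_prefix": "10.0.2.0/24", "payload": b"hello"})
process_one()
print({name: len(q) for name, q in output_queues.items()})   # {'if1': 0, 'if2': 1}
```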
Network Performance
Congestion control involves two factors that measure the performance of a
network: delay and throughput.
Network Performance
Delay Versus Load
Note that when the load is much less than the capacity of the network, the delay is
at a minimum. However, when the load reaches the network capacity, the delay
increases sharply because we now need to add the waiting time in the queues (for
all routers in the path) to the total delay. Note that the delay becomes infinite when
the load is greater than the capacity.

Throughput Versus Load


Throughput in a network is defined as the number of packets passing through the network in
a unit of time. Notice that when the load is below the capacity of the network, the
throughput increases proportionally with the load. We expect the throughput to
remain constant after the load reaches the capacity, but instead the throughput
declines sharply. The reason is the discarding of packets by the routers. When the
load exceeds the capacity, the queues become full and the routers have to discard
some packets. Discarding packets does not reduce the number of packets in the
network because the sources retransmit the packets, using time-out mechanisms,
when the packets do not reach the destinations.
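These qualitative curves can be reproduced with a toy model. The sketch below uses an M/M/1-style delay formula and a crude throughput-collapse rule above capacity; both are illustrative assumptions rather than material from the lecture:

```python
# Toy model of delay and throughput versus load.  The capacity value and
# the "collapse" rule above capacity are illustrative assumptions; real
# behaviour depends on buffer sizes and retransmission policy.
CAPACITY = 100.0          # packets per second the network can handle

def delay(load):
    # M/M/1-style queueing delay: grows sharply as load nears capacity,
    # effectively infinite once load >= capacity.
    if load >= CAPACITY:
        return float("inf")
    return 1.0 / (CAPACITY - load)

def throughput(load):
    # Below capacity, throughput tracks the load.  Above capacity, queues
    # overflow, packets are discarded and retransmitted, and useful
    # throughput declines (modelled here with a simple decay factor).
    if load <= CAPACITY:
        return load
    return CAPACITY * (CAPACITY / load)

for load in (10, 50, 90, 99, 120, 200):
    print(f"load={load:4} delay={delay(load):8.3f} throughput={throughput(load):6.1f}")
```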
CONGESTION CONTROL
Congestion control refers to techniques and mechanisms that can either prevent
congestion, before it happens, or remove congestion, after it has happened.

In general, we can divide congestion control mechanisms into two broad categories: open-loop congestion control (prevention) and closed-loop congestion control (removal).
Open-Loop Mechanisms
• In open-loop congestion control, policies are applied to prevent congestion before it
happens.
• In these mechanisms, congestion control is handled by either the source or the
destination.
• We give a brief list of policies that can prevent congestion.

Retransmission Policy
• If the sender suspects that a sent packet is lost or corrupted, the packet needs to be
retransmitted. The retransmission policy and the retransmission timers must be
designed to optimize efficiency and at the same time prevent congestion.

Window Policy
• The Selective Repeat window is better than the Go-Back-N window for congestion
control. The Selective Repeat window tries to send the specific packets that have been
lost or corrupted.
Open-Loop Mechanisms
Acknowledgment Policy
• If the receiver does not acknowledge every packet it receives, it may slow down
the sender and help prevent congestion. A receiver may send an acknowledgment
only if it has a packet to be sent or a special timer expires. A receiver may decide
to acknowledge only N packets at a time. Sending fewer acknowledgments means
imposing less load on the network.
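As one concrete illustration, a receiver might acknowledge only every Nth packet or when a timer expires. The sketch below is an assumption-laden illustration of such a policy, not TCP's actual delayed-ACK algorithm:

```python
import time

# Sketch of an acknowledgment policy: ACK only every Nth received packet,
# or when a timer expires, to put fewer ACKs on the network.
# N and the timeout value are invented for illustration.
ACK_EVERY_N = 4
ACK_TIMEOUT = 0.5          # seconds

class Receiver:
    def __init__(self):
        self.unacked = 0
        self.last_ack_time = time.monotonic()

    def on_packet(self):
        self.unacked += 1
        timed_out = time.monotonic() - self.last_ack_time >= ACK_TIMEOUT
        if self.unacked >= ACK_EVERY_N or timed_out:
            self.send_ack()

    def send_ack(self):
        print(f"ACK covering {self.unacked} packet(s)")
        self.unacked = 0
        self.last_ack_time = time.monotonic()

rx = Receiver()
for _ in range(9):
    rx.on_packet()         # prints an ACK after the 4th and 8th packets
```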

Discarding Policy
• A good discarding policy by the routers may prevent congestion and at the same
time may not harm the integrity of the transmission.

Admission Policy
• Switches first check the resource requirement of a flow before
admitting it to the network. A router can deny establishing a virtual circuit
connection if there is congestion in the network or if there is a possibility of
future congestion.
Closed-Loop Mechanisms
Backpressure

Figure: Backpressure method for alleviating congestion

 The technique of backpressure refers to a congestion control mechanism in which a congested node stops receiving data from the immediate upstream node or nodes.
 Backpressure is a node-to-node congestion control that starts with a node and
propagates, in the opposite direction of data flow, to the source.
 Node III in the figure has more input data than it can handle. It drops some packets in
its input buffer and informs node II to slow down.
 Node II, in turn, may be congested because it is slowing down the output flow of data.
If node II is congested, it informs node I to slow down, which in turn may create
congestion. If so, node I informs the source of data to slow down.
 This, in time, alleviates the congestion.
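A toy simulation of this hop-by-hop propagation might look like the sketch below (node names, buffer sizes, and the slow-down message are illustrative assumptions):

```python
# Sketch of backpressure: a congested node drops packets and tells its
# immediate upstream neighbour to slow down; the warning propagates hop
# by hop toward the source.  Names, buffer sizes, and the "slow down"
# message are invented for illustration.

class Node:
    def __init__(self, name, upstream=None, buffer_limit=5):
        self.name = name
        self.upstream = upstream          # the node we receive data from
        self.buffer = []
        self.buffer_limit = buffer_limit

    def receive(self, packet):
        if len(self.buffer) >= self.buffer_limit:
            # Congested: drop the packet and push back on the upstream node.
            print(f"{self.name} congested -> asking {self.upstream.name} to slow down")
            self.upstream.slow_down()
        else:
            self.buffer.append(packet)

    def slow_down(self):
        if self.upstream is None:
            print(f"{self.name} reduces its sending rate")
        elif len(self.buffer) >= self.buffer_limit - 1:
            # Slowing our output congests us too, so propagate upstream.
            print(f"{self.name} congested -> asking {self.upstream.name} to slow down")
            self.upstream.slow_down()

source   = Node("source")
node_i   = Node("node I",   upstream=source)
node_ii  = Node("node II",  upstream=node_i)
node_iii = Node("node III", upstream=node_ii, buffer_limit=2)

node_i.buffer  = ["p"] * 4                # nodes I and II are already nearly full
node_ii.buffer = ["p"] * 4
for n in range(3):
    node_iii.receive(f"packet-{n}")
```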
Closed-Loop Mechanisms
Choke Packet

• A choke packet is a packet sent by a node to the source to inform it of congestion.

Difference between the backpressure and choke packet methods:


• In backpressure, the warning is from one node to its upstream node, although the
warning may eventually reach the source station.
• In the choke packet method, the warning is from the router, which has encountered
congestion, to the source station directly. The intermediate nodes through which the
packet has travelled are not warned.
Closed-Loop Mechanisms
Implicit Signaling
• In implicit signaling, there is no communication between the congested node or nodes
and the source. The source guesses that there is congestion somewhere in the
network from other symptoms.

Explicit Signaling
• The node that experiences congestion can explicitly send a signal to the source or
destination.
• The explicit signaling method, however, is different from the choke packet method.
• In the choke packet method, a separate packet is used for this purpose; in the explicit
signaling method, the signal is included in the packets that carry data.
Explicit signaling can occur in either the forward or the backward direction.
• Backward Signaling A bit can be set in a packet moving in the direction opposite to the
congestion. This bit can warn the source that there is congestion and that it needs to
slow down to avoid the discarding of packets.
• Forward Signaling A bit can be set in a packet moving in the direction of the
congestion. This bit can warn the destination that there is congestion. The receiver in
this case can use policies, such as slowing down the acknowledgments, to alleviate the
congestion.
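The forward and backward bits can be pictured as flags carried in ordinary packets. The sketch below only illustrates the idea; the field names are invented, and real mechanisms such as Frame Relay's FECN/BECN bits differ in detail:

```python
from dataclasses import dataclass

# Sketch of explicit signaling: a congested router sets a bit in packets
# that are already flowing, instead of generating a separate choke packet.
# Field names are invented for illustration.

@dataclass
class Packet:
    payload: bytes
    fwd_congestion: bool = False   # set on packets moving toward the destination
    bwd_congestion: bool = False   # set on packets moving toward the source

def router_forward(packet: Packet, congested: bool) -> Packet:
    # Forward signaling: warn the destination so it can, for example,
    # slow down its acknowledgments.
    if congested:
        packet.fwd_congestion = True
    return packet

def router_backward(packet: Packet, congested: bool) -> Packet:
    # Backward signaling: warn the source so it can reduce its sending rate.
    if congested:
        packet.bwd_congestion = True
    return packet

data = router_forward(Packet(b"data"), congested=True)
ack  = router_backward(Packet(b"ack"), congested=True)
print(data.fwd_congestion, ack.bwd_congestion)   # True True
```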
Congestion Control in TCP
 TCP uses congestion control to avoid congestion or alleviate congestion in
the network.

Congestion Window (cwnd)


 The sender's window size is determined not only by the receiver but also by
congestion in the network.

 The sender has two pieces of information: the receiver-advertised window size and the congestion window size. The actual size of the window is the minimum of these two.

 Actual window size = minimum (rwnd, cwnd)
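In code form, this rule is a single min operation; the helper below is a trivial illustration (names and MSS units are assumptions):

```python
# The sender's usable window is limited by both the receiver (rwnd) and
# the network (cwnd); names and units (MSS) are illustrative.
def actual_window(rwnd: int, cwnd: int) -> int:
    return min(rwnd, cwnd)

print(actual_window(rwnd=20, cwnd=8))   # 8: the network is the bottleneck
print(actual_window(rwnd=4,  cwnd=8))   # 4: the receiver is the bottleneck
```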


Congestion Control in TCP
Congestion Policy

 TCP's general policy for handling congestion is based on three phases:


1. slow start,
2. congestion avoidance, and
3. congestion detection.
 In the slow-start phase, the sender starts with a very slow rate of
transmission, but increases the rate rapidly to reach a threshold.
 When the threshold is reached, the rate of increase is reduced to avoid congestion.
 Finally if congestion is detected, the sender goes back to the slow-start or
congestion avoidance phase based on how the congestion is detected.
Congestion Control in TCP
Slow Start: Exponential Increase
• This algorithm is based on the idea that the size of the congestion window
(cwnd) starts with one maximum segment size (MSS).
• The size of the window increases by one MSS each time an acknowledgment is received. As the name implies, the window starts slowly but grows exponentially. To show the idea, let us look at the figure.
Assumptions:
1. rwnd is much higher than cwnd, so that the sender window size always equals cwnd.
2. Each segment is acknowledged individually.
Slow Start
• The sender starts with cwnd =1 MSS. This means that the sender can send
only one segment.
• After receipt of the acknowledgment for segment 1, the size of the
congestion window is increased by 1, which means that cwnd is now 2.
• Now two more segments can be sent. When each acknowledgment is
received, the size of the window is increased by 1 MSS. When all seven
segments are acknowledged, cwnd = 8.

 We need to mention that if there are delayed ACKs, the increase in the size of the window is less than a power of 2.

 In the slow-start algorithm, the size of the congestion window increases exponentially until it reaches a threshold.
 Slow start cannot continue indefinitely. There must be a threshold to
stop this phase. The sender keeps track of a variable named ssthresh
(slow-start threshold).
 When the size of the window (in bytes) reaches this threshold, slow start
stops and the next phase starts.
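A minimal sketch of this exponential growth, assuming (as above) that every segment is acknowledged individually and that rwnd never limits the sender (function and variable names are illustrative):

```python
# Slow start sketch: cwnd (in MSS) grows by 1 for every ACK received,
# which doubles it per round trip, until it reaches ssthresh.
def slow_start_rounds(ssthresh: int = 16):
    cwnd = 1
    rounds = [cwnd]
    while cwnd < ssthresh:
        acks_this_round = cwnd          # one ACK per segment sent
        cwnd += acks_this_round         # +1 MSS per ACK -> doubles per round
        cwnd = min(cwnd, ssthresh)      # exponential growth stops at ssthresh
        rounds.append(cwnd)
    return rounds

print(slow_start_rounds())   # [1, 2, 4, 8, 16]
```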
Congestion Avoidance
Congestion Avoidance: Additive Increase
 If we start with the slow-start algorithm, the size of the congestion window
increases exponentially.
 To avoid congestion before it happens, one must slow down this exponential
growth.
 TCP defines another algorithm called congestion avoidance, which
undergoes an additive increase instead of an exponential one.
 When the size of the congestion window reaches the slow-start threshold,
the slow-start phase stops and the additive phase begins.
 In this algorithm, each time the whole window of segments is acknowledged
(one round), the size of the congestion window is increased by 1.
 The figure shows the idea.
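A minimal sketch of the per-round additive growth, under the same conventions as the slow-start sketch above (cwnd in MSS, names illustrative):

```python
# Congestion avoidance sketch: once cwnd reaches ssthresh, cwnd grows by
# only 1 MSS per round (per whole window acknowledged), not per ACK.
def additive_increase(cwnd: int, rounds: int):
    history = [cwnd]
    for _ in range(rounds):
        cwnd += 1                       # +1 MSS per round trip
        history.append(cwnd)
    return history

print(additive_increase(cwnd=16, rounds=4))   # [16, 17, 18, 19, 20]
```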
Congestion Avoidance
Note:
• In the congestion
avoidance algorithm,
the size of the
congestion window
increases additively
until congestion is
detected.

Figure: Congestion Avoidance: Additive Increase


Congestion Detection
Congestion Detection: Multiplicative Decrease
 If congestion occurs, the congestion window size must be decreased.
 The only way the sender can guess that congestion has occurred is by the
need to retransmit a segment.
 However, retransmission can occur in one of two cases:
1. When a timer times out
 If a time-out occurs, there is a stronger possibility of congestion; a
segment has probably been dropped in the network, and there is
no news about the sent segments.
 In this case TCP reacts strongly:
a) It sets the value of the threshold to one-half of the current
window size.
b) It sets cwnd to the size of one segment.
c) It starts the slow-start phase again.
Congestion Detection
2. When three ACKs are received.
 If three ACKs are received, there is a weaker possibility of
congestion; a segment may have been dropped, but some segments
after that may have arrived safely since three ACKs are received.
This is called fast retransmission and fast recovery.
 In this case, TCP has a weaker reaction:
a) It sets the value of the threshold to one-half of the current
window size.
b) It sets cwnd to the value of the threshold (some implementations
add three segment sizes to the threshold).
c) It starts the congestion avoidance phase.

 In both cases, the size of the threshold is dropped to one-half, a multiplicative decrease.
Note: An implementation reacts to congestion detection in one of the following
ways:
 If detection is by time-out, a new slow start phase starts.
 If detection is by three ACKs, a new congestion avoidance phase starts.
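As a rough sketch, the two reactions can be written as two small state updates (names are illustrative; real TCP implementations differ in details such as fast recovery):

```python
# Sketch of TCP's reaction to the two congestion signals.
# cwnd and ssthresh are in MSS units; names are illustrative.
def on_timeout(cwnd):
    ssthresh = cwnd // 2                    # threshold = one-half the current window
    return 1, ssthresh, "slow start"        # cwnd back to one segment, restart slow start

def on_three_dup_acks(cwnd):
    ssthresh = cwnd // 2                    # threshold = one-half the current window
    # Some implementations add three segment sizes to ssthresh here.
    return ssthresh, ssthresh, "congestion avoidance"

print(on_timeout(20))          # (1, 10, 'slow start')
print(on_three_dup_acks(12))   # (6, 6, 'congestion avoidance')
```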
TCP Congestion Policy Summary
 In the figure, we summarize the congestion policy of TCP and the relationships
between the three phases.
Congestion Example

 We assume that the maximum window size is 32 segments.


 The threshold is set to 16 segments (one-half of the maximum window size).
 In the slow-start phase the window size starts from 1 and grows exponentially
until it reaches the threshold.
Congestion Example
 After it reaches the threshold, the congestion avoidance (additive increase)
procedure allows the window size to increase linearly until a timeout occurs
or the maximum window size is reached.
 In the figure, the time-out occurs when the window size is 20.
 At this moment, the multiplicative decrease procedure takes over and
reduces the threshold to one-half of the previous window size.
 The previous window size was 20 when the time-out happened so the new
threshold is now 10.
 TCP moves to slow start again and starts with a window size of 1, and TCP
moves to additive increase when the new threshold is reached.
 When the window size is 12, a three-ACKs event happens.
 The multiplicative decrease procedure takes over again.
 The threshold is set to 6 and TCP goes to the additive increase phase this
time.
 It remains in this phase until another time-out or another three ACKs
happen.
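Combining the pieces above, the numeric example can be traced round by round. The sketch below is a rough re-creation under the simplified conventions used so far (cwnd capped at ssthresh during slow start, and the two events injected at the stated window sizes):

```python
# Rough re-creation of the lecture's numeric trace (cwnd in MSS):
# ssthresh starts at 16, a timeout happens when cwnd = 20, and three
# duplicate ACKs arrive when cwnd = 12.  The event injection and the cap
# of cwnd at ssthresh during slow start are simplifying assumptions.
cwnd, ssthresh, phase = 1, 16, "slow start"
history = []
events = {20: "timeout", 12: "three dup ACKs"}

while len(history) < 18:
    history.append((phase, cwnd, ssthresh))
    event = events.pop(cwnd, None)
    if event == "timeout":
        ssthresh, cwnd, phase = cwnd // 2, 1, "slow start"
    elif event == "three dup ACKs":
        ssthresh = cwnd // 2
        cwnd, phase = ssthresh, "congestion avoidance"
    elif phase == "slow start":
        cwnd = min(cwnd * 2, ssthresh)      # exponential growth, capped at ssthresh
        if cwnd == ssthresh:
            phase = "congestion avoidance"
    else:
        cwnd += 1                           # additive increase, +1 MSS per round

for phase, cwnd, ssthresh in history:
    print(f"{phase:20s} cwnd={cwnd:2d} ssthresh={ssthresh}")
```

Running the sketch reproduces the window sizes described above: 1, 2, 4, 8, 16 in slow start, 17 to 20 in congestion avoidance, a timeout that sets the threshold to 10, slow start again up to 10, additive increase to 12, and the three-ACKs event that sets the threshold to 6 before additive increase resumes.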
