
31 TCP Congestion

This document discusses congestion in computer networks. It begins by defining congestion as occurring when the offered network load is greater than the system's capacity, causing packets to be queued or dropped. This can lead to decreased performance or even "congestion collapse." TCP uses congestion control where senders dynamically adjust their sending rates based on inferred congestion levels to avoid overloading the network. The document explores how senders detect congestion through increased delay or packet loss, and how they adapt their sending rates in response.


Computer Networks
Lecture 31: TCP Congestion Control

What is Congestion?

What gives rise to congestion?
Resource contention: offered load is greater than system capacity
•  too much data for the network to handle
•  how is it different from flow control?

Why is Congestion Bad?

Causes of congestion:
•  packets arrive faster than a router can forward them
•  routers queue packets that they cannot serve immediately

Why is congestion bad?
•  if the queue overflows, packets are dropped
•  queued packets experience delay
•  increase in load results in a decrease in useful work done

[Figure: packets arriving at router B over two 10 Mbps links are queued (delay); arriving packets are dropped (lost) if the buffer overflows]

Consequences of Congestion

If queueing delay > RTO, sender retransmits packets, adding to congestion
Dropped packets also lead to more retransmissions
If unchecked, could result in congestion collapse
When a packet is dropped, "upstream" capacity already spent on the packet is wasted
Approaches to Congestion

Free for all
•  many dropped (and retransmitted) packets
•  can cause congestion collapse
•  the long-suffering wins

Paid-for service
•  pre-arrange bandwidth allocations
•  requires negotiation before sending packets
•  requires a pricing and payment model
•  don't drop packets of the high bidders
•  only those who can pay get good service

Dealing with Congestion

Dynamic adjustment (TCP)
•  every sender infers the level of congestion
•  each adapts its sending rate "for the greater good"

What is "the greater good" (performance objective)?
•  maximizing goodput, even if some users suffer more?
•  fairness? (what's fair?)

Constraints:
•  decentralized control
•  unlike routing, no local reaction at routers (beyond buffering and dropping)
•  long feedback time
•  dynamic network conditions: connections come and go

What is the Performance Objective?

System capacity: load vs. throughput:
•  congestion avoidance: operate system at "knee" capacity
•  congestion control: drive system to near "cliff" capacity

To avoid or prevent congestion, sender must know system capacity and operate below it

How do senders discover system capacity and control congestion?
•  detect congestion
•  slow down transmission

[Figure (Jain et al.): throughput vs. load over time; past the "cliff", an increase in load results in a decrease in useful work done, i.e., congestion collapse]

Sender Behavior

How does sender detect congestion?
•  explicit feedback from the network?
•  implicit feedback: inferred from network performance?

How should the sender adapt?
•  explicit sending rate computed by the network?
•  sender coordinates with receiver?
•  sender reacts locally?

How fast should new TCP senders send?
What does the sender see?
What can the sender change?
Jain et al.
How Routers Handle Packets

Congestion happens at router links
Simple resource scheduling: FIFO queue and drop-tail

Queue scheduling: manages access to bandwidth
•  first in, first out: packets transmitted in the order they arrive

Drop policy: manages access to buffer space
•  drop tail: if queue is full, drop the incoming packet
[Rexford]

How it Looks to the Sender

Packet delay
•  packet experiences high delay

Packet loss
•  packet gets dropped along the way

How does TCP sender learn of these?
•  delay: round-trip time estimate (RTT)
•  loss: retransmission timeout (RTO), duplicate acknowledgments

How do RTT and RTO translate to system capacity?
•  how to detect "knee" capacity?
•  how to know if system has "gone off the cliff"?
[Jain et al.] [Rexford]
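The RTT estimate and RTO mentioned above are conventionally computed with an exponentially weighted moving average over RTT samples. A minimal sketch, using the standard Jacobson/Karels constants (1/8, 1/4, and the factor of 4 are the conventional choices from RFC 6298, not values given in these slides):

```python
# Sketch of a Jacobson/Karels-style RTT estimator.
# Constants are the conventional RFC 6298 values (an assumption,
# not specified in these slides).
class RttEstimator:
    def __init__(self):
        self.srtt = None      # smoothed RTT
        self.rttvar = None    # RTT variation

    def update(self, sample):
        if self.srtt is None:             # first measurement
            self.srtt = sample
            self.rttvar = sample / 2
        else:
            self.rttvar = 0.75 * self.rttvar + 0.25 * abs(self.srtt - sample)
            self.srtt = 0.875 * self.srtt + 0.125 * sample

    def rto(self):
        # RTO = SRTT + 4 * RTTVAR
        return self.srtt + 4 * self.rttvar
```

A rising smoothed RTT is the delay signal; an expired RTO is the loss signal.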

What can Sender Do?

Upon detecting congestion (packet loss)
•  decrease sending rate

But, what if congestion abated?
•  suppose some connections ended transmission and
•  there is more bandwidth available
•  would be a shame to stay at a low sending rate

Upon not detecting congestion
•  increase sending rate, a little at a time
•  and see if packets are successfully delivered

Both good and bad
•  pro: obviates the need for explicit feedback from the network
•  con: under-shooting and over-shooting cliff capacity
[Rexford]

Discovering System Capacity

What TCP sender does:
•  probe for point right before cliff ("pipe size")
•  slow down transmission on detecting cliff (congestion)
•  fast probing initially, up to a threshold ("slow start")
•  slower probing after threshold is reached ("linear increase")

Why not start by sending a large amount of data and slow down only upon congestion?

[Figure: TCP Tahoe congestion window (cwnd) over time; cwnd grows until a packet is dropped]
Self-Clocking TCP

TCP uses cumulative ACKs for flow control, retransmission, and congestion control

TCP follows a so-called "Law of Packet Conservation": do not inject a new packet into the network until a resident departs (ACK received)

Since packet transmission is timed by receipt of ACKs, TCP is said to be self-clocking

[Figure (Stevens): self-clocking; ACKs returning from the receiver pace the sender's transmissions]

TCP Congestion Control

Sender maintains a congestion window (cwnd)
•  to account for the maximum number of bytes in transit
•  i.e., number of bytes still awaiting acknowledgment

Sender's send window (wnd) is
wnd = MIN(rwnd, floor(cwnd))
•  rwnd: receiver's advertised window
•  initially set cwnd to 1 MSS, never drop below 1 MSS
•  increase cwnd if there's no congestion (by how much?)
   •  exponential increase up to ssthresh (initially 64 KB)
   •  linear increase afterwards
•  on congestion, decrease cwnd (by how much?)
•  always struggling to find the right transmission rate, just to the left of the cliff
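The send-window rule above, wnd = MIN(rwnd, floor(cwnd)), can be written directly; the window values in the example are made-up illustrations:

```python
import math

# wnd = MIN(rwnd, floor(cwnd)), as defined above.
def send_window(rwnd, cwnd):
    return min(rwnd, math.floor(cwnd))

# Example (made-up values): receiver advertises 8 segments' worth of
# buffer, congestion window is 5.5 segments; the sender may keep
# floor(5.5) = 5 segments in flight.
```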

TCP Slow-Start

When connection begins, increase rate exponentially until first loss event

[Figure: Host A sends one segment, then two segments, then four segments to Host B, each round one RTT apart]

Increasing cwnd

Probing the "pipe size" (system capacity) in two phases:

1.  slow-start: exponential increase
•  double cwnd every RTT (or: increase by 1 for every returned ACK)

while (cwnd <= ssthresh) {
  cwnd += 1
} for every returned ACK

OR: cwnd *= 2 for every cwnd-full of ACKs

•  really, fast start, but from a low base, vs. starting with a whole receiver window's worth of data as TCP originally did, without congestion control

2.  congestion avoidance: linear increase

while (cwnd > ssthresh) {
  cwnd += 1/floor(cwnd)
} for every returned ACK

OR: cwnd += 1 for every cwnd-full of ACKs

Jacobson & Karels
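The two per-ACK update rules above can be combined into a small runnable sketch (units are segments, i.e., MSS; the ssthresh value in the example is arbitrary):

```python
def on_ack(cwnd, ssthresh):
    """Grow cwnd on one returned ACK, per the two phases above
    (units: segments)."""
    if cwnd <= ssthresh:            # slow-start: exponential per RTT
        return cwnd + 1
    return cwnd + 1 / int(cwnd)     # congestion avoidance: linear per RTT

# Example: start at cwnd = 1 with an arbitrary ssthresh of 8 segments
# and apply the rule for a few returned ACKs.
cwnd = 1.0
for _ in range(8):
    cwnd = on_ack(cwnd, ssthresh=8)
```

Once cwnd passes ssthresh, each full window of ACKs adds only about one segment, which is the linear-increase phase.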


TCP Slow Start Example

[Figure (Stevens): slow-start example; cwnd doubles each RTT until the pipe is full]

Dealing with Congestion

Once congestion is detected,
•  how should the sender reduce its transmission rate?
•  how does the sender recover from congestion?

Goals of congestion control:
1.  Efficiency: resources are fully utilized
2.  Fairness: if k TCP connections share the same bottleneck link of bandwidth R, each connection should get an average rate of R/k

[Figure: TCP connections 1 and 2 sharing a bottleneck router of capacity R]

Goals of Congestion Control

3.  Responsiveness: fast convergence, quick adaptation to current capacity

4.  Smoothness: little oscillation
•  larger change-steps increase responsiveness but decrease smoothness

5.  Distributed control: no (explicit) coordination between nodes

[Figure (Chiu & Jain, Fig. 3): responsiveness and smoothness; total load on the network oscillating around the goal capacity over time]

From Chiu & Jain: every user sharing the same bottleneck ought to have an equal share of it; thus, a system in which xi(t) = xj(t) ∀ i, j is operating fairly. If all users do not get exactly equal allocations, the system is less fair, and we need an index or a function that quantifies the fairness. One such index is [6]:

F(x) = (Σ xi)² / (n Σ xi²)

This index has the following properties:
(a) The fairness is bounded between 0 and 1 (or 0% and 100%). A totally fair allocation (with all xi's equal) has a fairness of 1, and a totally unfair allocation (with all resources given to only one user) has a fairness of 1/n, which is 0 in the limit as n tends to ∞.
(b) The fairness is independent of scale, i.e., unit of measurement does not matter.
(c) The fairness is a continuous function; any slight change in allocation shows up in the fairness.

Convergence: finally, we require the control scheme to converge. Convergence is generally measured by the speed with which (or time taken till) the system approaches the goal state from any starting state. However, due to the binary nature of the feedback, the system does not generally converge to a single steady state; rather, the system keeps oscillating around the goal.

Adapting to Congestion

By how much should cwnd (w) be changed?
Limiting ourselves to only linear adjustments:
•  increase when there's no congestion: w' = bi·w + ai
•  decrease upon congestion: w' = bd·w + ad

Alternatives for the coefficients:
1.  Additive increase, additive decrease: ai > 0, ad < 0, bi = bd = 1
2.  Additive increase, multiplicative decrease: ai > 0, bi = 1, ad = 0, 0 < bd < 1
3.  Multiplicative increase, additive decrease: ai = 0, bi > 1, ad < 0, bd = 1
4.  Multiplicative increase, multiplicative decrease: bi > 1, 0 < bd < 1, ai = ad = 0

Guideline for congestion control (as in routing): be skeptical of good news, react fast to bad news
Chiu & Jain
Resource Allocation

View resource allocation as a trajectory through an n-dimensional vector space, one dimension per user

A 2-user allocation trajectory:
•  x1, x2: the two users' allocations
•  Efficiency Line: x1 + x2 = R
   •  below this line, system is under-loaded
   •  above, overloaded
•  Fairness Line: x1 = x2
•  Optimal Point: efficient and fair
•  Goal of congestion control: to operate at optimal point

[Figure (Chiu & Jain, Fig. 4): vector representation of a two-user case, with Efficiency Line, Fairness Line, and Equi-Fairness Line]
Chiu & Jain

From Chiu & Jain: an allocation {x1(t), x2(t)} can be represented as a point (x1, x2) in a 2-dimensional space; the horizontal axis represents allocations to user 1, and the vertical axis represents allocations to user 2. All allocations for which x1 + x2 = Xgoal are efficient allocations ("efficiency line"). All allocations for which x1 = x2 are fair allocations ("fairness line"). The two lines intersect at the point (Xgoal/2, Xgoal/2), which is the optimal point. Notice that multiplying both allocations by a factor b does not change the fairness: (bx1, bx2) has the same fairness as (x1, x2) for all values of b. Thus, all points on the line joining a point to the origin have the same fairness; we therefore call a line passing through the origin an "equi-fairness" line. The fairness decreases as the slope of the line either increases above or decreases below the fairness line.

Figure 5 shows a complete trajectory of the two-user system starting from point x0 under an additive increase/multiplicative decrease control policy. The point x0 is below the efficiency line, so both users are asked to increase; they do so additively, moving along at an angle of 45°. This brings them to x1, which happens to be above the efficiency line. The users are asked to decrease, and they do so multiplicatively, moving towards the origin on the line joining x1 and the origin. This brings them to point x2, which happens to be below the efficiency line, and the cycle repeats. Notice that x2 has higher fairness than x0. Thus, with every cycle, the fairness increases slightly, and eventually the system converges to the optimal state in the sense that it keeps oscillating around the goal. Similar trajectories can be drawn for other control policies, although not all control policies converge.

Additive/Multiplicative Factors

Additive factor: adding the same amount to both users' allocations moves an allocation along a 45° line

Multiplicative factor: multiplying both users' allocations by the same factor moves an allocation on a line through the origin (the "equi-fairness," or rather, "equi-unfairness" line)
•  the slope of this line, not any position on it, determines fairness
Chiu & Jain

AIMD

It can be shown that only AIMD takes the system near the optimal point

•  Additive Increase, Multiplicative Decrease: system converges to an equilibrium near the optimal point
[Figure (Chiu & Jain, Fig. 5): Additive Increase/Multiplicative Decrease converges to the optimal point]

•  Additive Increase, Additive Decrease: system converges to efficiency, but not to fairness

From Chiu & Jain: starting from position x0, the system keeps moving back and forth along a 45° line through x0. With such a policy, the system can converge to efficiency, but not to fairness. The conditions for convergence to efficiency and fairness are derived algebraically in the next section.

TCP Congestion Recovery

Once congestion is detected,
•  by how much should sender decrease cwnd?
•  how does sender recover from congestion?
•  which packet(s) to retransmit?
•  how to increase cwnd again?
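The Chiu & Jain trajectory argument can be checked numerically. A sketch of two flows sharing a bottleneck under AIMD: both add a while under-loaded, both multiply by b when the total offered load exceeds capacity (the capacity and coefficient values below are arbitrary examples, not from the slides):

```python
def aimd_two_flows(x1, x2, capacity=100.0, a=1.0, b=0.5, steps=500):
    """Two flows under additive-increase/multiplicative-decrease.
    Both rates increase by `a` per step while the system is
    under-loaded; both are multiplied by `b` when the total offered
    load exceeds `capacity` (binary congestion feedback)."""
    for _ in range(steps):
        if x1 + x2 > capacity:          # overload: multiplicative decrease
            x1, x2 = b * x1, b * x2
        else:                           # under-load: additive increase
            x1, x2 = x1 + a, x2 + a
    return x1, x2
```

Starting from a very unfair allocation such as (80, 0), each decrease halves the gap between the flows while each increase leaves it unchanged, so the two rates converge toward each other while the total keeps oscillating around capacity, just as the Fig. 5 trajectory shows.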


[Figure (Chiu & Jain, Fig. 6): Additive Increase/Additive Decrease does not converge; the operating point keeps oscillating along a 45° line through x0]

First, reduce the exponential increase threshold: ssthresh = cwnd/2

TCP Tahoe:
•  retransmit using Go-Back-N
•  reset cwnd = 1
•  restart slow-start

[Figure: TCP Tahoe congestion window (cwnd) over time; cwnd grows until a packet is dropped, then drops to 1 and slow-start restarts]
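TCP Tahoe's reaction to a detected loss, as described above, can be sketched as a single state update (units are segments; the floor of 2 segments on ssthresh is a common convention, an assumption not stated in these slides):

```python
def tahoe_on_loss(cwnd):
    """TCP Tahoe's response to a detected loss, per the slides:
    set ssthresh to half of cwnd, reset cwnd to 1 MSS, and restart
    slow-start. Returns (cwnd, ssthresh) in units of segments."""
    ssthresh = max(cwnd // 2, 2)   # floor at 2 segments (assumption)
    return 1, ssthresh
```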
Fast Retransmit

Motivation: waiting for RTO is too slow

TCP Tahoe also does fast retransmit:
•  with cumulative ACKs, receipt of packets following a lost packet causes duplicate ACKs to be returned
•  interpret 3 duplicate ACKs as an implicit NAK
•  retransmit upon receiving 3 dupACKs, i.e., on receipt of the 4th ACK with the same seq#, retransmit the segment
•  why 3 dupACKs? why not 2 or 4?

With fast retransmit, TCP can retransmit after 1 RTT instead of waiting for RTO

Fast Retransmit Example

[Figure (Hoe): sender's sent segments and ACKed seq# over time, with rwnd and wnd marked; the lost segment is retransmitted on 3 dupACKs, i.e., on the 4th dupACK]
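The 3-dupACK trigger described above can be sketched as a scan over the stream of cumulative ACK numbers (the sequence numbers in the example are illustrative):

```python
def fast_retransmit_check(ack_seqs, dup_threshold=3):
    """Scan a stream of cumulative ACK numbers and return the seq# to
    fast-retransmit once `dup_threshold` duplicate ACKs have been seen,
    i.e., on the 4th ACK carrying the same seq#. Returns None if the
    threshold is never reached."""
    last_ack, dups = None, 0
    for ack in ack_seqs:
        if ack == last_ack:
            dups += 1
            if dups == dup_threshold:
                return ack      # retransmit the segment at this seq#
        else:
            last_ack, dups = ack, 0
    return None
```

Only the third duplicate (the fourth identical ACK) triggers retransmission; one or two duplicates may just mean packets were reordered in the network.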

TCP Tahoe Recovers Slowly

cwnd re-opening and retransmission of lost packets are regulated by returning ACKs
•  a duplicate ACK doesn't grow cwnd, so TCP Tahoe must wait at least 1 RTT for a fast-retransmitted packet to cause a non-duplicate ACK to be returned
•  if RTT is large, Tahoe re-grows cwnd very slowly

[Figure (Hoe): the 1 RTT gap before the fast-retransmitted packet is ACKed and cwnd re-grows]

TCP Reno and Fast Recovery

TCP Reno does fast recovery:
•  the current value of cwnd is the estimated system (pipe) capacity
•  after congestion is detected, want to continue transmitting at half the estimated capacity

How?
•  each returning ACK signals that an outstanding packet has left the network
•  don't send any new packets until half of the expected number of ACKs have returned
Fast Recovery

1.  on congestion, retransmit the lost segment, set ssthresh = cwnd/2
2.  remember the highest seq# sent, snd_high; and remember the current cwnd, let's call it pipe
3.  decrease cwnd by half
4.  increment cwnd for every returning dupACK, incl. the 3 used for fast retransmit
5.  send new packets (above snd_high) only when cwnd > pipe
6.  exit fast recovery when a non-dup ACK is received
7.  set cwnd = ssthresh + 1 and resume linear increase

[Figure (Hoe): cwnd drops from pipe to cwnd/2, inflates on returning dupACKs back toward pipe, then resumes linear increase at ssthresh + 1; cwnd here counts bytes unACKed]

Summary: TCP Congestion Control

•  When cwnd is below ssthresh, sender is in slow-start phase, window grows exponentially

•  When cwnd is above ssthresh, sender is in congestion-avoidance phase, window grows linearly

•  When 3 dupACKs are received, ssthresh is set to cwnd/2 and cwnd is set to the new ssthresh

•  If more dupACKs return, do fast recovery

•  Else when RTO occurs, set ssthresh to cwnd/2 and set cwnd to 1 MSS
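The numbered steps above can be sketched as a few small state updates (units are segments; this is a simplified model of Reno-style fast recovery, not a full implementation):

```python
def reno_on_triple_dupack(cwnd):
    """Steps 1-3 above: on 3 dupACKs, set ssthresh to half of cwnd,
    remember the old cwnd as `pipe`, and halve cwnd.
    Returns (cwnd, ssthresh, pipe) in segments."""
    ssthresh = cwnd // 2
    pipe = cwnd
    return cwnd // 2, ssthresh, pipe

def reno_on_dupack(cwnd, pipe):
    """Steps 4-5: each returning dupACK inflates cwnd by one segment;
    new packets may be sent only once cwnd > pipe.
    Returns (cwnd, may_send_new_packet)."""
    cwnd += 1
    return cwnd, cwnd > pipe

def reno_exit_fast_recovery(ssthresh):
    """Steps 6-7: on a non-dup ACK, set cwnd to ssthresh + 1
    and resume linear increase."""
    return ssthresh + 1
```

Note how the first half of the returning dupACKs only re-inflate cwnd back up to pipe; new data flows only for dupACKs beyond that point, which is what "transmitting at half the estimated capacity" means.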

TCP Congestion Control Examples

TCP keeps track of outstanding bytes with two variables:
1. snd_una: lowest unACKed seq#, i.e., snd_una records the seq# associated with the last ACK
2. snd_next: seq# to be sent next

Amount of outstanding bytes:
pipe = snd_next - snd_una

Scenario:
•  1 byte/pkt
•  receiver R takes 1 transmit time to return an ACK
•  sender S sends out the next packet immediately upon receiving an ACK
•  rwnd = ∞
•  cwnd = 21, in linear-increase mode
•  pipe = 21

Factors in TCP Performance

•  RTT estimate
•  RTO computation
•  sender's sliding window (wnd)
•  receiver's window (rwnd)
•  congestion window (cwnd)
•  slow-start threshold (ssthresh)
•  fast retransmit
•  fast recovery
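The pipe computation above is a one-liner; the sequence numbers in the example are made-up values chosen to match the scenario's pipe = 21:

```python
def outstanding_bytes(snd_una, snd_next):
    """pipe = snd_next - snd_una: bytes sent but not yet ACKed."""
    return snd_next - snd_una

# Example (made-up seq#s): last ACK covered byte 99, next byte to
# send is 121 -> 21 bytes are outstanding, matching the scenario.
```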
TCP Variants

Original TCP:
•  loss recovery depends on RTO

TCP Tahoe:
•  slow-start and linear increase
•  interprets 3 dupACKs as a loss signal, but restarts slow-start after fast retransmit

TCP Reno:
•  fast recovery, i.e., consumes half of the returning dupACKs before transmitting one new packet for each additional returning dupACK
•  on receiving a non-dupACK, resumes linear increase from half of the old cwnd value

Summary of TCP Variants

TCP New Reno:
•  implements a fast-recovery phase whereby a partial ACK, a non-dupACK that is < snd_high (the highest seq# sent before detection of loss), doesn't take TCP out of fast recovery; instead, the next lost segment is retransmitted
•  only a non-dupACK that is ≥ snd_high takes TCP out of fast recovery: it resets cwnd to ssthresh + 1 and resumes linear increase
