cc2 -- all slides
Congestion Control in the Internet
Part 2: Implementation
Contents
6. TCP Reno
7. TCP Cubic
8. ECN and AQM, DC-TCP
9. New Directions
6. Congestion Control in the Internet was initially only in TCP
Why?
Easy to add end-to-end congestion control to TCP, as TCP already maintains an end-to-end connection
- using techniques like additive increase / multiplicative decrease (AIMD) and slow start
- leveraging implicit or explicit feedback from the network (according to the Decbit principle)
- slow start: multiplicative increase, which leads to an exponential increase of cwnd
- congestion avoidance: cwnd += MSS²/cwnd for every non-duplicate ack (in MSS units, cwnd += 1/cwnd), i.e. slightly less than an additive increase of 1 MSS per RTT
  - other implementations also exist: e.g. wait until cwnd bytes are ack'ed and then increment cwnd by 1 MSS
- after a loss: cwnd = 1 MSS (if timeout) or something else (if fast retransmit) [see Fast Recovery]
(a small sketch of these window updates follows below)
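A minimal sketch of these window updates (illustrative Python, not the actual RFC or kernel code; cwnd and ssthresh are in bytes, MSS = 100 bytes to match the example that follows):

MSS = 100

def on_non_duplicate_ack(cwnd, ssthresh):
    """Reaction to a new (non-duplicate) ACK."""
    if cwnd < ssthresh:
        cwnd += MSS                    # slow start: +1 MSS per ACK -> exponential growth
    else:
        cwnd += MSS * MSS / cwnd       # congestion avoidance: ~ +1 MSS per RTT
    return cwnd

def on_loss(cwnd, detected_by_timeout):
    """Reaction to a detected loss."""
    ssthresh = max(cwnd / 2, 2 * MSS)  # target window = half the congestion window
    if detected_by_timeout:
        cwnd = MSS                     # timeout: restart from 1 MSS (slow start)
    else:
        cwnd = ssthresh + 3 * MSS      # fast retransmit: enter fast recovery
    return cwnd, ssthresh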
Example of congestion-window evolution (fast retransmit / fast recovery)
Recall:
• there is a slow start phase initially and after every packet loss detected by timeout
• during congestion avoidance, cwnd ← cwnd + MSS²/cwnd for every non-duplicate ack
[Figure: time-sequence diagram, source on the left, destination on the right; MSS = 100 bytes; every ack carries win = 1'000. Acks "Ack = 201" arrive repeatedly (the first is a new ack, the following ones are duplicates); "Ack = 901" arrives just before event 19. Numbered events:
1: ssthresh = cwnd = 800
2: seq=201:300 sent (this segment is lost)
3: seq=301:350 sent
4: seq=351:400 sent
5: seq=401:500 sent
6: ssthresh = cwnd = 813
7: seq=501:600 sent
9: seq=601:700 sent
11: seq=701:800 sent
12: ssthresh = 407, cwnd = 707
13: seq=201:300 retransmitted
14: seq=801:900 sent
15: ssthresh = 407, cwnd = 807
16: ssthresh = 407, cwnd = 907
17: seq=901:1000 sent
18: ssthresh = 407, cwnd = 1007
19: ssthresh = 407, cwnd = 407
20: ?]
At time 1, the sender is in "congestion avoidance" mode. The congestion window increases with every received non-duplicate ack (as at time 6). The target window (ssthresh) is equal to the congestion window.
At time 12, the loss is detected by fast retransmit, i.e. by the reception of 3 duplicate acks. The sender goes into "fast recovery" mode. The target window is set to half the value of the congestion window; the congestion window is set to the target window plus 3 packets (one for each duplicate ack received).
At time 13 the source retransmits the lost packet. At time 14 it transmits a fresh packet. This is possible because the window is large enough: the window size, which is the minimum of the congestion window and the advertised window, is equal to 707. Since the last acked byte is 200 (Ack = 201), it is possible to send up to byte 907.
At times 15, 16 and 18, the congestion window is increased by 1 MSS, i.e. 100 bytes, by application of the fast recovery algorithm. At time 15, this allows the source to send one fresh packet, which occurs at time 17.
At time 19 the lost packet is acked; the source exits fast recovery and enters congestion avoidance. The congestion window is set to the target window.
How many new segments of size 100 bytes can the source send at time 20 (same scenario as the figure above)?
A. 1
B. 2
C. 3
D. 4
E. ≥ 5
F. 0
G. I don't know
Solution
Answer C
The congestion window is 407, the advertised window is 1000, and the last ack received is 901.
The source may send bytes 901 up to 1308; the segment 901:1000 was already sent, so the source can send 3 new segments of 100 bytes each.
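The same arithmetic, as a small sketch (values taken from the example above; the variable names are ours):

# How many new 100-byte segments can be sent at time 20?
MSS = 100
cwnd = 407            # congestion window after exiting fast recovery (time 19)
advertised = 1000     # receiver window
last_ack = 901        # next byte expected by the receiver
next_to_send = 1001   # bytes up to 901:1000 have already been sent

window = min(cwnd, advertised)        # usable window = 407
limit = last_ack + window             # may send up to (but not including) byte 1308
new_bytes = limit - next_to_send      # 307 bytes still allowed
print(new_bytes // MSS)               # -> 3 full segments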
TCP Reno — recap
A packet of a TCP flow is lost on a wireless link because of channel errors (not because of congestion). What does the TCP source do?
A. The TCP source knows it is a loss due to channel errors and not congestion, therefore it does not reduce the window
B. The TCP source thinks it is a congestion loss and reduces its window
C. It depends on whether the MAC layer uses retransmissions
D. I don't know
Solution
Answer B: the TCP source does not know the cause of a loss.
Side-effect:
A flow that experiences accidental losses on its wireless access link may never manage to get
its fair share on another bottleneck link down its path, because it will be constantly reducing its
sending rate “thinking that it experiences congestion”.
Solutions:
Explicit Congestion Notification from the network [see later]
Dynamic (more sophisticated) coding at the physical layer to avoid errors on the wireless link
Fairness of TCP Reno
For long-lived flows, the rates obtained with TCP Reno are as if they were distributed according to utility fairness, with the utility of flow i given by Ui(xi) = (√2/τi) · arctan(xi τi / √2)
with xi = rate (in MSS per second) = W/τi and τi = RTT (see "Rate adaptation, Congestion Control and Fairness: A Tutorial").
For flows that have the same RTT, the fairness of TCP Reno is between max-min and proportional fairness, closer to proportional fairness.
[Figure: rescaled utility functions (RTT = 100 ms) for Reno, AIMD, proportional fairness and a max-min approximation U(x) = 1 − x^(−5).]
TCP Reno and RTT
TCP Reno tends to distribute rates so as to maximize the utility of source i, given by Ui(xi) = (√2/τi) · arctan(xi τi / √2)
The utility depends on the round-trip time τi; it is a decreasing function of the RTT.
[In-class quiz: S1 reaches the router over a 10 Mb/s, 20 ms link, S2 over a 10 Mb/s, 60 ms link; both share a 1 Mb/s, 10 ms link from the router to the destination. Compare the rates obtained by S1 and S2.]
Solution
For long-lived flows, the rates obtained with TCP are as if they were distributed according to utility fairness, with the utility of flow i given by Ui(xi) = (√2/τi) · arctan(xi τi / √2).
S1 has a smaller RTT than S2.
The utility is less when the RTT is large; therefore, TCP tries less hard to give a high rate to sources with a large RTT.
- a (practical) explanation: additive increase is one packet per RTT (instead of one packet per constant time interval); so a flow with a smaller RTT can "open" its window faster.
A flow that uses many hops obtains less rate because of two combined factors:
1. If this flow goes over many congested links, it uses more resources. The mechanics of
TCP Reno, which is close to proportional fairness, lead to this source having less rate,
which is desirable in view of the theory of fairness.
2. If this flow has simply a larger RTT, then things are different. The mechanics of
additive increase leads to this source having less rate,
which is an undesired bias in the design of TCP Reno.
TCP Reno
Loss-Throughput Formula
Consider a TCP flow of large size (many bytes to transmit).
Assume we observe that, on average, a fraction q of packets is lost (or marked with ECN).
The throughput should then be close to θ = 1.22 · MSS / (RTT · √q).
The formula assumes:
- the transmission time is negligible compared to the RTT,
- losses are rare and occur periodically,
- the time spent in Slow Start and Fast Recovery is negligible.
(a numerical illustration follows below)
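A quick numerical illustration of the formula (the example values are ours, not from the slides):

from math import sqrt

def reno_throughput(mss_bytes, rtt_s, q):
    """Approximate long-term TCP Reno throughput, in bit/s."""
    return 1.22 * mss_bytes * 8 / (rtt_s * sqrt(q))

# e.g. MSS = 1460 bytes, RTT = 100 ms, loss ratio 0.1 %:
print(reno_throughput(1460, 0.100, 0.001) / 1e6, "Mb/s")   # ~ 4.5 Mb/s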
[In-class quiz: same topology as before, with S1 behind a 10 Mb/s, 20 ms link, S2 behind a 10 Mb/s, 60 ms link, and a shared 1 Mb/s, 10 ms link to the destination. Using θ = 1.22 · MSS / (RTT · √q), compare the throughputs θ1 of S1 and θ2 of S2. Options A-D propose different ratios between θ1 and θ2; E. None of the above; F. I don't know.]
Solution
Answer C.
Both flows share the same bottleneck, hence they see (approximately) the same loss ratio q; by the formula, θ1/θ2 = RTT2/RTT1, so S1, which has the smaller RTT, obtains proportionally more throughput than S2.
[Figure: ACK numbers versus time for S1 and S2; S1's ACK number grows faster.]
TCP Reno — shortcomings
• RTT bias – not nice for users far away from the source.
• Periodic losses must occur – not nice for applications (e.g. video streaming).
• TCP controls the window, not the rate. Large bursts typically occur when packets are released by the host following e.g. a window increase – not nice for queues in the Internet; this makes the behavior non-smooth.
• Self-inflicted delay: if network buffers (in routers and switches) are large, TCP first fills the buffers before adapting the rate. The RTT is increased unnecessarily. Buffers are constantly full, which reduces their usefulness (bufferbloat syndrome) and increases delay for all users. Interactive, short flows experience large latency when buffers are large and full.
Congestion control in UDP applications
UDP applications that can adapt their rate have to implement congestion control themselves.

7. TCP Cubic
Say congestion avoidance is entered at time t0 = 0 and let Wmax = the value of cwnd when the loss was detected.
Let W(t) = Wmax + 0.4 · (t − K)³,
with K such that W(0) = 0.7 · Wmax.
Then the window increases like W(t) until a loss occurs again.
[Figure: W(t) versus t, comparing Cubic with Additive Increase (≈Reno) for RTT = 0.1 s.]
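A minimal sketch of this window evolution (the constants 0.4 and 0.7 are taken from the definition above; W and Wmax are in MSS, t in seconds):

def cubic_window(t, w_max):
    """W(t) = Wmax + 0.4 * (t - K)^3, with K chosen so that W(0) = 0.7 * Wmax."""
    k = (0.3 * w_max / 0.4) ** (1.0 / 3.0)
    return w_max + 0.4 * (t - k) ** 3

w_max = 100.0    # cwnd (in MSS) when the loss was detected
for t in [0.0, 0.5, 1.0, 2.0, 4.0]:
    print(t, round(cubic_window(t, w_max), 1))
# concave growth towards Wmax (reached around t = K), then convex growth beyond Wmax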
How does this compare to Reno?
Cubic increases the window in a concave way until it reaches Wmax, then increases it in a convex way.
W(t) is independent of the RTT, but:
- it opens faster than Reno when the RTT is large (long networks),
- it may be slower when the RTT is small (non-LFNs).
[Figure: W(t) versus t, with curves for Additive Increase (≈Reno), Cubic, and Cubic with RTT = 0.01 s.]
Cubic's Window Increase
Cubic is always at least as fast as a hypothetical Reno-like AIMD with a smaller additive-increase term per RTT (instead of 1 MSS, chosen so that its throughput matches Reno's) and a multiplicative decrease of 0.7.
Formally:
WCUBIC(t) = max { W(t), WAIMD(t) },
and the resulting loss-throughput formula is
θ ≈ max { 1.054 / (RTT^0.25 · q^0.75), 1.22 / (RTT · √q) }, in MSS per second.
So:
• Cubic's formula is the same as Reno's for small RTTs and small bandwidth-delay products;
• but a TCP Cubic connection gets more throughput than TCP Reno when the bit rate and the RTT are large.
[Figure: throughput (Mb/s) versus loss ratio q for Reno and Cubic, at RTT = 12.5 ms, 100 ms and 800 ms.]
Other details: the computation of Wmax uses a more complex mechanism called "fast convergence" - see the latest IETF Cubic RFC / Internet Draft or http://elixir.free-electrons.com/linux/latest/source/net/ipv4/tcp_cubic.c
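A small sketch evaluating the two terms of this throughput formula (the loss ratio and RTT values are illustrative):

from math import sqrt

def reno_rate(rtt, q):            # 1.22 / (RTT * sqrt(q)), in MSS per second
    return 1.22 / (rtt * sqrt(q))

def cubic_rate(rtt, q):           # max of the Cubic term and the Reno-like term
    return max(1.054 / (rtt ** 0.25 * q ** 0.75), reno_rate(rtt, q))

q = 1e-4
for rtt in [0.0125, 0.1, 0.8]:    # 12.5 ms, 100 ms, 800 ms
    print(rtt, round(reno_rate(rtt, q)), round(cubic_rate(rtt, q)))
# small RTT: both formulas coincide; large RTT and small q: the Cubic term dominates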
8. Tackling the Bufferbloat Syndrome with ECN and AQM
Using loss as congestion feedback has a major drawback = self-inflicted delay:
increased latencies and buffers are not well utilized due to bufferbloat syndrome.
[Figure: delivery rate and round-trip time as functions of the amount of inflight data. The delivery rate grows until the inflight data equals bottleneck link capacity × RTTmin and then stays flat; beyond this point the round-trip time grows above RTTmin. The optimal operating point (A) is at this knee; loss-based congestion control operates at point (B), where the bottleneck buffer is full.]
The previous figure illustrates that if the amount of inflight data (i.e. the window size) is just
large enough to fill the available bottleneck link capacity, the bottleneck link is fully utilized and
the queuing delay is zero or close to zero. This is the optimal operating point (A), because the
bottleneck link is already fully utilized at this point. If the amount of inflight data is increased
any further, the bottleneck buffer gets filled with the excess data. The delivery rate, however,
does not increase anymore. The data is not delivered any faster since the bottleneck does not
serve packets any faster and the throughput stays the same for the sender: the amount of
inflight data is larger, but the round-trip time increases by the corresponding amount. Excess
data in the buffer is useless for throughput gain and causes a queuing delay that rises with the amount of inflight data. Loss-based congestion controls shift the operating point to (B), which implies an unnecessarily high end-to-end delay, leading to "bufferbloat" when buffer sizes are large.
ECN - Explicit Congestion Notification…
…aims at avoiding bufferbloat
How?
• IP router experiencing congestion marks packet instead of dropping
• TCP destination echoes back the mark to the source
• TCP source interprets an echoed marked packet as if there was a loss detected by fast retransmit
Example
[Figure: window size versus time t; the window is halved each time an ECE (ECN-Echo) is received.]
Recall: slow start's multiplicative increase results in an exponential growth of the cwnd; no slow start phase is shown in this figure.
ECN flags in IP and TCP headers
2 bits in IP header, 4 possible codewords:
00 = non ECN Capable (non ECT)
01 or 10 = ECN capable ECT(0) and ECT(1)
historically used at random; today used to
differentiate congestion control (TCP Cubic vs DCTCP)
11 = used by routers to signal that congestion is experienced (CE)
If congested, router marks ECT(0) or ECT(1) packets; but discards non ECT packets
[Figure: RED curve: the marking probability is an increasing function of the (average) queue size q, ramping up to max-p and then jumping to 1.]
See the difference from passive queue management, which drops a packet only when the queue is full ("Tail Drop").
But… Active Queue Management does not require ECN
AQM can also be applied even if ECN is not supported,
e.g. with RED, a packet is dropped with the probability computed by the RED curve (see the sketch below)
- a packet may be discarded even if there is some space available!
In the context of packet dropping (instead of ECN), RED can be replaced by the more recent variant called CoDel (RFC 8289).
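A sketch of such a marking curve (the thresholds and max-p are illustrative configuration parameters; a real RED implementation uses an exponentially averaged queue size):

def red_marking_prob(avg_queue, min_th, max_th, max_p):
    """Marking/dropping probability as a function of the (average) queue size."""
    if avg_queue < min_th:
        return 0.0
    if avg_queue < max_th:
        # linear ramp from 0 to max_p between the two thresholds
        return max_p * (avg_queue - min_th) / (max_th - min_th)
    return 1.0                     # above max_th: mark (or drop) every packet

# e.g. thresholds of 5 and 15 packets, max_p = 0.1
for q in [2, 5, 10, 15, 30]:
    print(q, red_marking_prob(q, 5, 15, 0.1))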
In a network where all flows use TCP with ECN and all routers
support ECN, we expect that …
Solution
Answer C
We expect that routers (almost) do not drop packets due to congestion if all
TCP sources use ECN
A data center has:
• lots of bandwidth
• many short flows with low latency requirements (user queries, MapReduce communication)
• some jumbo flows with huge volume (backups, synchronizations) that may use an entire link
Given what you have learnt so far, what would you choose for TCP flows inside a data center?
Standard operation of ECN (e.g. with Reno or Cubic) still has drawbacks for jumbo flows in data-center settings: a multiplicative decrease by 50% or 30% is still abrupt ⇒ throughput inefficiency.
Data Center TCP (DCTCP)
Why? Improve performance for jumbo flows when ECN is used.
How? Avoid the brutal multiplicative decrease of 50% (Reno) or 30% (Cubic):
• the multiplicative decrease is cwnd ← cwnd × (1 − p/2), where p is the (estimated) fraction of ECN-marked packets (see the sketch below).
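A minimal sketch of this decrease rule (p is taken as a given input here; a real DCTCP source maintains a smoothed estimate of the fraction of marked packets):

def dctcp_decrease(cwnd, p):
    """Multiplicative decrease cwnd <- cwnd * (1 - p/2)."""
    return cwnd * (1.0 - p / 2.0)

print(dctcp_decrease(100.0, 1.0))   # all packets marked    -> 50.0 (same as Reno)
print(dctcp_decrease(100.0, 0.1))   # 10 % of packets marked -> 95.0 (gentle decrease)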
In a data center: two large TCP flows compete for a
bottleneck link; one uses DCTCP, the other uses Cubic/ECN.
Both have same RTT.
A. Both get roughly the same throughput
B. DCTCP gets much more throughput
C. Cubic gets much more throughput
D. I don’t know
Solution
Answer B.
If latency is very small, Cubic with ECN has the same throughput performance as Reno with ECN, i.e. the same as AIMD with multiplicative decrease cwnd ← cwnd × 0.5 and a window increase of 1 packet per RTT during congestion avoidance.
DCTCP is similar, in particular it has the same window increase, but its multiplicative decrease is cwnd ← cwnd × (1 − p/2), so the multiplicative decrease is always less.
DCTCP decreases less and increases the same, therefore it is more aggressive.
In other words, DCTCP competes unfairly with other TCPs; this is why it cannot be deployed outside data centers (or other controlled environments).
Inside data centers, care must be taken to separate the DCTCP flows (i.e. the internal flows) from other flows. This can be done with class-based queuing [see next].
9. Beyond Loss/ECN Based Congestion Control
TCP-BBR
Per Class Queuing
Evolution of Buffer Drain Time in the Internet
Buffer Drain Time = buffer capacity / link rate
To keep buffer drain time constant, the product (memory speed × memory size)
should scale faster than link rate, which is technologically not feasible.
• Access network (1 Gb/s): buffer drain time is tens of seconds = buffer is "large" w.r.t. rate
⇒ bufferbloat unless ECN is used
But
• In Internet core links (100 Gb/s, 1 Tb/s):
buffer drain time decreases and is now a fraction of a ms, much less than the RTT = buffer is "small"
⇒ impossible to react correctly within a round-trip time
⇒ feedback control may be inadequate
TCP-BBR
Bottleneck Bandwidth and RTT
TCP-BBR published by Google in 2016 [Cardwell et al., 2016]
What? Avoid per-packet feedback; target maximum throughput with minimal delay.
How? A BBR-TCP source:
1. estimates the bottleneck bandwidth and the min RTT separately
2. controls directly the rate (not the window) using pacing (= implementing a packet spacer) that tries to keep the amount of inflight data close to bottleneck bandwidth × minRTT (the optimal operating point)
The bottleneck bandwidth is estimated as the maximum delivery rate observed over the last 10 RTTs; the delivery rate = amount of ACKed data per time interval Δ.
[Ware, R., Mukerjee, M.K., Seshan, S. and Sherry, J., 2019. Modeling BBR's interactions with loss-based congestion control. In Proceedings of the Internet Measurement Conference (pp. 137-143).]
2) Phases: The BBR algorithm has four different phases: Startup, Drain, Probe
Bandwidth, and Probe RTT. The first phase adapts the exponential Startup behavior
from CUBIC by doubling the sending rate with each round-trip. Once the measured
bandwidth does not increase further, BBR assumes it has reached the bottleneck
bandwidth. Since this observation is delayed by one RTT, a queue was already created
at the bottleneck. BBR tries to Drain it by temporarily reducing the pacing gain.
Afterwards, BBR enters the Probe Bandwidth phase in which it probes for more available
bandwidth. This is performed in eight cycles, each lasting RTprop: First, pacing gain is
set to 1.25, probing for more bandwidth, followed by 0.75 to drain created queues. For
the remaining six cycles BBR sets the pacing gain to 1. BBR continuously samples the
bandwidth and uses the maximum as BtlBw estimator, whereby values are valid for the
timespan of ten RTprop. After not measuring a new RTprop value for ten seconds, BBR
stops probing for bandwidth and enters the Probe RTT phase. During this phase the
amount of inflight data is reduced to four packets to drain any possible queue and get a real
estimation of the RTT. This phase is kept for 200 ms plus one RTT. If a new minimum
value is measured, RTprop is updated and valid for ten seconds.
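A toy sketch of the ProbeBW gain cycling and of the inflight target described above (the constants follow the description; this is not the actual BBR code):

PROBE_BW_GAINS = [1.25, 0.75, 1, 1, 1, 1, 1, 1]   # eight cycles, each lasting RTprop

def pacing_rate(btl_bw, cycle_index):
    """Pacing rate during ProbeBW = gain of the current cycle x estimated BtlBw."""
    return PROBE_BW_GAINS[cycle_index % len(PROBE_BW_GAINS)] * btl_bw

def inflight_target(btl_bw, rt_prop):
    """BBR keeps the amount of inflight data close to the bandwidth-delay product."""
    return btl_bw * rt_prop

btl_bw = 10e6 / 8     # estimated bottleneck bandwidth: 10 Mb/s, in bytes/s
rt_prop = 0.040       # estimated min RTT: 40 ms
print(inflight_target(btl_bw, rt_prop), "bytes in flight (~ BDP)")
for i in range(8):
    print(i, pacing_rate(btl_bw, i) * 8 / 1e6, "Mb/s")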
Performance of BBRv1
Google and other data-center companies report improvements in throughput.
[Figure from http://blog.cerowrt.org/post/bbrs_basic_beauty/ (green curve = BBR); the latency measurement in that figure is irrelevant and should be ignored.]
Performance of BBRv1
But… BBRv1 takes no feedback from the network: no reaction to loss or ECN.
• [Hock et al., 2017] find that BBRv1's estimated bottleneck bandwidth …
[Hock, M., Bless, R. and Zitterbart, M., 2017, October. Experimental evaluation of BBR congestion control. In 2017 IEEE 25th International Conference on Network Protocols (ICNP) (pp. 1-10). IEEE.]
Per-Class Queuing
Per-class queuing is implemented in routers with dedicated queues for every class and a scheduler such as Weighted Round Robin (WRR) or Deficit Round Robin (DRR).
WRR and DRR have one queue per class.
At every round, queues are visited in sequence.
WRR serves wi packets of class i in one round; DRR serves qi bits of class i in one round (see the DRR sketch below).
Used in:
- enterprise or industrial networks, to support non-congestion-controlled flows (e.g. real-time flows);
- provider networks, to separate customers / isolate suspicious flows (network virtualization).
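A minimal sketch of one DRR round (quanta and packet sizes are illustrative; a real implementation also handles arrivals, backlogged-queue bookkeeping, etc.):

from collections import deque

def drr_round(queues, quanta, deficits):
    """Visit every class once; return the (class, packet size) pairs served in this round."""
    sent = []
    for i, q in enumerate(queues):
        deficits[i] += quanta[i]             # add the per-round quantum q_i of class i
        while q and q[0] <= deficits[i]:     # serve head-of-line packets that fit in the deficit
            pkt = q.popleft()
            deficits[i] -= pkt
            sent.append((i, pkt))
        if not q:                            # empty queue: unused deficit is not kept
            deficits[i] = 0
    return sent

queues = [deque([1500, 1500, 1500]), deque([300, 300, 300])]   # packet sizes per class
quanta = [1000, 3000]                                          # quantum per class (1:3 weights)
deficits = [0, 0]
print(drr_round(queues, quanta, deficits))   # class 1 is fully served; class 0 builds up deficit
print(drr_round(queues, quanta, deficits))   # class 0 now has enough deficit to send one packet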
Example of Class-Based Queuing
Class 1 for PMUs (power measurement units) is guaranteed a rate of 2.5 Mb/s; it can exceed
this rate by borrowing capacity available from the total 10 Mb/s if class 2 does not need it.
Class 2 for PCs is guaranteed a rate of 7.5 Mb/s; it can exceed this rate by borrowing capacity
available from the total 10 Mb/s if class 1 does not need it.
Suppose PMUs behave properly as expected.
Which rates will PC1 and PC2 achieve, if their RTTs are equal?
A. 5 Mb/s each
B. 4 Mb/s each
C. PC1: 5 Mb/s, PC2: 3 Mb/s
D. I don't know
Solution
[Figure: the PMU traffic and the PC traffic (class 2, low priority, serving PC1 and PC2) share 10 Mb/s links; class 2 is guaranteed 7.5 Mb/s on each link; given the actual PMU traffic, 9 Mb/s are available to class 2 on one link and 8 Mb/s on the two others.]
TCP allocates rates x1 and x2 so as to maximize U(x1) + U(x2), where U is the utility function of TCP; the function is the same for PC1 and PC2 because the RTTs are the same.
The constraints are x1 ≤ 9 Mb/s, x2 ≤ 8 Mb/s, x1 + x2 ≤ 8 Mb/s.
Thus TCP solves a utility optimization problem: maximize U(x1) + U(x2) subject to x1 + x2 ≤ 8 Mb/s.
By symmetry, x1 = x2 = 4 Mb/s.
You can also check the max-min fair allocation (x1 = x2 = 4 Mb/s) and the proportionally fair allocation (x1 = x2 = 4 Mb/s).
Answer B.
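A quick numerical check of this optimization, using the Reno utility from the earlier slide (the RTT value is arbitrary here; by symmetry and concavity the optimum does not depend on it):

from math import atan, sqrt

def reno_utility(x, tau=0.1):     # tau = RTT in seconds, same for PC1 and PC2
    return (sqrt(2) / tau) * atan(x * tau / sqrt(2))

# maximize U(x1) + U(x2) subject to x1 + x2 <= 8 Mb/s (simple grid search)
best = max(
    ((x1, 8.0 - x1) for x1 in [i / 100 for i in range(0, 801)]),
    key=lambda pair: reno_utility(pair[0]) + reno_utility(pair[1]),
)
print(best)   # -> (4.0, 4.0): both PCs obtain 4 Mb/s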
The Future of Congestion Control
In the past, most TCP versions have relied on loss or ECN as negative signal. Some versions also
relied on delay only (TCP Vegas) or used delay as well as loss (PCC).
Congestion control today wants to also achieve “per-flow fairness”. But each flow may use a different
congestion control algorithm.
Is fairness achieved? Is every flow "TCP friendly"?
Is the "flow" the right abstraction / fairness actor?
What are the alternatives?
[Brown, L., Ananthanarayanan, G., Katz-Bassett, E., Krishnamurthy, A., Ratnasamy, S., Schapira, M. and Shenker, S., 2020, November. On the future of congestion control for the public internet. In Proceedings of the 19th ACM Workshop on Hot Topics in Networks (pp. 30-37).]
Traffic isolation (e.g. with per-class traffic shapers or per-class queuing) is a possible future
alternative; packet dropping/ECN marking becomes a function of the traffic aggregate/class a
packet belongs to.
But does this comply with network neutrality regulations (= ISPs provide no competitive advantage to
specific apps/services, either through pricing or QoS)? How could network neutrality be maintained?
Conclusion
Congestion control is in TCP or in QUIC (a form of congestion-controlled UDP).
Too much buffer is as bad as too little buffer: bufferbloat provokes large latency for interactive flows.
• ECN can avoid this – it replaces loss by an explicit congestion signal; but it is only partly deployed in the Internet, although it is part of Data Center TCP.
• TCP-BBR aims at avoiding this by pacing traffic:
it estimates the available bottleneck bandwidth and the min RTT
and it tries to keep amount of inflight data close to bottleneck bandwidth × minRTT
Per-Class-based queuing can separate flows in enterprise networks or classes of flows in provider networks.