WebRTC Performance
d_i = (t_i − t_{i−1}) − (T_i − T_{i−1})    (3)

This delay shows the relative increase or decrease with respect to the previous packet. The one-way delay variation is larger than 0 if the inter-arrival time is larger than the inter-departure time. The arrival-time filter estimates the one-way queuing delay variation m_i. The calculation of m_i is based on the measured d_i and the previous state estimate m_{i−1}, whose weights are dynamically adjusted by a Kalman filter to reduce estimation noise. For instance, the current measurement d_i is weighted more heavily than the previous estimate m_{i−1} when the error variance is low. For more details, see [15].
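As a minimal illustration of this filtering step, the sketch below (ours, not the WebRTC source; the production filter also adapts its noise estimates continuously) updates m_i from the d_i of Equation (3) with a scalar Kalman-style gain:

class ArrivalTimeFilter:
    """Sketch of the arrival-time filter; all constants are illustrative."""

    def __init__(self):
        self.m = 0.0          # current estimate m_i of queuing delay variation (ms)
        self.e = 0.1          # variance of the estimate error
        self.q = 1e-3         # process noise added per update
        self.noise_var = 1.0  # measurement noise variance (adaptive in real GCC)

    def update(self, t_i, t_prev, T_i, T_prev):
        d_i = (t_i - t_prev) - (T_i - T_prev)  # Equation (3)
        # Kalman gain: d_i is weighted more heavily when the measurement
        # noise variance is low relative to the estimate uncertainty
        k = (self.e + self.q) / (self.e + self.q + self.noise_var)
        self.m = self.m + k * (d_i - self.m)
        self.e = (1.0 - k) * (self.e + self.q)
        return self.m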
2.1.2 Over-use detector

The estimated one-way queuing delay variation (m_i) is compared to a threshold γ. Over-use is detected if the estimate is larger than this threshold. The over-use detector does not signal this to the rate controller unless over-use is detected for a specified period of time, currently set to 100ms [10]. Under-use is detected when the estimate is smaller than the negative value of this threshold, and works in a similar manner. A normal signal is triggered when −γ ≤ m_i ≤ γ.

Figure 2: Rate controller state changes based on the signal output of the over-use detector.
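A compact sketch of this detector logic (our simplification of the description above; only the 100ms trigger time is taken from [10]):

def overuse_signal(m_i, gamma, now_ms, state):
    """Return 'overuse', 'underuse', or 'normal' for the rate controller.

    `state` remembers when m_i first exceeded gamma (None if it has not).
    """
    OVERUSE_TIME_MS = 100.0  # over-use must persist this long before signaling [10]
    if m_i > gamma:
        if state["first_over"] is None:
            state["first_over"] = now_ms
        if now_ms - state["first_over"] >= OVERUSE_TIME_MS:
            return "overuse"
        return "normal"  # over-use not yet sustained
    state["first_over"] = None
    if m_i < -gamma:
        return "underuse"
    return "normal"  # -gamma <= m_i <= gamma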
The value of the threshold has a large impact on the overall performance of the GCC congestion algorithm. A static threshold γ can easily result in starvation in the presence of concurrent TCP flows, as shown in [11]. Therefore, a dynamic threshold was implemented as follows:

γ_i = γ_{i−1} + (t_i − t_{i−1}) · K_i · (|m_i| − γ_{i−1})    (4)

The value of the gain, K_i, depends on whether |m_i| is larger or smaller than γ_{i−1}:

K_i = { K_d   if |m_i| < γ_{i−1}
        K_u   otherwise }    (5)
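In code, the threshold update of Equations (4) and (5) reduces to a few lines; the sketch below is ours, and the gain values are illustrative placeholders rather than the tuned constants:

K_U = 0.01     # gain applied while |m_i| >= gamma (threshold grows quickly)
K_D = 0.00018  # gain applied while |m_i| < gamma (threshold decays slowly)

def update_threshold(gamma_prev, m_i, t_i, t_prev):
    k_i = K_D if abs(m_i) < gamma_prev else K_U  # Equation (5)
    # Equation (4): gamma drifts toward |m_i| at a rate set by k_i
    return gamma_prev + (t_i - t_prev) * k_i * (abs(m_i) - gamma_prev)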
The resulting receiver-side rate estimate A_r is sent to the sender in a REMB (Receiver Estimated Maximum Bandwidth) message in an RTCP report (Figure 1).

2.2 Sender-side controller

The sender-side controller is loss-based and computes the sending rate at the sender, A_s, in Kbps; it is shown on the left side of Figure 1. A_s is computed every time (t_k) the kth RTCP report or a REMB message is received from the receiver. The estimation of A_s is based on the fraction of lost packets f_l(t_k) as follows:

A_s(t_k) = { A_s(t_{k−1}) (1 − 0.5 f_l(t_k))   if f_l(t_k) > 0.1
             1.05 A_s(t_{k−1})                 if f_l(t_k) < 0.02     (6)
             A_s(t_{k−1})                      otherwise

If the packet loss is between 2% and 10%, the sending rate remains unchanged. If more than 10% of the packets are reported lost, the rate is multiplicatively decreased. If the packet loss is smaller than 2%, the sending rate is linearly increased. Furthermore, the sending rate can never exceed the last available rate at the receiver, A_r(t_k), which is obtained through REMB messages from the receiver, as seen in Figure 1.
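Equation (6), together with the REMB cap, translates directly into the following sketch (a hypothetical helper of ours, not WebRTC's API):

def sender_rate(a_s_prev, loss_fraction, a_r):
    """Update the sending rate A_s(t_k) from the reported loss fraction.

    a_s_prev:      previous rate A_s(t_{k-1}) in Kbps
    loss_fraction: f_l(t_k), fraction of packets reported lost (0..1)
    a_r:           last receiver-side rate A_r(t_k) from a REMB message
    """
    if loss_fraction > 0.10:    # heavy loss: multiplicative decrease
        a_s = a_s_prev * (1.0 - 0.5 * loss_fraction)
    elif loss_fraction < 0.02:  # negligible loss: 5% increase
        a_s = 1.05 * a_s_prev
    else:                       # 2%..10% loss: hold the rate
        a_s = a_s_prev
    return min(a_s, a_r)        # never exceed the receiver estimate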
3. EXPERIMENTAL SETUP

Figure (experimental setup): two WebRTC nodes (Node 1 and Node 2), each with a media source, media encoder, and packetization stage, with statistics gathered at both ends.

Figure 5: Data rate with no network constraints.

Figure 7: Data rate with packet loss (no loss, 5%, 10%, and 20% packet loss).

Table 2: Changing latency sequence.

Minute   Latency change (from - to)   Steepness
0-1      0ms - 250ms                  exponential
1-2      250ms (constant)             N.A.
2-3      250ms - 0ms                  linear
3-4      0ms - 500ms                  linear
4-5      500ms - 0ms                  exponential
While the previous experiments assume a constant bandwidth, a more common scenario is that these constraints change over time, so we look at how fast WebRTC adapts to new conditions. We simulate this behavior by changing the network constraints every minute.

Figure 8: Data rate with changing bandwidth for both nodes.

In Figure 8, we cap the available bandwidth for a minute consecutively at 1000Kbps, 3000Kbps, 500Kbps, 1500Kbps and 1000Kbps. In this scenario, the bandwidth utilization is 77% of the available bandwidth, which is slightly less than the 80% bandwidth utilization when the available bandwidth is not changed. This is mostly due to the delay it takes to reach a steady bandwidth when more bandwidth becomes available at minutes 1:00 and 3:00, where, respectively, 16 and 18 seconds are needed. As seen in Equation (1), this delay confirms what we expect from GCC, since the bandwidth increases with a factor 1.05 when under-use is detected. This is because REMB messages are sent approximately once per second.

Figure 9: Data rate and resulting RTT when continuously changing the latency for both nodes.

The actual round trip time is close to the set latency. Unlike other experiments, we notice that the data rate is significantly different for both parties even though additional latency is added in both directions. As expected, the data rate climbs up to the maximum data rate when latency is decreased (at minutes 2 and 4) or kept constant (minute 1). More unexpectedly, GCC does not seem to kick in until after 40 seconds even though the RTT is increasing exponentially. This is presumably due to the ramp-up function described earlier, which allows WebRTC to reach the maximum data rate faster. We observe that GCC responds actively to the RTT transition at minute 3, where a decreasing and subsequently increasing RTT results in a large drop in data rate.

In addition to studying the effect of different packet loss values, we also consider how packet loss that changes during the lifespan of a call affects the call characteristics. Here we change the packet loss every minute to 10%, 0%, 5%, 7.5% and 15%, as shown in the bottom graph of Figure 10. The resulting data rate is shown in Figure 10.
Figure 10: Data rate when the packet loss changes during the call; the set packet loss is shown in the bottom graph.
The results are comparable to what we observed in Section 4.1. A packet loss of 5% and 7.5% only slightly drops the data rate (minutes 2 and 3), whereas a packet loss ≥ 10% reduces the data rate significantly. It takes approximately 30 seconds to reach the maximum data rate when packet loss is removed after the first minute. This is consistent with our expectations, given the 5% increase in data rate when packet loss is less than 2% (Equation (6)), as set by GCC. The data rate increases by 5% every second for 30 seconds, which comes down to 550Kbps × 1.05^30 ≈ 2400Kbps, close to the 2500Kbps reached at minute 1.
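The arithmetic behind this estimate can be checked directly (5% compound growth per one-second REMB interval):

rate = 550.0          # Kbps, data rate right after the loss period
for _ in range(30):   # 5% multiplicative increase per second, 30 seconds
    rate *= 1.05
print(round(rate))    # -> 2377, i.e. roughly the observed 2500Kbps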
Lastly, we study the effects of changing both the latency and the available bandwidth. We simulate the effect of handoff, which for instance occurs when a cellular receiver moves
from one Base Station to another. For this experiment, we also limit the available uplink and downlink bandwidth differently, since it is common for the uplink rate to be lower than its downlink counterpart. The experimental procedure is shown in Table 3.

Table 3: Experiment procedure for changing both latency, uplink and downlink bandwidth.

Minute   Round trip time   Downlink    Uplink
0-1      60ms              3000Kbps    3000Kbps
1-2      200ms             750Kbps     250Kbps
2-3      500ms             250Kbps     100Kbps
3-4      150ms             1250Kbps    500Kbps
4-5      200ms             750Kbps     250Kbps

Figure 11: Data rate and resulting RTT when changing the latency and bandwidth according to Table 3.

The resulting data rates and latencies are shown in Figure 11. We notice that the bandwidth utilization is 69% of the available bandwidth, which is significantly lower than the 77% bandwidth utilization (Figure 8) when there is no additional latency. The limited bandwidth also results in additional latency, especially when the bandwidth is extremely limited (250Kbps downlink / 100Kbps uplink) at minute 2, when the RTT increases to more than twice the value it was set to.
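The per-minute procedures of Tables 2 and 3 can be driven by a simple loop; the sketch below is our illustration using the Table 3 values, and apply_constraints is a hypothetical hook standing in for whatever link emulator is used:

import time

SCHEDULE = [  # (RTT in ms, downlink Kbps, uplink Kbps), one entry per minute (Table 3)
    (60, 3000, 3000),
    (200, 750, 250),
    (500, 250, 100),
    (150, 1250, 500),
    (200, 750, 250),
]

def apply_constraints(rtt_ms, down_kbps, up_kbps):
    # hypothetical hook: reconfigure the link emulator with the new values
    print(f"set RTT={rtt_ms}ms downlink={down_kbps}Kbps uplink={up_kbps}Kbps")

for rtt_ms, down_kbps, up_kbps in SCHEDULE:
    apply_constraints(rtt_ms, down_kbps, up_kbps)
    time.sleep(60)  # hold each setting for one minute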
4.3 Cross traffic

WebRTC traffic competes with cross traffic when there are other TCP/UDP flows active that share the same bottleneck. It has been shown in previous measurement studies that, in the presence of concurrent TCP flows, WebRTC's UDP streams could starve due to less aggressive congestion control [10]. Recently, Google Congestion Control has been updated to include an adaptive threshold (γ) with the aim of guaranteeing fairness when competing with concurrent flows [10]. In this section, our goal is to evaluate the impact of the adaptive threshold on fairness. We first evaluate the performance of a single WebRTC stream competing with other WebRTC streams while sharing the same bottleneck link. Next, we conduct the same test with competing TCP flows.

Figure 12: Distribution of bandwidth across three WebRTC calls.

We first limit the available bandwidth to 2000Kbps and test how the available bandwidth is distributed when three WebRTC calls share this bottleneck. To test how fast the congestion control algorithm adapts, we stagger the start times of the calls: we start with one call, add a second call after one minute, and add a third call after 2 minutes. To see how fast WebRTC adapts once bandwidth is freed, we drop the third call in minute 4. The results of this experiment are shown in Figure 12. The cumulative data rate is 78% of the available bandwidth, which is comparable to our earlier measured bandwidth utilizations (Figures 5 and 8). We see that the data rate momentarily drops when a new stream enters or leaves the bottleneck (minutes 01:00, 02:00 and 04:00). The data rates subsequently converge to their fair-share value, but the time to convergence is almost a minute when two streams compete, and even longer with 3 streams. The Jain Fairness Index values in the case of two streams and three streams are 0.98 and 0.94, respectively. Since both scores are close to 1, fairness is maintained.
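For reference, the Jain Fairness Index over the per-call data rates is J = (Σx)² / (n · Σx²), which is one line of code:

def jain_index(rates):
    """Jain Fairness Index: 1.0 means every call gets an equal share."""
    n = len(rates)
    return sum(rates) ** 2 / (n * sum(r * r for r in rates))

print(jain_index([1000, 1000]))  # 1.0: perfectly fair
print(jain_index([1300, 700]))   # ~0.92: noticeably unfair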
We then introduce a ten-minute competing TCP flow at minute 01:00.

Figure 13: Data rate of a WebRTC flow and a competing TCP flow.
4.4 Multi-party topology comparison

In this section, we compare the performance of several topologies that can be used for multi-party video conferencing. We evaluate 2-person, 3-person, and 4-person video conferencing for these topologies.

In a meshed call, each participant sends its stream to, and receives a stream from, each of the n−1 other peers, where n equals the number of participants. The results for 2-, 3-, and 4-person meshed calls are shown in Figure 14. The data rates in this graph show both the average uplink and downlink data rates. The rates for 3-person calls are close to two times the rates of 2-person calls (factor 2.03). Surprisingly, 4-person calls have less than 3 times the rate compared to 2-person calls (factor 2.77), mostly due to the long startup delay. The rate is also very volatile compared to the other calls, which maintain a constant data rate, even though we averaged out several 4-person calls.

Figure 14: Average data rates for 2-, 3-, and 4-person meshed calls.

By introducing an extra server to forward the streams, we can reduce the necessary uplink bandwidth. By utilizing a server as a Selective Forwarding Unit (SFU), all the participants only have to upload their stream once, and the SFU forwards it to the other participating clients. This approach introduces extra latency, because streams are relayed, but significantly reduces both CPU usage (for encoding all streams) and the necessary uplink bandwidth. The results are shown in Figure 15. Compared to meshed calls, it takes significantly longer to reach a stable data rate (30 seconds vs. 15 seconds). For a 3-person call, the average downlink rate is 2.00 times higher than the uplink rate. For a 4-person call, the downlink rate is 2.95 times higher.

Figure 15: Average uplink and downlink data rates for 2-, 3-, and 4-person calls using an SFU.
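The downlink-to-uplink asymmetry follows directly from the stream counts in each topology; a back-of-the-envelope sketch (our illustration):

def mesh_streams(n):
    # full mesh: every client uploads to and downloads from n-1 peers
    return {"uplink": n - 1, "downlink": n - 1}

def sfu_streams(n):
    # SFU: one upload per client; the server forwards the n-1 other streams
    return {"uplink": 1, "downlink": n - 1}

for n in (2, 3, 4):
    print(n, "participants | mesh:", mesh_streams(n), "| SFU:", sfu_streams(n))

For an SFU call this predicts downlink/uplink ratios of 2 and 3 for 3- and 4-person calls, in line with the measured factors of 2.00 and 2.95.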
WebRTC on the iOS and Android mobile platforms is limited to a resolution of 640x480 at 30FPS, even though their cameras are able to handle higher resolutions. Furthermore, iOS and Android behave similarly across all characteristics. Their data rates are both significantly less than Chrome's when there is no congestion (1750Kbps vs. 2500Kbps), and their average RTT is much higher. It also takes longer for both mobile platforms to reach the maximum data rate when compared to Chrome (20 seconds vs. 10 seconds).

Figure 16: Average data rates, RTT, resolution, and framerate for different video codecs (H.264, VP8, VP9) when the network varies according to Table 4.

Table 6: Average call characteristics for different platforms.

                      Chrome     iOS       Android
Data rate (Kbps)      1237.8     1022.5    1047.3
RTT (ms)              80.0       95.4      100.0
Framerate (FPS)       42.96      27.9      27.8
Packet loss (%)       2.30       2.18      2.2
Resolution (pixels)   1006x566   602x339   675x380

7 https://github.com/eface2face/cordova-plugin-iosrtc
Figure 17: Average data rates, RTT, resolution, and frame rate for iOS, Android and Chrome when the network varies as shown in Table 4.

6. WIRELESS PERFORMANCE

In this section, we evaluate the performance of WebRTC over real networks. We specifically focus on studying the impact of a WiFi hop on WebRTC.

6.1 Benchmarking

In Section 4, we observed that GCC is sensitive to changes in latency and packet losses. Transmitting over wireless networks may result in bursty packet losses and dynamic latencies due to subsequent retransmissions, especially if the end-to-end Round Trip Time (RTT) of the WebRTC connection is large. In this section, we characterize the effects of wireless links on the performance of WebRTC by comparing against the performance on wired links.

We consider 3 types of WebRTC nodes: (i) a local wireless node, (ii) a local wired node, and (iii) remote wired nodes. We used a 2013 ASUS Nexus 7 tablet as a local wireless node connected to an IEEE 802.11 DD-WRT enabled Access Point (AP). The wired node is either a local machine located in our lab in New York City or a remote server running in the Amazon EC2 cloud. We consider two cases for the remote server: one in the AWS Oregon availability zone and one in the AWS Sydney availability zone, which provide different magnitudes of RTT. This allows us to study the impact of higher RTT as compared to the local machine.

Both the local and remote machines run Ubuntu 14.04 with Google Chrome 57.0 as the browser. We use the same injected video files for a fair comparison. Moreover, all the machines have sufficient computational power to eliminate the impact of devices on video performance. A virtual display buffer was used on the EC2 servers to run WebRTC on Chrome in headless mode. For the wireless node, we used 5GHz channels to minimize the interference from other IEEE 802.11 networks. To emulate the conditions of high loss environments, the AP transmission power was set to 1mW. We experiment with different channel conditions with the wireless node being in the same room as the AP (approximately 5 feet away), as well as outside of the room (approximately 25 feet away).

Table 7 shows average call statistics for two fully-wired calls with one wired node located in the NYC area in the lab and the other node in Oregon or Sydney. The NYC node was injecting a video encoded at 50FPS, and the remote nodes were using a video encoded at 60FPS. The average RTTs for the Oregon and Sydney calls were 77.74ms and 214.86ms, respectively. Accordingly, we term these scenarios as "medium" and "high" call latencies, as compared to the "short" latency scenario with both nodes in the NYC area. These results establish a baseline performance of WebRTC in realistic network conditions.

Table 7: Baseline statistics of wired calls with differing RTTs.

Call Path        Data Rate (Kbps)   Frame Rate (FPS)   Frame Width (pixels)
NYC to Sydney    2971.11            49.58              1278.39
Sydney to NYC    2352.66            58.51              1280.00
NYC to Oregon    3001.45            49.68              1280.00
Oregon to NYC    2305.43            58.47              1242.83
Next, we perform video calls with one wireless node and the other node either being a local wired node or one of the two remote nodes. A 720p video encoded at 50FPS was used across all 3 cases. On the wireless node, the camera on the Nexus tablet was used as the video source, because video could not be injected into the Android distribution of Chrome without rooting the device.

Figure 18 depicts the data rate, frame rate, frame width, and the RTT for a single call with high latency between a server located in the Oregon area and a wireless node in the lab. For comparison, we also show the performance of a fully wired call in a similar scenario. Adding a wireless hop in typical indoor conditions creates a significant change in RTT characteristics. We observe that the peaks in RTT at 20, 30, and 50 seconds correspond to drops in frame rates, which lead to poor video quality for the user. Furthermore, we observe that these RTT peaks persist even after frame rates and data rates drop.

A comparison of packet inter-arrival times between a wired and a wireless call is shown in Figure 19. Further, Figure 19 effectively illustrates how the wireless hop changes the delay variation d_i (according to (3)) used by GCC's arrival-time filter.8 In all our experiments, the number of packet losses was relatively low (packet losses are handled by retransmissions). Thus, the large variation in packet inter-arrival times generally results in variations in video quality, since GCC relies on packet inter-arrival times for congestion control.

Figure 20 shows performance results of experiments for the near and far scenarios. Although the calls are two-way, the figures depict call performance statistics for the data received at the wireless node. Each result is an average of four identical experiments of 200 seconds each.

8 The impact of packet inter-departure time is minimal and we exclude it from our calculations.
Figure 19: Comparison of time delta characteristics for a wired call (left) and a call with a wireless hop (right).

Figure 20: Experimental results for calls with the wireless node at a static position near (same room) and far (outside of room) from the AP: (a) the average frame rate, (b) average frame width, (c) average RTT, and (d) packet loss.
Bursty wireless losses can be mitigated by techniques such as MAC layer retransmits or PHY layer rate adaptation. In WebRTC's current implementation, however, we observe that GCC is too sensitive to packet loss to benefit from MAC-layer retransmission adaptation. A future direction is to study the impact of lowering PHY layer transmission rates to guarantee successful packet delivery at the expense of reduced bandwidth. Figure 22 shows the PHY transmission rate when the wireless node is near or far from the AP. The PHY transmission rate is usually higher than the minimum transmission rate (6 Mbps). Reducing the PHY transmission rate may reduce the number of packet losses while still ensuring sufficient bandwidth for the WebRTC call.

Figure 21: Experimental results for a wireless call between NYC and Sydney with high (above) and low (below) MAC layer retry limits.

Figure 22: Comparison of PHY data rate for the tablet positioned "Near" (left) and "Far" (right) from the AP.

Figure 23: Experimental results for calls with MAC layer retry limits varied between the maximum and minimum values on the AP: the average (a) frame rate, (b) frame width, (c) RTT, and (d) packet loss.
7. RELATED WORK

Performance evaluation and design of congestion control algorithms for live video streaming have received considerable attention. Below, we highlight the most relevant work.

Congestion control for multimedia: TCP variants such as Tahoe and Reno [16] have been shown to lead to poor performance for multimedia applications, since they rely only on losses for congestion indication. The approaches to address the shortcomings of these techniques can be divided into two categories.

The first variety of congestion control algorithms uses variants of delay to infer congestion. Delay-based variants of TCP such as Vegas [5] and FAST [24] rely on measuring round trip delays, but they are more reactive than proactive in congestion control. LEDBAT [22] relies on measuring one-way packet delays to ensure high throughput while minimizing delays. Sprout [25] utilizes stochastic forecasts of cellular network performance to achieve the same goals. The second category of congestion control relies on Active Queue Management (AQM) techniques. NADA [27] uses Explicit Congestion Notifications (ECN) and loss rate to obtain an accurate estimate of losses for congestion control.

WebRTC congestion control: SCReAM [17] is a hybrid loss- and delay-based congestion control algorithm for conversational video over LTE. FBRA [19] proposes a FEC-based congestion control algorithm that probes for the available bandwidth through FEC packets. In the case of losses due to congestion, the redundant packets help in recovering the lost packets.

WebRTC performance evaluation: Several papers have studied the performance of WebRTC. Most related work focuses on a single aspect of the protocol or uses outdated versions of WebRTC in their performance analyses. [2] analyzes the Janus WebRTC gateway, focusing on its performance and scalability only for audio conferencing in multi-party calls. [8] focuses on a comparison of end-to-end and AQM-based congestion control algorithms. [7] evaluates the performance of WebRTC over IEEE 802.11 and proposes techniques for grouping packets together to avoid GCC's reaction to bursty losses.

[10] presents the design of the most recent version of the GCC algorithm used in the WebRTC stack. While [10] provides a preliminary analysis of GCC in some synthetic network conditions, it does not focus on WebRTC's performance on mobile devices or on real wired and wireless networks. Its main focus is on inter-protocol fairness between different RTP streams and RTP streams competing with TCP flows.

[23] provides an emulation-based performance evaluation of WebRTC. However, all flaws identified in [23] have been subsequently addressed in WebRTC. For instance, the data rate no longer drops at high latencies (but instead responds to latency variation), the bandwidth sharing between TCP and RTP is fairer due to the newly introduced dynamic threshold, and the available bandwidth is shared more equally when competing RTP flows are added.

A more realistic performance study using real network effects is done in [13], where the performance of WebRTC is measured with mobile users in different areas. Even though the WebRTC implementation used is outdated, the paper suggests that WebRTC's over-reliance on packet loss signals leads to under-utilization of the channel due to mobility.
8. LESSONS LEARNED

We believe that our evaluation and the insights derived from it can serve as a useful guide for developers of applications leveraging WebRTC. While we have done an extensive evaluation of the performance of GCC and WebRTC in a wide variety of environments, there are several open issues and directions for future research.

The new changes in the GCC algorithm include an adaptive threshold for congestion control. Our evaluations show that this ensures better fairness between competing WebRTC RTP and TCP flows than reported in earlier studies. However, optimal fairness is still not achieved, and the adaptive threshold prioritizes WebRTC's RTP flows more aggressively than desired.
We compared the performance of mesh- and Selective Forwarding Unit (SFU) based topologies for group video calls using WebRTC. Our evaluation shows that adding an SFU can significantly improve the performance of multi-party video calls. The positioning and dimensioning of the SFU in the network are interesting future research directions.

Our experiments demonstrated that the newly added H.264 and VP9 codecs do not perform as expected in the presence of congestion or packet losses. It is not immediately clear if this performance issue is due to codec design or an implementation flaw, and it requires further investigation.

We experimentally evaluated video calls on WebRTC in real networks, specifically focusing on wireless networks. Our experiments show that WebRTC can suffer from poor performance over wireless due to bursty losses and packet retransmissions.

In future work, we will consider modifications to the GCC algorithm to improve its performance with bursty packet losses and large variations in RTT. Further, we will study more complex cross-layer approaches to address the performance issues of WebRTC over wireless, including PHY-layer rate adaptation and dynamic adaptation of retransmission limits along with congestion control.
9. CONCLUSION

In this paper, we evaluated the performance of WebRTC-based video conferencing, with the main focus being on the Google Congestion Control (GCC) algorithm. Our evaluations in synthetic, yet typical, network scenarios show that

10. ACKNOWLEDGEMENTS

We would like to thank Rodda John, Columbia University, for his help in implementing scripts to analyze wireless performance data. This work was supported in part by NSF grants CNS-1423105 and CNS-1650685.

11. REFERENCES
[1] One-way transmission time. ITU-T Recommendation G.114, May 2003.
[2] Amirante, A., Castaldi, T., Miniero, L., and Romano, S. P. Performance analysis of the Janus WebRTC gateway. In Proc. ACM AWeS'15 (2015).
[3] Ammar, D., De Moor, K., Xie, M., Fiedler, M., and Heegaard, P. Video QoE killer and performance statistics in WebRTC-based video communication. In Proc. IEEE ICCE'16 (2016).
[4] Bergkvist, A., Burnett, D. C., Jennings, C., Narayanan, A., and Aboba, B. WebRTC 1.0: Real-time communication between browsers. Online, 2016. http://www.w3.org/TR/webrtc/.
[5] Brakmo, L. S., and Peterson, L. L. TCP Vegas: End to end congestion avoidance on a global Internet. IEEE J. Sel. Areas Commun. 13, 8 (1995), 1465–1480.
[6] Carbone, M., and Rizzo, L. Dummynet revisited. ACM SIGCOMM Comput. Commun. Rev. 40, 2 (2010), 12–20.
[7] Carlucci, G., De Cicco, L., Holmer, S., and Mascolo, S. Making Google congestion control robust over Wi-Fi networks using packet grouping. In Proc. ACM ANRW'16 (2016).
[8] Carlucci, G., De Cicco, L., and Mascolo, S. Controlling queuing delays for real-time communication: the interplay of E2E and AQM algorithms. ACM SIGCOMM Comput. Commun. Rev. 46, 3 (2016).
[9] Chen, W., Ma, L., and Shen, C.-C. Congestion-aware MAC layer adaptation to improve video telephony over Wi-Fi. ACM Trans. Multimedia Comput. Commun. Appl. 12, 5s (2016), 83:1–83:24.
[10] De Cicco, L., Carlucci, G., Holmer, S., and Mascolo, S. Analysis and design of the Google congestion control for web real-time communication (WebRTC). In Proc. ACM MMSys'16 (2016).
[11] De Cicco, L., Carlucci, G., and Mascolo, S. Understanding the dynamic behaviour of the Google congestion control for RTCWeb. In Proc. IEEE PV'13 (2013).
[12] De Cicco, L., Carlucci, G., and Mascolo, S. Experimental investigation of the Google congestion control for real-time flows. In Proc. ACM SIGCOMM FhMN'13 (2013).
[13] Fund, F., Wang, C., Liu, Y., Korakis, T., Zink, M., and Panwar, S. S. Performance of DASH and WebRTC video services for mobile users. In Proc. IEEE PV'13 (2013).
[14] Hardie, T., Jennings, C., and Turner, S. Real-time communication in web-browsers. Online, 2012. https://tools.ietf.org/wg/rtcweb/.
[15] Holmer, S., Lundin, H., Carlucci, G., De Cicco, L., and Mascolo, S. A Google congestion control algorithm for real-time communication. IETF draft, 2015. https://tools.ietf.org/html/draft-ietf-rmcat-gcc-01.
[16] Jacobson, V. Congestion avoidance and control. In Proc. ACM SIGCOMM'88 (1988).
[17] Johansson, I. Self-clocked rate adaptation for conversational video in LTE. In Proc. ACM SIGCOMM CSWS'14 (2014).
[18] Mukherjee, D., Bankoski, J., Grange, A., Han, J., Koleszar, J., Wilkins, P., Xu, Y., and Bultje, R. The latest open-source video codec VP9 - an overview and preliminary results. In Proc. IEEE PCS'13 (2013).
[19] Nagy, M., Singh, V., Ott, J., and Eggert, L. Congestion control using FEC for conversational multimedia communication. In Proc. ACM MMSys'14 (2014).
[20] Nam, H., Kim, K.-H., and Schulzrinne, H. QoE matters more than QoS: Why people stop watching cat videos. In Proc. IEEE INFOCOM'16 (2016).
[21] Schulz-Zander, J., Mayer, C., Ciobotaru, B., Schmid, S., Feldmann, A., and Riggio, R. Programming the home and enterprise WiFi with OpenSDWN. In Proc. ACM SIGCOMM'15 (2015).
[22] Shalunov, S., Hazel, G., Iyengar, J., and Kuehlewind, M. Low extra delay background transport (LEDBAT). IETF RFC 6817, 2012.
[23] Singh, V., Lozano, A. A., and Ott, J. Performance analysis of receive-side real-time congestion control for WebRTC. In Proc. IEEE PV'13 (2013).
[24] Wei, D. X., Jin, C., Low, S. H., and Hegde, S. FAST TCP: motivation, architecture, algorithms, performance. IEEE/ACM Trans. Netw. 14, 6 (2006), 1246–1259.
[25] Winstein, K., Sivaraman, A., Balakrishnan, H., et al. Stochastic forecasts achieve high throughput and low delay over cellular networks. In Proc. USENIX NSDI'13 (2013).
[26] Yiakoumis, Y., Katti, S., Huang, T.-Y., McKeown, N., Yap, K.-K., and Johari, R. Putting home users in charge of their network. In Proc. ACM UbiComp'12 (2012).
[27] Zhu, X., and Pan, R. NADA: A unified congestion control scheme for low-latency interactive video. In Proc. IEEE PV'13 (2013).