Extremely high packet loss with UDP test #296
Frequently this (lossy or slow UDP tests) can be the result of the socket buffer size being too small. Try using the -w option to increase it.
I did not change the socket buffer size. On the same system, with the same socket buffer size, iperf2 works fine, but iperf3 shows a huge amount of packet loss, so isn't it more likely that there is a bug in the UDP tests of iperf3? Shouldn't iperf3 work with default parameters at least as well/accurately as iperf2 did? Can you reproduce this issue? If not, please try on a multi-core ARM-based system, for example a Raspberry Pi 2 or something similar. I was using a Freescale i.MX6Q.
I don't have any multi-core ARM systems available, and these are not platforms that we officially support. Please try increasing the socket buffer size. I believe that iperf2 and iperf3 use different defaults.
Hi Bruce, I can confirm what Clemens sees. Actually it doesn't matter if I use an i.MX6, OMAP or an x86 machine. Even with a target bandwidth of a few Mbit/s, I get packet loss of at least 3-4%, up to 25%. Unfortunately setting the window/buffer size doesn't change that (tried 500K, 1M, 2M). I can observe the behaviour even when running server and client on the same machine.
I observe the same issue using two devices connected to a switch (consumer grade, not professional), when the sending device is connected through 1000BASE-T and the server device is connected through 100BASE-TX, with bandwidth set to anything higher than 10M. Disabling autonegotiation on the client side and forcing 100 Mbit/s Full Duplex results in no (or negligible) packet loss, even with a bandwidth target like 90M. Is it possible that iperf3 on a fast machine sends most of the data at the beginning of each iteration, causing a buffer overflow on the switch? I think iperf2 delays its writes so that data is sent more evenly across the time frame. EDIT: The sending machine has an i7 onboard and I tested on both Windows and Linux.
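For reference, a minimal sketch of the speed/duplex workaround described above using ethtool on Linux; the interface name eth0 and the values are placeholders, not taken from the original report:

  # show the current link settings (speed, duplex, autonegotiation)
  ethtool eth0

  # force 100 Mbit/s full duplex with autonegotiation disabled
  sudo ethtool -s eth0 speed 100 duplex full autoneg off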
I have hit an issue just like this. On the same network connection, using UDP, iperf2 is fine, but iperf3 showed very high packet loss, above 95%. Then I added the -l option in iperf3, and it worked well. For iperf2, no -l option was needed.
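For illustration, a minimal sketch of a UDP test that keeps the datagram size below a typical 1500-byte Ethernet MTU so the packets are not fragmented at the IP layer; the server address and bandwidth target are placeholders, not values from this thread:

  # on the receiving host
  iperf3 -s

  # on the sending host: UDP, 100 Mbit/s target, 1400-byte datagrams
  iperf3 -c server.example.com -u -b 100M -l 1400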
Confirmed. The UDP test report on the server side seems wrong.
It looks like having fq_codel as the qdisc does improve the situation. Can you confirm that? I really like the idea from @folsson (in #386) to reduce the interval of the throttling algorithm to 1 ms. At the moment it is still at 100 ms: https://github.com/esnet/iperf/blob/master/src/iperf_api.c#L1203
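For anyone wanting to try the same thing, a sketch of switching an interface's queueing discipline to fq_codel with tc (the interface name eth0 is a placeholder):

  # show the current qdisc
  tc qdisc show dev eth0

  # replace the root qdisc with fq_codel
  sudo tc qdisc replace dev eth0 root fq_codel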
Cannot confirm that fq_codel provides an improvement. However, using plain
Whilst I haven't seen this issue with iperf 3, it amounts to a ground-up rewrite compared to iperf 2. The UDP code there is normally serviced by an independent thread. This can have an effect on how responsive the pacing of packet generation is; iperf 2 calculated its inter-packet times using floating point. I noticed bwping underfilling the pipe in each time window; it was using select() and integer math only to compute delays/mean packets to send. I suspect some of the underlying variables have changed in iperf 3 given the rewrite. In experimental setups with iperf 2, I've never used anything other than a first-come, first-served queueing discipline.
Pages 197-199 of my PhD thesis (https://hdl.handle.net/10023/8681) describe some of these differences. It is known that IP multipath probably requires a different approach from what iperf has now for accurate measurements, but that's really out of the scope of this specific issue. However, if the packet sourcing is bursty, that text sheds light on exactly where to look. The most annoying thing about iperf 2 is how it is written in a broad C++ style, trying to treat UDP sockets like TCP ones as regards binding.
@legraps Maybe TCP is able to run twice as fast because using the TCP sliding window protocol and Window Scale Option reduces RX FIFO overflows in the i.MX6 Ethernet MAC. There is an erratum (ERR004512) mentioning RX FIFO overflows. See http://cache.nxp.com/files/32bit/doc/errata/IMX6DQCE.pdf |
@clemensg I'm aware of that erratum and have actually observed that using switches without PAUSE frame support leads to much worse throughput and loss in both TCP and UDP traffic. So apparently the Ethernet flow control (which works independently of the layer 3 protocol) improves the situation (for the i.MX6). But I still wonder why the TCP test appears to be less CPU intensive than plain UDP. Is counting/verifying the UDP packets in user space more expensive than the kernel's TCP implementation?
@legraps Yes, I observed the same thing. We only buy switches with support for IEEE 802.3x Flow Control from now on. The pause frames reduce the RX FIFO overflows, but this is merely treating the symptoms. I am not entirely sure if the real cause is in the ENET MAC IP from Freescale, in the fec driver on Linux, or both. Hm, good question: maybe there are fewer context switches when using TCP, due to the in-kernel implementation of TCP? If you enter
Seeing this also: iperf3, version 3.0.11, Ubuntu 14.04 + update/upgrade. Here is an example; note the 50%+ UDP datagram loss:
Here is a session between the same two hosts, same network, same everything, but the TCP results are solid, while the UDP results show insane datagram loss.
Lots of very good analysis here. It looks like we have to set both the buffer length (-l) and the bandwidth target, e.g. -l 8192 -b 1G. Trying this improved my UDP performance quite a bit. The other option that might help is --zero-copy, which I have not tried yet.
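A sketch of that invocation; the server address is a placeholder. --zero-copy (-Z) is the other option mentioned above, but it was not tested in this thread:

  # UDP test with 8 KB datagrams and a 1 Gbit/s bandwidth target
  iperf3 -c server.example.com -u -l 8192 -b 1G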
I am also facing the same issue. I have tried all the options, including -l, -Z, and -A, but all give the same result: huge packet loss. In one direction the packet loss is around 30% and in the other direction it is around 97%. I am really confused about what is wrong here. The same scenario works fine with iperf2.
Hi All
This seems to be related to the iperf3 design somehow. We see this consistently with perfSONAR, and recommend using nuttcp for UDP tests instead.
@bltierney hey there, thank you so much, that worked. nuttcp can be installed as a package on Linux and it works very well; ESnet uses it, so it's pretty solid. @purumishra, I think you should use it as well for UDP testing. Remember to use -v for verbose output, as that will also show you the buffer length. Link: http://nuttcp.net/nuttcp/5.1.3/examples.txt
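In case it helps others, a rough sketch of an equivalent UDP test with nuttcp, pieced together from the examples file linked above; the host name, rate limit, and duration are placeholders, and the exact option spelling should be checked against your nuttcp version:

  # on the receiving host
  nuttcp -S

  # on the sending host: UDP, ~100 Mbit/s rate limit, 10 seconds, 1-second interval reports, verbose
  nuttcp -u -R100m -T10 -i1 -v server.example.com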
I have another question: how do I check for jitter in nuttcp?
We note that iperf 3.1.5 and newer contain a fix for the UDP sending defaults (the old defaults resulted in packets that were too large and needed to be fragmented at the IP layer). That can account for some of the problems seen in this thread. Closing for now; please re-open or file a new issue if this problem persists.
This one hit me pretty hard. Massive packet loss over UDP (over the internet) with iperf3. 0 packet loss with iperf2. I'd like to test that fix in 3.1.5. Unfortunately all the builds on the site stop at 3.1.3. I can build it on mac but I'm not set up to build it on windows. Can someone provide a windows build of 3.1.5 or later? |
Dragonfax: because iperf3 seems to send all of an interval's packets in a burst every 1/10 of a second, the result depends on the transmit speed of the sender/server.
Note that in iperf-3.2 (which is the current version), it doesn't do these massive bursts every 0.1 second anymore. The default is to send packets on 1 ms boundaries (which should be somewhat less bursty), but you can also tune the granularity of the timer with the --pacing-timer option.
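A sketch of tuning that timer on iperf 3.2 or later; --pacing-timer takes the interval in microseconds, and the server address and bandwidth target below are placeholders:

  # default granularity: send on 1 ms (1000 microsecond) boundaries
  iperf3 -c server.example.com -u -b 200M --pacing-timer 1000

  # finer-grained, less bursty pacing: 100 microseconds
  iperf3 -c server.example.com -u -b 200M --pacing-timer 100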
Will someone build 3.2 for Windows, then? It's not included on the releases page, and I don't have the capability to build for Windows where I am.
The place to ask this question (which is not a bad one BTW) is probably on the iperf-dev or iperf-users lists. (I don't do Windows, but I had the impression it's a pretty straightforward build on Cygwin.)
Try these parameters: -w10000 (10 KB buffer size). A problem with traffic shaping seems to occur with smaller packets (for example -l100).
Tested UDP packet loss on Windows with 3.1.3 and it is still broken. Packet loss is not reported correctly; it is way too high. EDIT: I read through some comments on iperf and it seems that this tool has never worked correctly in several regards. For example, this comment shows other problems (https://arstechnica.com/civis/viewtopic.php?t=1113215). So the best thing to do is to avoid this unreliable software altogether. @DennisEdlund I tried what you suggested. It doesn't work: 80-95% packet loss.
Hi @bmah888, is there an option for tuning this UDP sending packet size?
@wangyu-: Try the -l (--length) option, which sets the size of the buffers iperf3 writes, i.e. the UDP datagram size. If you have other questions, it's probably best to post to the [email protected] mailing list, rather than adding a comment to a closed issue.
Thank you very much.
Got it. |
I encounter the same issue, also with an i.MX6. But running iperf (iperf2) on the exact same HW configuration gives much lower loss figures. So I am not sure: is it real Ethernet packet loss with the i.MX6, or is it a bug in iperf3? @clemensg, is it an i.MX6 issue?
Hi @ranshalit, there is a problem in the i.MX6 FEC that results in FIFO overflows. This can be mitigated by using a switch with IEEE 802.3x flow control enabled.
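When the switch supports it, the NIC side of IEEE 802.3x flow control can be checked and enabled with ethtool; a sketch assuming the interface is called eth0 (placeholder):

  # show the current pause-frame settings
  ethtool -a eth0

  # enable RX/TX pause frames on the interface
  sudo ethtool -A eth0 rx on tx on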
I am using iperf 3.6 now on both server and client and I still see around 50-60% packet loss, as below:

iperf3 -c 11.1.201.2 -R -P 4 -u -b 0 -p 52014 -l 1440
connected to kernel driver /dev/iperf0
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  6]   1.00-2.00   sec  30.5 MBytes   256 Mbits/sec  0.091 ms  43784/65986 (66%)
[  6]   2.00-3.00   sec  30.6 MBytes   256 Mbits/sec  0.108 ms  44898/67146 (67%)
[  6]   3.00-4.00   sec  30.5 MBytes   256 Mbits/sec  0.048 ms  44826/67013 (67%)
[  6]   4.00-5.00   sec  30.6 MBytes   257 Mbits/sec  0.069 ms  44929/67220 (67%)
[  6]   5.00-6.00   sec  29.3 MBytes   245 Mbits/sec  0.097 ms  45828/67137 (68%)
[  6]   6.00-7.00   sec  28.9 MBytes   243 Mbits/sec  0.081 ms  47599/68672 (69%)
[  6]   7.00-8.00   sec  30.3 MBytes   254 Mbits/sec  0.033 ms  45840/67882 (68%)
[  6]   8.00-9.00   sec  29.8 MBytes   250 Mbits/sec  0.051 ms  46551/68218 (68%)
[  6]   9.00-10.00  sec  30.5 MBytes   256 Mbits/sec  0.121 ms  46190/68423 (68%)

I have tried other options like -w, and even setting -b to a desired bandwidth, but the maximum I get is around 1 Gbps. I am running the test between two 10G Linux servers.
@gourabmajumdar :
Hi,
iperf2 does not report much packet loss when receiving UDP traffic on an i.MX6 quad-core processor with the fec Ethernet driver on Linux (4.2-rc7):
iperf3 on the other hand:
These values can't be right, because both iperf2 and iperf3 TCP RX tests show a maximum bandwidth of over 200 Mbit/s. It only occurs in the UDP RX test with iperf3.
This happens on iperf 3.1b3 and on iperf 3.0.7, and it looks like a serious bug in the iperf3 UDP test. Any idea what's going on?
Cheers,
Clemens