10Gig to Wifi slower than 1Gig to Wifi


antsh

New Member
Sep 14, 2017
I have a few Ubiquiti APs. When transferring to wireless clients, speeds are slower from servers with 10gig NICs than from computers with 1gig NICs.


—————————lots of details below————————


What I am seeing is that if I run a server with a 10gig NIC, my wireless transfers are slower (capped at around 300mbps) than when I use the onboard Intel 1gig NIC. With the 1gig NICs my wireless transfers run at around 600mbps up and down. I saw this happen originally with an AC-Pro, and it is continuing to happen with my nanoHDs. I've ruled out pretty much everything, and the only common factor seems to be transferring from a computer with a 10gig NIC. I will describe below all the different scenarios I tested (that I remember). I am wondering if anyone has any ideas? At this point I am thinking it is either some kind of offloading on the NIC getting in the way, or some kind of flow control on the APs.

All of the scenarios are below. When I say "Fast" I mean the expected 600mbps up/down; "Slow" means capped at 300mbps, though the server-to-wireless-client direction is usually the slowest. Also, all clients experience this: MacBook Pro (3x3 AC), iPhone (2x2 AC), Dell laptop (2x2 AC).


Debian 9 (OpenMediaVault)--Intel X520 10g--Dell 5524 Layer 3 switch--AC-Pro/nanoHD=Slow
Debian 9 (OpenMediaVault)--Intel 1gig Nic (onboard Intel i350)--Dell 5524 Layer 3 switch--AC-Pro/nanoHD=Fast
Debian 9 (OpenMediaVault)--Chelsio T420 10g--Dell 5524 Layer 3 switch--AC-Pro/nanoHD=Slow
Debian 9 (OpenMediaVault)--Intel X520 10g--Mikrotik CCS 10gig Switch--AC-Pro/nanoHD=Slow
Debian 9 (OpenMediaVault)--Chelsio T420 10g--Mikrotik CCS 10gig Switch--AC-Pro/nanoHD=Slow
Debian 9 (OpenMediaVault)--Intel 1gig Nic (onboard Intel i350)--AC-Pro/nanoHD=Fast
Freenas 11--Intel X520 10g--Dell 5524 Layer 3 switch--AC-Pro/nanoHD=Slow
Freenas 11--Intel 1gig Nic (onboard Intel i210)--Dell 5524 Layer 3 switch--AC-Pro/nanoHD=Fast
Freenas 11--Chelsio T420 10g--Dell 5524 Layer 3 switch--AC-Pro/nanoHD=Slow
Freenas 11--Intel X520 10g--Mikrotik CCS 10gig Switch--AC-Pro/nanoHD=Slow
Freenas 11--Chelsio T420 10g--Mikrotik CCS 10gig Switch--AC-Pro/nanoHD=Slow
Freenas 11--Intel 1gig Nic (onboard Intel i210)--AC-Pro/nanoHD=Fast
Debian 9 (Chelsio T420)--any switch--Freenas 11 (Intel X520)=Fast (9.5 gigabit/s)


Note that the Chelsio uses optical cables and the X520 uses DACs, so cables are ruled out too.


I am at a loss.
 

SlickNetAaron

Member
Apr 30, 2016
Are your 10gb interfaces on the same layer-2 network as the 1gb and WiFi? Jumbo packets enabled on 10g but not end-to-end, forcing fragmentation?
 

antsh

New Member
Sep 14, 2017
Same layer 2 network. No jumbo frames for this test, so MTU 1500 throughout. I will double-check the MTU to make sure, though. Thanks for the feedback, all.
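For reference, a quick way to verify the path MTU from the Linux side is to ping with the don't-fragment bit set (a rough sketch; interface and client address are placeholders):

    ip link show | grep mtu                  # confirm the configured MTU on each interface
    ping -M do -s 1472 <wireless-client-ip>  # 1472 payload + 28 bytes of headers = 1500; fails if anything in the path would fragment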
 

BackupProphet

Well-Known Member
Jul 2, 2014
Could this be related to packet pacing?
Packet Pacing

When sending from a faster host to a slower host, it is easy to overrun the receiver, leading to packet loss and TCP backing off. Similar problems occur when a 10G host sends data to a sub-10G virtual circuit, or a 40G host sending to a 10G host, or a 40G/100G host with a fast CPU sender to a 40G/100G host with a slower CPU. These issues are even more pronounced when using tools that use parallel streams, such as GridFTP. On some long paths (50-80ms RTT), we've seen TCP performance improvements of 2-4x after enabling packet pacing.
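If it is pacing, a Linux sender can be paced without touching the application. A minimal sketch, assuming the 10gig interface is eth0 and picking an arbitrary 900mbit cap to test with:

    sudo tc qdisc replace dev eth0 root fq maxrate 900mbit   # fq qdisc with per-flow pacing, capped near gigabit
    sudo sysctl -w net.core.default_qdisc=fq                 # or make fq the default qdisc system-wide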
 

antsh

New Member
Sep 14, 2017
Could this be related to packet pacing?
This is probably on the right track. I am seeing it most pronounced on server->wireless client transfers (~300mbps). Going the other direction, wireless->server, there is no degradation (~600mbps). Also, 10g<->10g transfers are fine at 9.5gbps, and even 10g<->1g wired transfers are pegged at a gigabit, so that is fine.

I suspect this is a combination of Linux network stack tuning (currently Debian 9 on kernel 4.15) and the powerful TSO engines on the Intel X520 and Chelsio T420.

Does anyone have any experience tuning packet pacing or TSO parameters before I start hitting my head against the wall and probably break something?
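Not a definitive fix, but the offloads are just ethtool toggles, so they are easy to A/B test on the Debian box (interface name is a placeholder):

    ethtool -k eth0                        # show current offload settings (lowercase -k = query)
    sudo ethtool -K eth0 tso off gso off   # disable segmentation offloads for a test run
    sudo ethtool -K eth0 tso on gso on     # re-enable afterwards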
 

cesmith9999

Well-Known Member
Mar 26, 2013
Is this a single- or dual-processor server? And what are the RSS settings?

One of the things I have to make sure of is that the RSS queues are on the processor/PCIe bus that the NIC is attached to; otherwise all the calculations have to go over the QPI bus...

Not really an issue with 1Gb NICs, but it is a big issue with 10Gb and higher NICs.

Chris
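On the Debian box that locality is easy to check from sysfs and /proc; a sketch, assuming the 10gig interface is eth0:

    cat /sys/class/net/eth0/device/numa_node   # NUMA node the NIC hangs off (-1 or 0 on a single-socket board)
    ethtool -x eth0                            # show the RSS indirection table
    grep eth0 /proc/interrupts                 # which CPUs are servicing the NIC's queues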
 

antsh

New Member
Sep 14, 2017
Single-socket E5-2680 v2, NIC in an x8 PCIe 3.0 slot. The weird thing is that upload to the server from the WiFi client is full speed. The reverse direction, 10gig to WiFi, is half as fast and shows more retransmits in iperf. I have a feeling it is overly aggressive flow control or similar tuning somewhere in the chain.
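For anyone trying to reproduce the asymmetry, iperf3's reverse flag lets both directions be measured from the wireless client; the Retr column shows the retransmits (server address is a placeholder):

    iperf3 -s                        # on the server
    iperf3 -c <server-ip> -t 30      # client -> server (upload); fast in my case
    iperf3 -c <server-ip> -t 30 -R   # server -> client (download); watch the Retr column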
 

oddball

Active Member
May 18, 2018
Are you overrunning your buffers on the switch?

You can fit 10x the frames on the 10GbE link. If you're saturating it, those frames need to be buffered before they're forwarded at 1Gbps toward the WiFi. My guess is the buffer on your switch can't keep up, so it starts dropping packets, and that's when you notice the performance issues.

There are 10GbE-to-1GbE switches with really deep buffers to handle this situation.

We have steps from 40GbE to 10GbE and 10GbE to 1GbE, but our traffic comes in bursts, so the step down isn't an issue; if we were saturating the links in between, we'd start to have problems.
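One way to confirm that theory is to watch the drop and retransmit counters on the sending host during a transfer (the switch's own counters would need its CLI/GUI); a sketch with an assumed interface name:

    ethtool -S eth0 | grep -iE 'drop|discard'   # NIC-level drops/discards
    ip -s link show eth0                        # kernel-level TX/RX errors and drops
    netstat -s | grep -i retrans                # TCP retransmission counters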
 

mgittelman

New Member
Mar 4, 2019
@antsh Did you ever find more answers or a solution to this? I'm experiencing the exact same problem, but with a QNAP 10Gb switch. Setting the server to 1Gbps rather than 10Gbps fixes the problem, and my wireless speeds go from 300mbps to 500mbps+.
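For reference, on a Linux server the downshift can be tried with ethtool, though whether it actually links up depends on the NIC and module; SFP+ DACs often cannot negotiate down to 1G (interface name assumed):

    sudo ethtool -s eth0 speed 1000 duplex full autoneg off   # force 1Gbps as a test
    sudo ethtool -s eth0 autoneg on                           # restore autonegotiation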
 

IamSpartacus

Well-Known Member
Mar 14, 2016
Are you overrunning your buffers on the switch?

You can fit 10x the frames on the 10GbE link. If you're saturating it, those frames need to be buffered before they're forwarded at 1Gbps toward the WiFi. My guess is the buffer on your switch can't keep up, so it starts dropping packets, and that's when you notice the performance issues.

There are 10GbE-to-1GbE switches with really deep buffers to handle this situation.

We have steps from 40GbE to 10GbE and 10GbE to 1GbE, but our traffic comes in bursts, so the step down isn't an issue; if we were saturating the links in between, we'd start to have problems.
What would be considered a deep packet buffer for a 10Gb switch? And assuming my switch (Cisco SG350XG-24F) has too small a packet buffer (2MB aggregate), is there anything one can do to mitigate this shortcoming?
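Short of a switch with bigger buffers, the usual mitigation is to keep each 10G sender's bursts small enough to fit, either with the fq pacing mentioned above or by capping the TCP send buffer on the server. A sketch only; the values are starting points to tune, not recommendations:

    sudo sysctl -w net.ipv4.tcp_wmem="4096 65536 1048576"   # min/default/max send buffer per socket, in bytes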
 

mgittelman

New Member
Mar 4, 2019
Any more ideas on this? I got a 1Gbps Cisco switch to test with. As long as the servers or desktops with 10Gb cards are connected to the 1Gbps switch, or running at 1Gbps on the 10Gbps switch, my receive speed in iperf3 on clients is over 500mbps. Otherwise it hits 300mbps and stays there. On a Mac with a 3x3 radio, I actually get about 300mbps down and 700mbps up. It's driving me crazy! Is this a flaw in the drivers, the switches, or just something inherent to stepping traffic down from 10Gb to wireless in general? Why is it that wired clients running at 1Gbps don't have this issue, but multiple AC APs do?