MNPA19-XTR Windows 10 speed/iperf issues


justinb

New Member
Sep 10, 2016
I've searched STH and Google (the FreeNAS community, HardOCP, and lots of others) and still cannot solve my issues.

The setup is a Win2012R2 server VM on ESXi 5.5 -> Win 10 Enterprise client. ESXi is running the latest Mellanox ESXi 5.5 driver with firmware 2.9.1000. On the Windows box I've tried both the 4.85 and 5.22 drivers and both firmwares, 2.9.1000 and 2.9.1200, with no difference.

The ESXi host is an i7 980X with 32 GB RAM and an SSD RAID (max transfer ~500 MB/s). The Win 10 box is an i7 5960X with 16 GB RAM and an M.2 SSD (max transfer ~1,500 MB/s), overclocked to 4.5 GHz. Both are using the MNPA19-XTR card with generic SFP+ modules and OM3 fibre.

iperf from the ESXi Windows Server VM -> an Ubuntu live CD machine = 7.5 Gbit/s, which is fantastic; going from the Server VM -> the Win 10 client averages around 2.5 Gbit/s.
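For reference, this is roughly how the tests are being run (a minimal sketch assuming iperf2 on both ends; the 10.0.0.x address and the 60-second duration are just placeholders):

# on the receiving machine (the Ubuntu live CD or the Win 10 client)
iperf -s

# on the Windows Server VM, a single stream for 60 seconds
iperf -c 10.0.0.2 -t 60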

I've done all the tweaks I can find: performance tuning, Rx/Tx buffers, jumbo frames (on the cards and the virtual switch), disabling offloads, everything, and nothing seems to make a difference. I'm all out of ideas; I have no clue what is limiting the bandwidth in Win 10. The only thing I can see is that iperf in Ubuntu (14.04) shows little to no CPU usage, whereas in Windows it pushes one core to about 70%. Running parallel iperfs (either in separate windows or with -P) only splits the bandwidth.
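To be concrete, here is a rough sketch of the sort of tweaks I mean, done with the in-box PowerShell cmdlets; the adapter name "Ethernet 3" and the exact display names are assumptions and vary by Mellanox driver version:

# list the adapter's current advanced properties
Get-NetAdapterAdvancedProperty -Name "Ethernet 3"

# jumbo frames plus larger receive/transmit buffers (display names differ between driver versions)
Set-NetAdapterAdvancedProperty -Name "Ethernet 3" -DisplayName "Jumbo Packet" -DisplayValue "9014"
Set-NetAdapterAdvancedProperty -Name "Ethernet 3" -DisplayName "Receive Buffers" -DisplayValue "4096"
Set-NetAdapterAdvancedProperty -Name "Ethernet 3" -DisplayName "Send Buffers" -DisplayValue "4096"

# parallel iperf streams (iperf2 uses a capital -P for parallel clients)
iperf -c 10.0.0.2 -P 4 -t 60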

Any help is greatly appreciated!
 

BackupProphet

Well-Known Member
Jul 2, 2014
Stavanger, Norway
olavgg.com
Windows has always had slower TCP performance; it is especially noticeable on a 10G network. If you want excellent Windows performance, you need RDMA and applications/services that are RDMA aware.

Windows is very slow here too, though with iSCSI it performs really well! Much better than what I get with iperf.
 

justinb

New Member
Sep 10, 2016
I've checked in PowerShell that RDMA is enabled on the Mellanox card, and it is. I've disabled SMB 1.0 too.
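For what it's worth, this is the kind of check I mean (a minimal sketch with the in-box cmdlets; the adapter name is a placeholder):

# per-adapter RDMA state
Get-NetAdapterRdma -Name "Ethernet 3"

# whether the SMB client actually sees the interface as RDMA capable
Get-SmbClientNetworkInterface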