10GbE network speed slow on Netgear XS716-T

Discussion in 'Networking' started by wizzackr, Sep 21, 2019.

  1. wizzackr

    wizzackr New Member

    Joined:
    Sep 21, 2019
    Messages:
    6
    Likes Received:
    0
    Hi, we have problems with our 10GbE network speeds and need help. This is the setup:

    There are ten workstations here (Lenovo P700s with dual 14-core Xeons, 64 GB RAM), each with a Sun Dual Port 10GbE PCIe 2.0 Adapter (Intel X540-T2) Base-T (which we got cheap on eBay). All the NICs have the default Windows drivers installed.

    All cabling is rated CAT7 (https://www.amazon.de/gp/product/B07W7TFN3C/ref=ppx_yo_dt_b_asin_title_o07_s00?ie=UTF8&psc=1), and none of the cables is longer than 15 m. Finally, the switch in use is the Netgear ProSafe XS716-T with 16 ports; all port LEDs show 10GbE connectivity, with the exception of the 1GbE cable that comes from the router/DHCP and one from an old license server. The switch is basically untweaked, plain vanilla, with the exception of jumbo frames/MTU set to 9000 to match the setting on the NICs and NAS.

    Now this is all fine and dandy, BUT we cannot get any file to transfer across the network (tested from a Synology DS1718 NAS with 6 Exos X10s in RAID 10 to a local SSD) at speeds exceeding 2 Gbit/s. ...kinda underwhelming, and I am at a loss here as to what to check.

    I ran iperf3 between four machines on the network (tested two connections between two different systems) to check whether the network or the disk performance was the culprit, and it seems to be the network: all connections max out at around 2 Gbit/s in iperf, with not even spikes above that. Ever.

    I triple-checked the NIC settings, and in the switch there really is not much to get wrong... or is there? I am really at a loss here, so any help is greatly appreciated!
     
    #1
  2. altmind

    altmind Member

    Joined:
    Sep 23, 2018
    Messages:
    63
    Likes Received:
    13
    I've seen some reports that iperf3, being single-threaded, does not utilize the full network bandwidth.
    To fully load the network you have to run iperf3 with parallel streams: Multithreaded iperf3 · Issue #289 · esnet/iperf

    [The question of SMB performance is a much broader one; there are a million different, unrelated reasons for it to perform poorly]

    Do you have RSS and receive segment coalescing enabled for the network cards on the end nodes?
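
    If it helps, these are the stock PowerShell cmdlets for checking both on Windows (run as administrator; your adapter will show up under whatever name `Get-NetAdapter` reports for the X540):

```shell
# Receive Side Scaling: enabled state and queue count per adapter
Get-NetAdapterRss | Format-Table Name, Enabled, NumberOfReceiveQueues

# Receive Segment Coalescing: enabled state per IP version
Get-NetAdapterRsc | Format-Table Name, IPv4Enabled, IPv6Enabled
```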
     
    #2
  3. Rand__

    Rand__ Well-Known Member

    Joined:
    Mar 6, 2014
    Messages:
    3,579
    Likes Received:
    541
    is it the same on RamDisk to RamDisk transfer?
     
    #3
  4. themindiswatching

    themindiswatching New Member

    Joined:
    Aug 27, 2019
    Messages:
    7
    Likes Received:
    4
    FWIW, my 2017 MacBook Pro gets ~8.5 Gbit/s down and ~6 Gbit/s up with only one stream and a 1500 MTU (but with the window size explicitly set to 512k). Without an explicit window size the upload caps out around 2-2.5 Gbit/s. This is testing to/from a FreeNAS box with some of the tuning mentioned here applied, and it doesn't differ much, if at all, between a direct connection and going through a Netgear GS110EMX switch.

    Anyway, it would probably be able to do better if it weren't CPU limited (13", so kernel_task + iperf3 end up using the vast majority of the two available cores on upload).
     
    #4
  5. wizzackr

    wizzackr New Member

    Joined:
    Sep 21, 2019
    Messages:
    6
    Likes Received:
    0
    Thanks for the input so far - much appreciated. As for the problem, I am still at a loss:

    I ran ntttcp tests and they show pretty much the same result: I hit a cap very close to 2 Gbit/s. I also checked Get-NetAdapterHardwareInfo to make sure I did not put the NIC into the wrong PCIe slot, but the info there seems fine as well:
    Slot04 is where the NIC sits, and it is reported as 5.0 GT/s with a PCIeLinkWidth of 8 - see attached.

    What is odd, though, is that I copied a test file from the NAS to the local SSD and it again capped at 2 Gbit/s. When I copied it BACK to the NAS, however, it reached transfer speeds of 540 MB/s - about what I would expect when writing to a RAID 10 array of three mirrored pairs, no? In that case I can also cancel my new cable order, as the cables seem to work fine...

    I also checked and re-installed the Intel network driver for the X540-T2 and disabled jumbo-frames on the NAS, switch and NICs to isolate the problem.

    Any other pointers?
     

    Attached Files:

    #5
  6. wizzackr

    wizzackr New Member

    Joined:
    Sep 21, 2019
    Messages:
    6
    Likes Received:
    0
    Ok, here are the latest findings. When running iperf3 with multiple streams and a fixed window size (iperf3 -c 192.168.0.139 -w 640k -l 640k -P 4) I get a combined throughput of 8.9 Gbit/s, which I deem OK, right?

    On a single stream it never goes beyond 2 Gbit/s - which is what I reach when copying files in Windows Explorer from the NAS - whereas when copying TO it I reach 550 MB/s, which is clearly where disk performance limits any further throughput.
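
    Doing the math on window size vs. round-trip time (the ~0.25 ms LAN RTT is an assumption on my part, not measured), a single stream stuck at an effective ~64 KB window would cap almost exactly where I am seeing it:

```python
# A single TCP stream cannot exceed window_bytes / rtt_seconds.
# Assumptions: 64 KiB effective window (a common value when receive-window
# auto-tuning is restricted) and a 0.25 ms LAN round-trip time.
def max_tcp_throughput_gbit(window_bytes, rtt_s):
    return window_bytes * 8 / rtt_s / 1e9

print(max_tcp_throughput_gbit(64 * 1024, 0.00025))   # ~2.1 Gbit/s
print(max_tcp_throughput_gbit(640 * 1024, 0.00025))  # ~21 Gbit/s, i.e. not the limit
```

    So with the explicit 640k window the streams are not window-limited, but a default-window stream would land right on the 2 Gbit/s I keep hitting.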

    If I am not mistaken that rules out the switch, the cables and the NIC/drivers, right?
    So that leaves me with the Synology NAS and Windows - or am I missing something?

    Any ideas?
     
    #6
    Last edited: Sep 23, 2019
  7. altmind

    altmind Member

    Joined:
    Sep 23, 2018
    Messages:
    63
    Likes Received:
    13
    I would not rule out driver misconfiguration. There are a lot of network driver options to configure in both Linux and Windows.
    You may want to tune TCP window scaling, check that RSS, hardware offloads and coalescing are enabled on both sides, and check that you are not using SMB 1.

    SMB is capable of 10 Gbit/s transfers, but some tuning is required.
     
    #7
  8. Terry Kennedy

    Terry Kennedy Well-Known Member

    Joined:
    Jun 25, 2015
    Messages:
    1,020
    Likes Received:
    474
    I just want to address the "you need multiple threads to get a good iperf number" meme. This is on FreeBSD 12, x540-T1 cards, dual X5680 CPUs via a Dell Powerconnect 8024 switch, no iperf options at all used:

    Client:
    Code:
    (0:193) host1:~terry# iperf -c host2
    ------------------------------------------------------------
    Client connecting to host2, TCP port 5001
    TCP window size:  275 KByte (default)
    ------------------------------------------------------------
    [  3] local 10.20.30.61 port 63611 connected with 10.20.30.40 port 5001
    [ ID] Interval       Transfer     Bandwidth
    [  3]  0.0-10.0 sec  11.5 GBytes  9.90 Gbits/sec
    Server:
    Code:
    (0:2) host2:/sysprog/terry# iperf -s
    ------------------------------------------------------------
    Server listening on TCP port 5001
    TCP window size: 64.0 KByte (default)
    ------------------------------------------------------------
    [  4] local 10.20.30.40 port 5001 connected with 10.20.30.61 port 63611
    [ ID] Interval       Transfer     Bandwidth
    [  4]  0.0-10.0 sec  11.5 GBytes  9.88 Gbits/sec
     
    #8
  9. Rand__

    Rand__ Well-Known Member

    Joined:
    Mar 6, 2014
    Messages:
    3,579
    Likes Received:
    541
    But that's not iperf3, is it? Looks like iperf2. Of course, I don't use 3 much, so I might be mistaken :)
     
    #9
  10. wizzackr

    wizzackr New Member

    Joined:
    Sep 21, 2019
    Messages:
    6
    Likes Received:
    0
    Strange. Let me try iperf today. If it does indeed only work with multiple threads - what could be causing the abysmal performance on one?
     
    #10
  11. wizzackr

    wizzackr New Member

    Joined:
    Sep 21, 2019
    Messages:
    6
    Likes Received:
    0
    Ok, now this is odd: running iperf2 with all default parameters gives me exactly the same cap: I always reach between 1.8 and 1.95 Gbit/s max - exactly where file transfers from the NAS top out. If I want to reach anything higher I have to run multiple threads in parallel.

    Anyone have an idea what this could mean and what would need to be tweaked?
     
    #11
    Last edited: Sep 24, 2019
  12. wizzackr

    wizzackr New Member

    Joined:
    Sep 21, 2019
    Messages:
    6
    Likes Received:
    0
    One question: if I run ATTO disk benchmarks against the mapped drive on the NAS, do the reads go directly to RAM on the machine I run the test on? Or do they get cached locally somewhere, such that local SATA write speeds would be a limiting factor?

    I am asking because when I run a test as above I see perfectly good write speeds to the NAS (490 to 550 MB/s), but reads are all capped at 200 MB/s.

    I have also narrowed it down to Windows or the NAS, as we swapped the Intel NIC for an Asus one and have the same problem. Bypassing the switch and connecting one workstation to the NAS directly also yielded the same poor read performance.
     
    #12
  13. acquacow

    acquacow Active Member

    Joined:
    Feb 15, 2017
    Messages:
    464
    Likes Received:
    216
    Do you have SMB Multichannel enabled on the fileserver?

    I have an XS708-T at home and use it with Windows and FreeNAS, and I max out 10GigE just fine over Cat 5.

    I had an issue with windows speeds being limited around 1-2Gbit, but these two articles cleaned that up for me:
    A word about AutoTuningLevel – TCP Receive Window Auto-Tuning Level explained

    Fix: Slow Internet After Windows 10 Creators Update

    There was a group policy setting that controlled my TCP window size and it was too small to max out 10gige.
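
    For anyone curious, the back-of-the-envelope math shows why a policy-limited window can't max out 10GigE (the ~0.25 ms LAN round trip here is an assumed figure; scale it up for your own network):

```python
# Receive window needed to sustain a target rate over one TCP stream:
# window = rate * RTT. Assumption: 0.25 ms LAN round-trip time.
def window_needed_bytes(rate_gbit, rtt_s):
    return rate_gbit * 1e9 * rtt_s / 8

print(window_needed_bytes(10, 0.00025) / 1024)  # ~305 KiB for 10 Gbit/s
```

    A window clamped anywhere below that and a single SMB connection simply cannot fill the pipe, no matter how fast the disks are.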

    Things are peachy now.

    Copying to FreeNAS:
    [screenshot]

    Reading data back from FreeNAS:
    [screenshot]

    I gave up on DSM/Synology a while back because the DSM software and background services were eating all the CPU I needed for SMB transfers.

    -- Dave
     
    #13