Windows 10 40GbE Network

Discussion in 'Networking' started by zkrr01, Jul 5, 2018.

  1. zkrr01

    zkrr01 Member

    Joined:
    Jun 28, 2018
    Messages:
    106
    Likes Received:
    5
    I finally got a 40GbE network connection between two Windows 10 Dell 8920 workstations, each with Samsung 970 PRO SSDs. In initial tests I was able to transfer 500 GB of data at 2120 MB/s using the standard Microsoft drag-and-drop interface. Since the SSD specs show 3500 MB/s read and 2700 MB/s write, the initial tests are a bit low: the overall transfer should be limited by the 2700 MB/s write speed.

    I am using a Mellanox MCX4131A-BCAT ConnectX-4 Lx EN Network Interface Card 40GbE Single-Port QSFP28 PCIe3.0 x8 ROHS R6 NIC in each Dell 8920 Workstation. I have jumbo packets enabled.

    Any recommendations on how to get the transfer speed from 2120 MB/s closer to the 2700 MB/s?
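    As a sanity check, a rough sketch of the ceilings involved (using only the rated figures quoted above; the SSD write spec, not the link, is the binding limit here):

    ```python
    # Back-of-envelope ceilings for the transfer described above.
    # Assumed figures: 40GbE raw line rate, Samsung 970 PRO rated 2700 MB/s write.
    line_rate_mb_s = 40e9 / 8 / 1e6   # 40 Gbit/s -> 5000 MB/s before protocol overhead
    ssd_write_mb_s = 2700             # rated sequential write (slower side of the copy)
    bottleneck = min(line_rate_mb_s, ssd_write_mb_s)
    observed = 2120                   # MB/s reported by the drag-and-drop copy
    print(f"bottleneck: {bottleneck:.0f} MB/s, observed: {observed} MB/s "
          f"({observed / bottleneck:.0%} of the ceiling)")
    ```

    So the copy is running at roughly 79% of the SSD-imposed ceiling, which is why the copy tool itself is worth looking at.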
     
    #1
  2. cactus

    cactus Moderator

    Joined:
    Jan 25, 2011
    Messages:
    825
    Likes Received:
    76
    Map a drive and run something like Crystal Disk Mark(CDM) on the mapped drive. Also run CDM locally.
    I don't trust what Explorer is reporting, and you have nothing to compare it to, so you can't say you should be getting a full 2700 MB/s.
     
    #2
  3. MiniKnight

    MiniKnight Well-Known Member

    Joined:
    Mar 30, 2012
    Messages:
    2,922
    Likes Received:
    851
    That's good if you're getting that without doing anything more.
     
    #3
  4. zkrr01

    zkrr01 Member

    Joined:
    Jun 28, 2018
    Messages:
    106
    Likes Received:
    5
    The numbers I provided were from CrystalDiskMark 6.0.0 x64. The exact numbers were 3496.9 MB/s read and 2725.8 MB/s write.
     
    #4
  5. cesmith9999

    cesmith9999 Well-Known Member

    Joined:
    Mar 26, 2013
    Messages:
    1,086
    Likes Received:
    332
  6. i386

    i386 Well-Known Member

    Joined:
    Mar 18, 2016
    Messages:
    1,630
    Likes Received:
    388
    Explorer is limited by a single thread, try a tool that can copy multiple files simultaneously like robocopy. That should get you the max performance.
     
    #6
    MiniKnight likes this.
  7. cesmith9999

    cesmith9999 Well-Known Member

    Joined:
    Mar 26, 2013
    Messages:
    1,086
    Likes Received:
    332
    With robocopy there are a few tricks to make it copy faster:
    1) /MT:xx - multithreaded copy - I usually max it out at 2x the logical processor count...
    2) /NDL /NFL to suppress output to the console
    3) /LOG in place of /NDL /NFL to send output to a file instead.
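    Putting those flags together, a combined invocation might look like this (the share paths and thread count here are placeholders, not taken from the thread):

    ```
    robocopy \\source\share \\dest\share /MIR /MT:16 /R:0 /W:0 /LOG:C:\robocopy.log /NP
    ```

    /MT defaults to 8 threads when given without a value; per the rule of thumb above, you would set it to roughly 2x the logical processor count.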

    Surprisingly, in tests I helped with a long time ago, #3 was the best option for speeding up robocopy, as displaying the output was the biggest thing slowing it down.

    Chris
     
    #7
    ecosse likes this.
  8. zkrr01

    zkrr01 Member

    Joined:
    Jun 28, 2018
    Messages:
    106
    Likes Received:
    5
    I use the Windows 10 Hyper-V Manager to export each virtual machine to a directory on my primary machine and use the Microsoft drag/drop to copy this directory over to a backup machine. What is the recommended way to do this using robocopy? Examples would be useful.
     
    #8
  9. zkrr01

    zkrr01 Member

    Joined:
    Jun 28, 2018
    Messages:
    106
    Likes Received:
    5
    I tried what you suggested and boy were you correct! I used the following:

    ROBOCOPY /MT /MIR /R:0 /W:0 /LOG:G:\files.log /NP /NDL \\Robin\g\HyperV-Exported-Systems \\Eagle\g\HyperV-Exported-Systems

    and it went a lot faster and is a much better way of doing it. Lots of experts on this forum! Thanks!
     
    #9
  10. zkrr01

    zkrr01 Member

    Joined:
    Jun 28, 2018
    Messages:
    106
    Likes Received:
    5
    Upon additional tests I determined robocopy performed best with /MT:2. The average speed I observed was 2500 MB/s; the maximum possible was 2700 MB/s, which is the rated Samsung 970 PRO write speed.
     
    #10
  11. oddball

    oddball Active Member

    Joined:
    May 18, 2018
    Messages:
    140
    Likes Received:
    40
    Have you been able to hit line rate on 40GbE in Windows?

    We have a 40GbE backbone and Windows has some issues. I've been able to hit mid-30s Gbit/s with iperf3 between Windows and Linux, but from Windows to Windows we're in the mid-20s.

    Any help would be appreciated. Windows just seems to have a slower network stack.
     
    #11
  12. i386

    i386 Well-Known Member

    Joined:
    Mar 18, 2016
    Messages:
    1,630
    Likes Received:
    388
    Try NTttcp, it supports multithreading.
     
    #12
  13. oddball

    oddball Active Member

    Joined:
    May 18, 2018
    Messages:
    140
    Likes Received:
    40
    Wow, big improvement: 20 Gbit/s out of the box with default params. I need to mess with this, but a big improvement.
     
    #13
  14. zkrr01

    zkrr01 Member

    Joined:
    Jun 28, 2018
    Messages:
    106
    Likes Received:
    5
    I also use NTttcp, and with 4 threads the throughput I was getting was 3456.496 MB/s.
    That was Windows 10 to Windows 10 with Dell 8920 workstations.
     
    #14
  15. oddball

    oddball Active Member

    Joined:
    May 18, 2018
    Messages:
    140
    Likes Received:
    40
    Do you have a switch in between the computers?

    I'm doing Windows Server 2016 to Windows Server 2016. Both servers have a network team of 2x 40Gbps through an MLAG pair of Arista 7050QX-32S switches.

    The team on Windows is switch-independent, which I think is an issue. We're working to convert to LACP.
     
    #15
  16. zkrr01

    zkrr01 Member

    Joined:
    Jun 28, 2018
    Messages:
    106
    Likes Received:
    5
    No, we use a direct connect using a QSFP passive copper cable.
     
    #16
  17. zkrr01

    zkrr01 Member

    Joined:
    Jun 28, 2018
    Messages:
    106
    Likes Received:
    5
    I changed the Jumbo Packet size to 9614 and I am now able to achieve 2700 MB/s writing to the Samsung 970 PRO SSD.
    I was using the following robocopy setup:
    ROBOCOPY /MT:2 /MIR /R:0 /W:0 /LOG:G:\files.log /NP /NDL
     
    #17
  18. zkrr01

    zkrr01 Member

    Joined:
    Jun 28, 2018
    Messages:
    106
    Likes Received:
    5
    I still have not been able to achieve the 5000 MB/s that a 40GbE NIC is theoretically capable of. I am using ntttcp.exe for my tests. What do the experts on this forum recommend for ntttcp parameters to achieve maximum speed?
     
    #18
  19. i386

    i386 Well-Known Member

    Joined:
    Mar 18, 2016
    Messages:
    1,630
    Likes Received:
    388
    Multiple threads should be enough to get to the limit.
    >5000MB/s
    Wrong expectation here; don't forget all the overhead of the different network layers. 3.8/3.9 GByte/s is more realistic.
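    The per-frame overhead can be estimated. A rough sketch, assuming a 9000-byte IP MTU, standard Ethernet framing, and TCP/IPv4 headers without options (the NIC's jumbo setting counts header bytes differently, so this is an approximation):

    ```python
    # Rough goodput ceiling for 40GbE with jumbo frames.
    # Assumptions: 9000-byte IP MTU; per frame on the wire: preamble+SFD 8,
    # Ethernet header 14, FCS 4, inter-frame gap 12; TCP/IPv4 headers, no options.
    LINE_RATE_BITS = 40e9
    MTU = 9000
    wire_bytes = 8 + 14 + MTU + 4 + 12   # total bytes on the wire per frame
    tcp_payload = MTU - 20 - 20          # minus IPv4 and TCP headers
    efficiency = tcp_payload / wire_bytes
    goodput_mib_s = LINE_RATE_BITS / 8 * efficiency / 2**20
    print(f"efficiency {efficiency:.3f}, goodput ceiling ~{goodput_mib_s:.0f} MiB/s")
    ```

    With jumbo frames the framing overhead is small, so the practical TCP ceiling lands in the high 4000s MiB/s rather than 5000 MB/s; smaller frames, interrupt handling, and CPU limits push real numbers lower still.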
     
    #19
  20. zkrr01

    zkrr01 Member

    Joined:
    Jun 28, 2018
    Messages:
    106
    Likes Received:
    5
    The latest ntttcp run shows 4716.485 MB/s of throughput with 8 threads, which is about what my two Dell 8920 workstations can do. I am also able to back up my Hyper-V systems to my Samsung 970 SSDs at their maximum read/write speeds.

    It would be informative to know what other Windows 10 workstations are getting with similar ntttcp parameters.

    ntttcp.exe -r -m 8,*,10.0.1.2 -rb 768K -a 16 -t 5
    ntttcp.exe -s -m 8,*,10.0.1.2 -l 128k -a 2 -t 5
    Jumbo Packet = 9614 bytes
    ------------------------------------------------------------
    ntttcp.exe -r -m 8,*,10.0.1.2 -rb 768K -a 16 -t 5
    Copyright Version 5.33
    Network activity progressing...


    Thread  Time(s)  Throughput(KB/s)  Avg B/Compl
    ======  =======  ================  ===========
    0       5.017    580145.904        65479.847
    1       5.017    537934.224        65503.380
    2       5.017    531517.640        65506.129
    3       5.017    639101.184        65322.872
    4       5.017    745127.367        65296.773
    5       5.017    512586.805        65518.064
    6       5.017    646493.123        65122.232
    7       5.017    637737.033        64858.267


    ##### Totals: #####

    Bytes(MEG)    realtime(s)  Avg Frame Size  Throughput(MB/s)
    ============  ===========  ==============  ================
    23667.321617  5.018        9370.271        4716.485

    Throughput(Buffers/s)  Cycles/Byte  Buffers
    =====================  ===========  ==========
    75463.760              2.396        378677.146

    DPCs(count/s)  Pkts(num/DPC)  Intr(count/s)  Pkts(num/intr)
    =============  =============  =============  ==============
    24970.506      21.137         86968.912      6.069

    Packets Sent  Packets Received  Retransmits  Errors  Avg. CPU %
    ============  ================  ===========  ======  ==========
    296325        2648481           0            0       41.141


    Edited on 7/31/2018 for latest ntttcp test.
     
    #20
    Last edited: Aug 2, 2018