40G networking (file copy still slow)


Roelf Zomerman
Hi everyone,

I have two Dell R620s, each with 256GB of memory. Both servers have a Mellanox ConnectX-3, and I connected both ports between the two servers, so each server has two 40Gb links. In theory I should get 80Gb of throughput when using SMB Multichannel.

To test this throughput, I created a 100GB RAM disk on each server. I also validated that SMB Multichannel is active:
[screenshot: SMB Multichannel validation]
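
For reference, the same check can be run from PowerShell while a copy is in progress; these are the built-in SMB cmdlets:

Code:
# On the client, while a transfer is running -- one row per SMB connection, ideally several per server:
Get-SmbMultichannelConnection

# Capabilities (RSS / RDMA) of the interfaces SMB sees:
Get-SmbClientNetworkInterface
Get-SmbServerNetworkInterface   # run this one on the file-server side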

When I copy some large files, however, I do not get that speed.
A 5GB file copies at around 400-500MB/s:
[screenshot: 5GB file copying at around 400-500MB/s]

An empty 20GB .txt file, however, does go up to 1,200MB/s:
[screenshot: file copy running at ~1,200MB/s]

So I probably have to do some fine-tuning. Does anyone know which options I need to configure to speed this up?

I found the following link, but I'm not sure what the equivalent options are within Windows Device Manager.
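
For reference, the Device Manager "Advanced" tab properties can also be listed and set from PowerShell. The adapter name and the DisplayName strings below are only examples, since the exact names depend on the Mellanox driver version:

Code:
# See which tunables the driver actually exposes (names vary per driver version):
Get-NetAdapterAdvancedProperty -Name "Ethernet 3"

# Example changes -- adapter name, property names and values are assumptions to adapt:
Set-NetAdapterAdvancedProperty -Name "Ethernet 3" -DisplayName "Jumbo Packet" -DisplayValue "9014"
Set-NetAdapterAdvancedProperty -Name "Ethernet 3" -DisplayName "Receive Buffers" -DisplayValue "4096"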
 

i386
Did you try a tool like robocopy?
What RAM disk software did you use? (I tried an open-source solution a few years ago and the performance was worse than an actual ioDrive2.)
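
For comparison, a multithreaded robocopy run would look roughly like this (paths are placeholders; /MT sets the number of copy threads and /J uses unbuffered I/O, which helps with very large files):

Code:
# 16 copy threads, unbuffered I/O, no per-file progress output
robocopy R:\testdata \\server2\ramdisk /MT:16 /J /NP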
 

Roelf Zomerman
i386 said:
Did you try a tool like robocopy?
What RAM disk software did you use? (I tried an open-source solution a few years ago and the performance was worse than an actual ioDrive2.)
I used this link to create a virtual iSCSI disk: How to Create a RAM Disk on Windows Server? | Windows OS Hub
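
For anyone following along, that approach boils down to the iSCSI Target Server role plus a RAM-backed virtual disk. Roughly, and only as a sketch (the target name, initiator ID and file name below are placeholders, not the exact steps from the article):

Code:
# Install the iSCSI Target Server role
Install-WindowsFeature FS-iSCSITarget-Server

# RAM-backed virtual disk exposed through an iSCSI target
New-IscsiServerTarget -TargetName "ramtarget" -InitiatorIds "IPAddress:192.168.100.2"
New-IscsiVirtualDisk -Path "ramdisk:ramdisk.vhdx" -Size 100GB   # on some builds the parameter is -SizeBytes
Add-IscsiVirtualDiskTargetMapping -TargetName "ramtarget" -Path "ramdisk:ramdisk.vhdx"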

I haven't used iperf yet; it seems that for that I'd need to set up three or more different streams anyway, right?

It's running Windows Server 2016 now. Do you think 2019 would work better?
 

j_h_o
Roelf Zomerman said:
It's running Windows Server 2016 now. Do you think 2019 would work better?
No, I had to do all kinds of tweaking with 2019. But only with some hardware. I'd stick with 2016.

iperf: It'd be good to try another protocol instead of "just" SMB - to confirm NIC and switch capabilities. iperf would also eliminate any disk bottlenecks.
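
Something like this would do it, with the server on one box and a multi-stream client on the other (iperf3 syntax; the address and stream count are just examples):

Code:
# box 1
iperf3 -s

# box 2: 4 parallel TCP streams for 30 seconds against box 1's 40G address
iperf3 -c 192.168.100.1 -P 4 -t 30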
 

Dreece
Windows 2019 networking performance was a royal nightmare for many when it first hit the scene, and even now it is still a pain in the backside for plenty of people; the overheads with SMB, for example, are just the tip of the iceberg. It all comes down to hardware and configuration.

Unfortunately, Windows has always suffered from being great with some hardware and not so great with other hardware, default configuration permitting, and that information never makes it to prime time. Forget the licensing costs; it costs just as much in time wasted on tinkering and learning.

The list of tweaks, at both the registry and PowerShell level, sometimes just pushes you into going all Linux.

There are, yes, countless tweaks. What you could attempt is changing the way the TCP layer behaves to match Windows 2016; these are the commands I use:

Code:
# Windows Server 2016-style TCP template settings (DCTCP congestion provider):
Set-NetTCPSetting -SettingName "DatacenterCustom" -CongestionProvider DCTCP
Set-NetTCPSetting -SettingName "DatacenterCustom" -CwndRestart True
Set-NetTCPSetting -SettingName "DatacenterCustom" -ForceWS Disabled

Set-NetTCPSetting -SettingName "Datacenter" -CongestionProvider DCTCP
Set-NetTCPSetting -SettingName "Datacenter" -CwndRestart True
Set-NetTCPSetting -SettingName "Datacenter" -ForceWS Disabled

Set-NetTCPSetting -SettingName "Compat" -ForceWS Disabled

If it makes no difference, you can roll back to the default Windows 2019 TCP configuration:

Code:
# Default Windows Server 2019 TCP template values:
Set-NetTCPSetting -SettingName "DatacenterCustom" -CongestionProvider CUBIC
Set-NetTCPSetting -SettingName "DatacenterCustom" -CwndRestart False
Set-NetTCPSetting -SettingName "DatacenterCustom" -ForceWS Enabled

Set-NetTCPSetting -SettingName "Datacenter" -CongestionProvider CUBIC
Set-NetTCPSetting -SettingName "Datacenter" -CwndRestart False
Set-NetTCPSetting -SettingName "Datacenter" -ForceWS Enabled

Set-NetTCPSetting -SettingName "Compat" -ForceWS Enabled
 

RageBone
The common CX354A-FCBT(?) cards are PCIe 3.0 x8, which gives you a theoretical limit of about 64Gbit/s (8 GT/s x 8 lanes, roughly 63Gbit/s usable after 128b/130b encoding). Bonded ports will therefore not max out 80Gbit/s. Just to take that expectation off the table, unless you have a different card.

In addition to SMB Multichannel, please make sure SMB Direct is enabled and working.
SMB Direct is the RDMA variant of SMB; it significantly reduces CPU load and increases throughput.
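
A quick way to check that from PowerShell (ConnectX-3 does RDMA over Ethernet via RoCE, which usually also wants proper flow control end to end; the adapter name below is a placeholder):

Code:
# Is RDMA enabled on the NICs?
Get-NetAdapterRdma
# Enable it if not (adapter name is an example)
Enable-NetAdapterRdma -Name "Ethernet 3"

# Do the SMB interfaces report RDMA capability?
Get-SmbClientNetworkInterface
Get-SmbServerNetworkInterface

# During a copy, the RDMA-capable columns here should show True
Get-SmbMultichannelConnection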

My main experience here is with iSCSI vs. iSER on Linux, where iSER was able to saturate the link and plain iSCSI was not, due to my machines' weak single-thread performance.
Winblows does not support iSER, just saying.