Only seeing 900Mbps or less speed on 10Gb network


artspret

New Member
Oct 13, 2020
Memphis, TN
I've been trying to solve this problem for quite a while. I cannot seem to get above 1GbE speeds with my 10Gb equipment. I'll summarize my equipment below.

D-Link DGS-1510-28X switch, firmware 1.70.012, downstairs. Connects to the upstairs switch from port 28 to port 28.
D-Link DGS-1510-28X switch, firmware 1.70.012, upstairs.
Mellanox ConnectX-2 MNPA19-XTR (firmware 2.9.1000) on port 26 in a Dell T20 with the 5.50.14740 driver. Windows Server 2016 1607, Build 14393.3930.
Mellanox ConnectX-2 MNPA19-XTR (firmware 2.9.1000) on port 25 in a Threadripper 2950X with the 5.50.14740 driver. Windows 10 Pro 20H2, Build 19042.546.
Using SFP+ modules from FS, SFP-10GSR-85 DL, into the switch.

I've done what a lot of forums have suggested:
Jumbo frames set to 9000 on NICs and switch port.
Interrupt Moderation disabled.
Send Buffer 4096.
Performance set to single port.

I've tried MTU values of 1500, 9014, and 9216. Nothing seems to work. Every indication on both the PCs and the switch shows a 10Gb link; there's just no performance.
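
A raw TCP test between the two boxes (no disks, no SMB) would at least show whether the wire itself can do better than 1Gbit. Something like this rough Python sketch should do it (the port, chunk size and 10-second duration are arbitrary choices, and pure Python may top out well before 10Gbit/s, so treat the result as a floor rather than a ceiling):

Code:
# throughput_test.py - raw TCP throughput check, no disks or SMB involved.
# Run "python throughput_test.py recv" on one box, then
# "python throughput_test.py send <receiver-ip>" on the other.
import socket
import sys
import time

PORT = 5201                   # arbitrary; open it in the Windows firewall
CHUNK = 4 * 1024 * 1024       # 4 MiB per send()/recv() call
DURATION = 10                 # seconds to transmit

def recv_side():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", PORT))
    srv.listen(1)
    conn, _addr = srv.accept()
    total, start = 0, time.time()
    while True:
        data = conn.recv(CHUNK)
        if not data:
            break
        total += len(data)
    elapsed = time.time() - start
    print(f"received {total * 8 / elapsed / 1e9:.2f} Gbit/s")

def send_side(host):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect((host, PORT))
    payload = b"\x00" * CHUNK
    sent, start = 0, time.time()
    while time.time() - start < DURATION:
        sock.sendall(payload)
        sent += len(payload)
    sock.close()
    print(f"sent {sent * 8 / (time.time() - start) / 1e9:.2f} Gbit/s")

if __name__ == "__main__":
    if sys.argv[1] == "recv":
        recv_side()
    else:
        send_side(sys.argv[2])

If it lands at roughly 0.94 Gbit/s, the path really is stuck at 1GbE somewhere; if it shows several Gbit/s, the link itself is fine and the bottleneck is in the file-copy path.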
 

artspret

New Member
Oct 13, 2020
Memphis, TN
Something is not right, and I'm tired of spending hours and hours trying to get it anywhere near what it should be. I don't know if there could be a problem with the OM4 fiber. Did I crack it or something?
iperf3 was developed with single-stream testing scenarios in mind and is not optimized for, or officially supported on, Windows: iperf3 FAQ — iperf3 3.9 documentation

In a Windows-only environment/test case I would recommend NTttcp: TechNet NTttcp Utility: Profile and Measure Windows Networking Performance
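
If you do end up running iperf3 on Windows anyway, parallel streams plus the JSON output make the result easier to read. A rough wrapper sketch (assumes iperf3.exe is on the PATH, an "iperf3 -s" server is already running on the other box, the IP is a placeholder, and the field names are as I recall them from iperf3's JSON output):

Code:
# run_iperf.py - wrap iperf3 and report the summed receive rate.
import json
import subprocess

TARGET = "192.168.1.10"   # placeholder: IP of the box running "iperf3 -s"
STREAMS = 4               # parallel TCP streams (-P)
SECONDS = 10

result = subprocess.run(
    ["iperf3", "-c", TARGET, "-P", str(STREAMS), "-t", str(SECONDS), "--json"],
    capture_output=True, text=True, check=True,
)
report = json.loads(result.stdout)

# For a TCP run the totals are summarised under end.sum_received / end.sum_sent.
gbps = report["end"]["sum_received"]["bits_per_second"] / 1e9
print(f"{STREAMS} streams: {gbps:.2f} Gbit/s received")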

Either way, something is amiss: SFP+ modules, OM3/4 fiber, switch settings...
 

RTM

Well-Known Member
Jan 26, 2014
A suggestion: make your setup (temporarily) simpler by removing variables (such as the switches); it should make it easier to troubleshoot.
You could start by connecting the computers directly to each other.
 

i386

Well-Known Member
Mar 18, 2016
Germany
Something is not right, and I'm tired of spending hours and hours trying to get it anywhere near what it should be. I don't know if there could be a problem with the OM4 fiber. Did I crack it or something?
Either way, something is amiss: SFP+ modules, OM3/4 fiber, switch settings...
Why is something not right?
What are you trying to do? What do you expect? What do you get?
Because at the moment my best guess is a layer 8 problem :D
 

artspret

New Member
Oct 13, 2020
Memphis, TN
Slow speed
Why is something not right?
What are you trying to do? What do you expect? What do you get?
Because at the moment my best guess is a layer 8 problem :D
Any way to fix this? Any advice? When everything says you're connected at 10Gbps but only see 1Gbps or less, something is wrong.
 

artspret

New Member
Oct 13, 2020
Memphis, TN
A suggestion: make your setup (temporarily) simpler by removing variables (such as the switches); it should make it easier to troubleshoot.
You could start by connecting the computers directly to each other.
I'll try that. At one point I did have it set up peer-to-peer between Windows machines, well before I got this switch.
 

i386

Well-Known Member
Mar 18, 2016
Germany
Slow speed
Any way to fix this? Any advice? When everything says you're connected at 10Gbps but only see 1Gbps or less, something is wrong.
I have a 100GBit/s link at home. Copying from server one's HDD array to server two's HDD array runs at around 850MByte/s. My limiting factor is not the network but the storage. NTttcp has no problem maxing the link out.
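
Quick math on that, for reference:

Code:
disk_mbytes_per_s = 850                       # HDD array copy speed
gbits_per_s = disk_mbytes_per_s * 8 / 1000    # bytes -> bits, mega -> giga
print(f"{disk_mbytes_per_s} MByte/s ~= {gbits_per_s:.1f} Gbit/s")   # ~6.8 Gbit/s
# That is below 10GbE line rate (roughly 9.4 Gbit/s of usable TCP payload) and
# far below 100Gbit/s, so the disks cap the copy long before the network does.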
 

Derwood

Member
May 22, 2019
I was wondering too, and it's in direct relation to what i386 mentioned: what is the bus limitation on both the sending and the receiving hardware?

I'm venturing towards dual NVMe on an x8 PCIe bus as cache for the FreeNAS server, because the network infrastructure supports up to 40GbE (4x10GbE SFP+) and my first worry on install was whether my storage array's top-end speed could keep up with the networking fabric. So I opted, or am opting, for a 2x1TB x8-bussed PCIe NVMe solution.
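
Rough numbers for the bus side of that (approximate per-lane rates after encoding overhead, and the ConnectX-2 is, as far as I know, a PCIe 2.0 x8 card):

Code:
# Approximate usable PCIe bandwidth per slot (per-lane rates after encoding
# overhead; real-world throughput will be somewhat lower).
PER_LANE_GBYTE_S = {"gen1": 0.25, "gen2": 0.50, "gen3": 0.985}

def slot_bandwidth(gen, lanes):
    """Rough one-direction bandwidth in GByte/s."""
    return PER_LANE_GBYTE_S[gen] * lanes

for gen, lanes in [("gen2", 8), ("gen2", 4), ("gen1", 4)]:
    gbyte = slot_bandwidth(gen, lanes)
    print(f"PCIe {gen} x{lanes}: ~{gbyte:.2f} GByte/s ~= {gbyte * 8:.0f} Gbit/s")
# A single 10GbE port needs ~1.25 GByte/s, so Gen2 x4 is still fine, but a slot
# that silently negotiates Gen1 x4 (~1 GByte/s) would already hold it back.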

Just food for thought, as I doubt I could fix anything else for you, buddy. I also assume you are moving various files of varying sizes; have you tried moving large single files too, e.g. a 4GB .iso or something?

*It's like driving on the motorway but using a 50cc moped, you get me?
 

Rand__

Well-Known Member
Mar 6, 2014
...because the network infrastructure supports up to 40GbE (4x10GbE SFP+) and my first worry on install was whether my storage array's top-end speed could keep up with the networking fabric. So I opted, or am opting, for a 2x1TB x8-bussed PCIe NVMe solution.
4x10GbE is not the same as 1x40GbE unless you run multiple processes (i.e. the total bandwidth is identical).
(Sidenote - well, technically it is, since 40G is only 4x10G lanes aggregated at the hardware level, but in the end you can get higher single-process bandwidth on 40G than on 4x10G joined at the switch.)

So it's all use-case dependent.

@OP - start with a local transfer on iperf (localhost to localhost) on both boxes to identify issues like insufficient PCIe bandwidth, misconfigurations, TCP window scaling issues, etc.
Then you could run two NICs in a single box, same test, to ensure both NICs can perform without switches (ideally on both boxes to see if they still keep up).
If you can (cable-wise), add a single switch next, and then the second one.

Baby steps until you locate the issue...
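
Alongside that, a quick sanity check of what the OS itself reports can't hurt; a small Python snippet (needs "pip install psutil") will dump the negotiated link speed and MTU per interface on each box, so a silently failed jumbo-frame or autoneg setting shows up immediately:

Code:
# link_check.py - print what the OS reports for each NIC's link speed and MTU.
# Requires: pip install psutil
import psutil

for name, stats in psutil.net_if_stats().items():
    if not stats.isup:
        continue
    # stats.speed is the negotiated link speed in Mbit/s (0 if unknown),
    # stats.mtu is the MTU actually in effect on the interface.
    print(f"{name:<30} {stats.speed:>6} Mbit/s   MTU {stats.mtu}")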
 

Derwood

Member
May 22, 2019
4x10GbE is not the same as 1x40GbE unless you run multiple processes (i.e. the total bandwidth is identical).
(Sidenote - well, technically it is, since 40G is only 4x10G lanes aggregated at the hardware level, but in the end you can get higher single-process bandwidth on 40G than on 4x10G joined at the switch.)

So it's all use-case dependent.

@OP - start with a local transfer on iperf (localhost to localhost) on both boxes to identify issues like insufficient PCIe bandwidth, misconfigurations, TCP window scaling issues, etc.
Then you could run two NICs in a single box, same test, to ensure both NICs can perform without switches (ideally on both boxes to see if they still keep up).
If you can (cable-wise), add a single switch next, and then the second one.

Baby steps until you locate the issue...
Yeah, I'm happy with my apples at present :)