Hi everyone
I just installed a Quanta LB6M into my home setup and, testing with iperf, I'm seeing wildly different speeds on different machines, and I'm not sure where the bottleneck is. The best speed I've gotten is around 5.5-6Gbps (which I consistently get going to one particular machine from anywhere else). The results are consistent per iperf server, i.e. no matter which machine I run the iperf client from, the speed seems to be limited by the machine running the iperf server.
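For reference, the tests are basically stock iperf runs, something like this (10.0.1.10 is just a placeholder for whichever box is acting as the server; these flags are the same in iperf2 and iperf3):

iperf -s                        # on the machine being tested (server side)
iperf -c 10.0.1.10 -t 30        # from another machine, single stream, 30 seconds
iperf -c 10.0.1.10 -t 30 -P 4   # same test with 4 parallel streams for comparison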
I've swapped cables and ports to make sure it wasn't a rogue cable or port causing the slowdown, but that made no difference. All of my hardware is very similar and the NICs are basically the same card. Here's a list of the machines and NICs; after the list is how I'd double-check the negotiated PCIe link on each card.
Dell R710 #1 - ~2.2Gbps (bare metal), ~20Gbps localhost
2x Xeon X5650 (6 cores each @ 2.67GHz)
96GB RAM
Intel X520-DA2 in x8 slot
FreeNAS 9.10
Dell R710 #2 - ~4Gbps (VM), ~20Gbps localhost
2x Xeon L5520 (4 cores each @ 2.27GHz)
96GB RAM
Intel X520-DA2 (OEM) in x8 slot
ESXi 6
Dell R610 - ~5.5Gbps (VM), ~40Gbps localhost
2x Xeon X5650 (6 cores each @ 2.67GHz)
72GB RAM
Intel X520-DA2 in x8 slot
ESXi 6
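Part of what I want to rule out is one of these cards negotiating a narrower PCIe link than the slot offers. Roughly what I plan to run (device names like ix0 and the exact output wording are from memory, so they may differ slightly):

pciconf -lvc                                     # FreeNAS/FreeBSD: find the ix0/ix1 entries and look for the "link xN ... speed" line in the PCI-Express capability
lspci | grep -i 82599                            # Linux (e.g. a live image): find the bus address of the X520's 82599 controller
lspci -vv -s 03:00.0 | grep -E 'LnkCap|LnkSta'   # 03:00.0 is a placeholder; LnkCap is what the card supports, LnkSta is what was actually negotiated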
Can anyone recommend a good small live boot image to test with? I'd like to make some more apples-to-apples comparisons, just to take the OS and virtualized-vs-bare-metal differences out of the equation.
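If I go that route, my rough plan for each pair of boxes is something like the following, booted from the same live image on both ends (assuming a Debian/Ubuntu-based image; the enp3s0 interface name and 10.0.1.20 address are placeholders):

sudo apt-get update && sudo apt-get install -y iperf3
ip addr                              # confirm the 10GbE interface is up and note its address
sudo ip link set enp3s0 mtu 9000     # optional: repeat the test with jumbo frames (the switch ports need jumbo enabled too)
iperf3 -s                            # on one box
iperf3 -c 10.0.1.20 -t 30 -P 4       # on the other box, 4 parallel streams
iperf3 -c 10.0.1.20 -t 30 -P 4 -R    # -R reverses direction so both send and receive paths get tested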
The funniest thing is that the performance was exactly backwards from what I expected ... I assumed that the bare metal FreeNAS box would outperform the ESXi servers.
The text following this is no longer relevant, as I have since changed slots and the OEM card was in a different machine (one that's performing well), but I left it for completeness ...
I don't think the x4 vs x8 slot accounts for the difference here, but maybe I'm wrong -- in theory the x4 slot should still have sufficient bandwidth (I'm guessing the 20Gbps vs 40Gbps localhost results illustrate that difference).
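Back-of-the-envelope numbers, assuming the X520 is a PCIe 2.0 card:

PCIe 2.0: 5 GT/s per lane with 8b/10b encoding ≈ 4 Gbps usable per lane
x4 slot: 4 lanes x ~4 Gbps ≈ 16 Gbps
x8 slot: 8 lanes x ~4 Gbps ≈ 32 Gbps

So even in an x4 slot the card should, in theory, have headroom for a single 10GbE port before protocol overhead.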
Where am I bottlenecking, I wonder ... could the X520 clone card be the biggest slowdown? Am I expecting too much from this gear? I thought I could at least get closer to 10Gbps ...