Another Super Low 10Gb performance thread


Visseroth

Member
Jan 23, 2016
Thanks for the reply. So far I've tried some tuning, but nothing seems to be helping. I tried -w 512k -P 2 with the following results...

Code:
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec   737 MBytes   618 Mbits/sec
[  3]  0.0-10.0 sec   620 MBytes   520 Mbits/sec
[SUM]  0.0-10.0 sec  1.33 GBytes  1.14 Gbits/sec
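For reference, the shape of that test was roughly the following; the server address is just a placeholder, and the server side is nothing more than a plain iperf -s:
Code:
# server side (on the receiving box)
iperf -s

# client side: 512 KByte window, 2 parallel streams (placeholder address)
iperf -c <server-ip> -w 512k -P 2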
 

Terry Kennedy

Well-Known Member
Jun 25, 2015
New York City
www.glaver.org
Visseroth said:
Thanks for the reply. So far I've tried some tuning, but nothing seems to be helping. I tried -w 512k -P 2 with the following results...
I've never needed to tune either the operating system (FreeBSD) or the benchmark:
Code:
(0:1) pool1:/sysprog/terry# iperf -c pool4
------------------------------------------------------------
Client connecting to pool4, TCP port 5001
TCP window size: 32.0 KByte (default)
------------------------------------------------------------
[  3] local 10.20.30.111 port 18847 connected with 10.20.30.114 port 5001
[ ID] Interval  Transfer  Bandwidth
[  3]  0.0-10.0 sec  11.4 GBytes  9.83 Gbits/sec
The target is an E5520, nothing special. Cards on both ends are Intel X540-T1, connected via a Dell 8024F switch.

Try eliminating the network card from the test to see what your raw performance is like. With iperf -B 127.0.0.1 -s on the server side and the client run from another terminal window, I get:
Code:
(0:1) pool4:/sysprog/terry# iperf -c localhost
------------------------------------------------------------
Client connecting to localhost, TCP port 5001
TCP window size: 47.8 KByte (default)
------------------------------------------------------------
[  3] local 127.0.0.1 port 64830 connected with 127.0.0.1 port 5001
[ ID] Interval  Transfer  Bandwidth
[  3]  0.0-10.0 sec  32.6 GBytes  28.0 Gbits/sec
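Spelled out, the two halves of that loopback test are just these, run from two terminals on the same machine (nothing here is specific to my hardware):
Code:
# terminal 1: iperf server bound to the loopback address
iperf -B 127.0.0.1 -s

# terminal 2: iperf client against the same box
iperf -c localhost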
 

Visseroth

Member
Jan 23, 2016
Nice, that's good to know. It definitely has something to do with one or both of the cards. I'm half tempted to put another fiber module in one of the cards, set a static IP, and loop back on the card to run a test. I'll put that on my list so I can test directly through the card.

I did run the test against localhost, and out of curiosity I also ran a test on my pool. The pool is capable of 29.4Gb/s; the iperf localhost test came out even faster...

Code:
------------------------------------------------------------
Client connecting to localhost, TCP port 5001
TCP window size:  271 KByte (default)
------------------------------------------------------------
[  3] local 127.0.0.1 port 26851 connected with 127.0.0.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  42.6 GBytes  36.6 Gbits/sec
Code:
iperf -c localhost -w 512k -P 2
------------------------------------------------------------
Client connecting to localhost, TCP port 5001
TCP window size:  526 KByte (WARNING: requested  512 KByte)
------------------------------------------------------------
[  3] local 127.0.0.1 port 49454 connected with 127.0.0.1 port 5001
[  4] local 127.0.0.1 port 18369 connected with 127.0.0.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  20.4 GBytes  17.5 Gbits/sec
[  3]  0.0-10.0 sec  20.4 GBytes  17.5 Gbits/sec
[SUM]  0.0-10.0 sec  40.8 GBytes  35.0 Gbits/sec
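For anyone who wants to compare pool numbers, a rough way to get a figure like that is a sequential dd run against a dataset; the mount path below is only an example, and compression needs to be off for the result to mean much:
Code:
# sequential write of ~16 GB to the pool (example mount path)
dd if=/dev/zero of=/mnt/tank/ddtest bs=1M count=16384
# sequential read back
dd if=/mnt/tank/ddtest of=/dev/null bs=1M
# clean up the test file
rm /mnt/tank/ddtest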
 

Visseroth

Member
Jan 23, 2016
My older system gave me different results...

Code:
------------------------------------------------------------
Client connecting to 10.10.10.252, TCP port 5001
TCP window size:  271 KByte (default)
------------------------------------------------------------
[  3] local 10.10.10.252 port 20307 connected with 10.10.10.252 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  8.39 GBytes  7.20 Gbits/sec
Definitely the weakest link
 

Visseroth

Member
Jan 23, 2016
Well, after additional changes I've achieved 3Gb/s from my VMware server to my FreeNAS box. It seems my old server was the weak link, and my VMware server hasn't been tuned for 10Gb yet. I'll keep playing around with it to try to get 8 to 10Gb/s. I doubt I'll use that much very often, but it will sure be handy when I'm not the only one transferring files.
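On the FreeNAS side, the kind of tuning I mean is the usual FreeBSD socket-buffer sysctls; the values below are only an example of what gets suggested for 10Gb, not something I've settled on or verified:
Code:
# example sysctl tunables for 10Gb TCP throughput (values are illustrative)
kern.ipc.maxsockbuf=16777216        # allow larger socket buffers overall
net.inet.tcp.sendbuf_max=16777216   # raise the TCP send buffer ceiling
net.inet.tcp.recvbuf_max=16777216   # raise the TCP receive buffer ceiling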
 

Visseroth

Member
Jan 23, 2016
I finally had a chance to load another OS. I used a live CD booted from a flash drive, and the results are attached.

The first few tests were direct connections from the old server to the new server. The second 60-second test went through the Quanta switch, and for the last test I pulled the Finisar modules and used the Chelsio modules instead.
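For anyone repeating it, a 60-second run is just the same iperf test with a longer duration and periodic readouts, roughly this shape (the address is a placeholder):
Code:
# 60-second run, progress line every 10 seconds (placeholder address)
iperf -c <server-ip> -t 60 -i 10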

I've also been posting on the FreeNAS forums to see what they think. So far no one has any answers; we're all still speculating and testing.
Here is the link for that thread...
10Gb NICs and iperf is showing 1Gb speeds | Page 3 | FreeNAS Community
 

Attachments