Question regarding expected performance of my FreeNAS build.


CobaltFire

New Member
Nov 7, 2015
So I've had a FreeNAS box running for a few years now. It's a Supermicro A1SAI-2550 board with 24GB of ECC RAM. The boot drive is a 120GB Samsung 840 (a spare I had lying around) and the data disks are a mix of three 2TB and two 4TB drives in a single RAIDZ2. It's been dead solid, except that my boot USB stick died about a year ago, so I replaced it with the SSD.

I upgraded my network last week, after I built a new desktop. The FreeNAS box now has a ConnectX-2 connected over a 1m SFP+ twinax cable to a Mikrotik CSS326, and the desktop is running the same setup.

Since I was only on GbE previously I never saw an issue with speeds, and only now can I see how fast it's ACTUALLY running. Right now via CIFS I'm seeing 155MB/s writing to the server and 250MB/s reading back to the workstation, using a 300GB folder of ripped DVDs as my test data. These numbers stay ROCK SOLID and don't vary no matter how many times I run the copy, so nothing weird there.

I'm actually just curious, is this a reasonable set of speeds, or should I be looking for more performance here?
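
For rough context, a couple of back-of-envelope ceilings (assumed per-disk speed, not measurements from this build): 10GbE tops out at 1.25GB/s on the wire, and a 5-wide RAIDZ2 has three data disks, so sequential throughput from the pool is bounded by roughly three spinners' worth of bandwidth.

# Rough ceilings, assuming ~140MB/s per spinning disk (adjust for your drives)
echo "10GbE wire speed: $((10000 / 8)) MB/s"   # ~1250 MB/s, closer to 1100-1200 MB/s after TCP overhead
echo "RAIDZ2 sequential: $((3 * 140)) MB/s"    # ~420 MB/s best case from a single 5-wide vdev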
 

Monoman

Active Member
Oct 16, 2013
I'd test the speed of your network with some protocol-independent test to help rule out physical issues. There are several free tools.

I'd recommend iperf.

Report your results and we can help from there. It could be TCP/IP tuning, CIFS tuning, etc.

:)
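
A minimal iperf 2 run would look something like this (with <server ip> standing in for the FreeNAS box's address; these are standard iperf 2 flags):

# On the FreeNAS box (server side):
iperf -s

# On the workstation (client side), a 30-second test with per-second readouts:
iperf -c <server ip> -t 30 -i 1

# For the reverse direction, start the server on the workstation instead and
# run the client from the FreeNAS shell.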
 

Monoman

Active Member
Oct 16, 2013
Re-reading your post, I'm guessing that on a five-drive RAIDZ2 the speeds you're seeing are about the max from the array. It would still be interesting to see how much more you "could" get with a faster array.

One area we've not talked about yet is your workstation: what drive(s) do you have in it?
 

ttabbal

Active Member
Mar 10, 2016
With a single RAIDZ2 vdev, that's about what you should expect for sequential throughput. You need more vdevs if you want to go faster to the platters.
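
As a sketch of what adding vdevs looks like (hypothetical pool and disk names, not the OP's layout): striping two RAIDZ2 vdevs in one pool roughly doubles the sequential ceiling compared to a single vdev.

# Hypothetical pool with two 5-disk RAIDZ2 vdevs striped together
zpool create tank \
    raidz2 da0 da1 da2 da3 da4 \
    raidz2 da5 da6 da7 da8 da9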
 

CobaltFire

New Member
Nov 7, 2015
The workstation is a Ryzen 5 with 16GB RAM and a 500GB Samsung 850 EVO on SATA3.

If that's about what I should expect, I'm perfectly fine with it. I just couldn't find anything telling me what speeds TO expect.
 

CobaltFire

New Member
Nov 7, 2015
Not sure if I did this right, because the results iperf gave me are frankly baffling.

bin/iperf.exe -c 10.0.1.210 -P 1 -i 1 -p 5001 -f G -t 10
------------------------------------------------------------
Client connecting to 10.0.1.210, TCP port 5001
TCP window size: 0.00 GByte (default)
------------------------------------------------------------
[ 3] local 10.0.1.27 port 52080 connected with 10.0.1.210 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 1.0 sec 46088 GBytes 46088 GBytes/sec
[ 3] 1.0- 2.0 sec 46792 GBytes 46792 GBytes/sec
[ 3] 2.0- 3.0 sec 47060 GBytes 47060 GBytes/sec
[ 3] 3.0- 4.0 sec 46600 GBytes 46600 GBytes/sec
[ 3] 4.0- 5.0 sec 47360 GBytes 47360 GBytes/sec
[ 3] 5.0- 6.0 sec 47296 GBytes 47296 GBytes/sec
[ 3] 6.0- 7.0 sec 47364 GBytes 47364 GBytes/sec
[ 3] 7.0- 8.0 sec 47240 GBytes 47240 GBytes/sec
[ 3] 8.0- 9.0 sec 47356 GBytes 47356 GBytes/sec
[ 3] 9.0-10.0 sec 47340 GBytes 47340 GBytes/sec
[ 3] 0.0-10.0 sec 470500 GBytes 47045 GBytes/sec
Done.

bin/iperf.exe -c 10.0.1.210 -P 1 -i 1 -p 5001 -f G -t 10 -d -L 5001
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 0.00 GByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 10.0.1.210, TCP port 5001
TCP window size: 0.00 GByte (default)
------------------------------------------------------------
[ 4] local 10.0.1.27 port 52142 connected with 10.0.1.210 port 5001
[ ID] Interval Transfer Bandwidth
[ 4] 0.0- 1.0 sec 95876 GBytes 95876 GBytes/sec
[ 4] 1.0- 2.0 sec 97928 GBytes 97928 GBytes/sec
[ 4] 2.0- 3.0 sec 95644 GBytes 95644 GBytes/sec
[ 4] 3.0- 4.0 sec 93720 GBytes 93720 GBytes/sec
[ 4] 4.0- 5.0 sec 102676 GBytes 102676 GBytes/sec
[ 4] 5.0- 6.0 sec 98432 GBytes 98432 GBytes/sec
[ 4] 6.0- 7.0 sec 96208 GBytes 96208 GBytes/sec
[ 4] 7.0- 8.0 sec 99224 GBytes 99224 GBytes/sec
[ 4] 8.0- 9.0 sec 98664 GBytes 98664 GBytes/sec
[ 4] 9.0-10.0 sec 97288 GBytes 97288 GBytes/sec
[ 4] 0.0-10.0 sec 975664 GBytes 97557 GBytes/sec

Is it just me or is that reporting WAY higher bandwidth than I should even theoretically pull? I think I broke something. Maybe the laws of physics?

EDIT - Using the built-in iperf 2.0.5 on FreeNAS, and jperf 2.0.2 with iperf 2.0.5 stuffed in its bin folder on my desktop, which is running Windows Server 2016.
 

Rand__

Well-Known Member
Mar 6, 2014
Ah, OK, my versions never rounded to GBytes.
What happens if you run just
iperf -c <server ip> and iperf -s (i.e. without all the options)?
 

CobaltFire

New Member
Nov 7, 2015
The server was running iperf -sD; I'll try the client with no options and see what I get when I get home.