FreeNAS 10 Beta 2: Performance


_alex

Active Member
Jan 28, 2016
866
97
28
Bavaria / Germany
On this setup, however, I'm seeing others get around 400MB/s writes.
I wonder if these have 6x spinners in a single Raid-Z2 vdev and then get those 400MB/s writes for a single large file-copy over SMB.

I agree about the FreeNAS 10 interface; I realize it's a Beta, so I'll hold my assessment until the final product is released.
Hm, I think performance and the bugs will get better.
What I miss is something like an overview that shows the big picture of what is configured on the box.
I.e. a simple list of all active shares with their type, space and usage, plus an edit/details button at the end (or expandable rows), would help a lot.
Just too many clicks to get somewhere, and many important things are nicely hidden behind those columns.
 

msvirtualguy

Active Member
Jan 23, 2013
494
244
43
msvirtualguy.com
Ran through the network tuning and still the same.

I also checked ZIL usage, and I do see the I/O hit the ZIL when I have sync=always, but throughput drops to 35MB/s? That makes zero sense to me.

You can see it in the graph (I only included one spindle, da5, to show what I'm talking about, but the I/O looked the same on all spindles during this write) along with the two mirrored SSDs for the ZIL, da6 and da7.

You can clearly see that when sync=always the I/O hits the ZIL and drops on the spindles, but the actual throughput falls from 160MB/s to 35MB/s.

With sync=standard or disabled you can clearly see that the ZIL is not used and the I/O on the spindles increases. In the middle of the transfer I enabled sync=always, then disabled it again after a short bit.
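(For anyone wanting to reproduce it: the toggling is just the standard zfs property change - the dataset name below is a placeholder - and gstat is one way to watch the per-disk I/O if you don't want to rely on the reporting graphs.)

zfs set sync=always tank/share      # force every write through the ZIL/SLOG
zfs set sync=standard tank/share    # back to the default behaviour
gstat -p                            # per-disk throughput, e.g. da5 vs the SLOG mirror da6/da7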

[Attached graph: upload_2017-1-7_11-34-36.png]

So in conclusion, I don't believe it's SMB protocol or network that's the issue at all.

_alex, to answer your question: the testing I've seen done was with the same 6 x 4TB RAIDZ2 vdev with no separate ZIL/SLOG device, and they were getting 400MB/s writes and over 1GB/s reads.
 

TuxDude

Well-Known Member
Sep 17, 2011
616
338
63
Looking at those graphs, I would say your writes are never hitting the SSDs, or at least the vast majority of them aren't - look at the scales. Even when the SSDs are getting used, they're each doing ~150 KB/s (and since they're mirrored, that's ~150 KB/s to the ZIL/SLOG/whatever-it-is - I'm not a ZFS guy). But you're doing ~40MB/s per spindle (discounting parity writes, that's 4 x 40 = 160MB/s of real data being written to the array). I would say it looks like nothing more than metadata is hitting those SSDs.
 

_alex

Active Member
Jan 28, 2016
866
97
28
Bavaria / Germany
I really wonder how it would work to get 400MB/s write to a single vdev without SLOG.
To my understanding this would only work for async writes, i.e. with sync left at standard or turned off rather than set to always.

I have a very similar box, a 2U Intel 12x LFF chassis equipped with 6x 2TB WD2000FYYZ, dual 2670 + 128GB RAM, that is currently waiting to be configured for DR/offsite backup with ZFS. I can bring this machine up with FreeNAS before I complete that setup and see what I get against a Linux box with fio. Could also add a 100GB S3700 as SLOG.
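Roughly the kind of fio runs I have in mind for that comparison (directory and size are just placeholders, nothing measured yet):

fio --name=seqwrite --directory=/mnt/tank/test --rw=write --bs=1M --size=16G --numjobs=1 --ioengine=psync
fio --name=syncwrite --directory=/mnt/tank/test --rw=write --bs=1M --size=16G --numjobs=1 --ioengine=psync --fsync=1

The second run issues an fsync after every write, which should show the sync-write / SLOG path instead of the async one.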

My problem is I don't have a single box that runs Windows on bare metal, and afaik also no Windows VM with 10G networking (I use them mainly for office-filetype madness + to see how crappy IE is ...) - so SMB from Windows is something I could not test...
 

msvirtualguy

Active Member
Jan 23, 2013
494
244
43
msvirtualguy.com
Revisiting this, yes... I see that it was KB, not MB as I originally assumed.

I still have the setup. I've tried all recommendations in this thread.

Just a quick question, how can I tell if all remaining memory is being used for ARC? Is that automated or do I have to configure that?
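The only place I'd know to look is the FreeBSD sysctl side, assuming FN10 exposes it the same way:

sysctl kstat.zfs.misc.arcstats.size   # current ARC size in bytes
sysctl vfs.zfs.arc_max                # configured ARC ceiling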

Any other recommendations? I'm actually thinking of moving to unRAID and buying two large enterprise cache drives. People seem to be having good luck with it, and you can expand by a single drive.

Wondering if I should do two 4-disk RAIDZ1 vdevs instead.
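(Roughly this layout, disk names are just placeholders:)

zpool create tank raidz1 da0 da1 da2 da3 raidz1 da4 da5 da6 da7   # two 4-disk RAIDZ1 vdevs striped together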
 

Kev

Active Member
Feb 16, 2015
461
111
43
41
Anyone have some updates? I'm running a Xeon D-1540, ESXi 6, 6 cores for FN10, and 7 x 7200rpm HGST 4TB drives in RAIDZ2, and I'm not able to get much more than 140MB/sec to or from a Windows 10 machine on the same vSwitch. Actually, this 140MB/sec is to the small 256GB SSD I have inside for testing, not even my storage array. Sometimes I see it start off at 300MB/sec and hold that for a few seconds before it slowly declines to 140MB/sec. Some network TCP auto-tuning happening?
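To rule the network in or out I suppose an iperf3 run between the Windows 10 VM and FN10 would show whether the wire itself flattens out the same way (assuming iperf3 is available on both ends):

iperf3 -s                        # on the FreeNAS box
iperf3 -c <freenas-ip> -t 30     # on the Windows 10 machine, 30-second test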