FreeNAS 10 Beta 2: Performance


msvirtualguy

Active Member
Jan 23, 2013
494
244
43
msvirtualguy.com
Hi guys/gals. I stood up a decent FreeNAS 10 Beta 2 server for testing and ran some file transfer tests, and the results aren't looking all that great. I realize it's a beta, but I want to see what you guys have been experiencing with the FreeNAS 10 Beta.

My setup consists of:
Supermicro CSE-825TQ-563LPB
Supermicro X9SRL-F
Xeon 2670 SR0K
64GB DDR3 ECC 1333
SAS2008
6 x Seagate NAS HDD ST4000VN000 4TB in RAIDZ2
2 x Intel S3700 200GB SSDs, mirrored ZIL (SLOG)
Intel X520-DA2

A 10GB file transfer from my PC (SSD) over 10GbE to a FreeNAS share caps at around 160 MB/s:

[screenshot: upload_2017-1-6_13-39-49.png]

Reading the same file from FreeNAS back to the PC caps at around 500 MB/s:

[screenshot: upload_2017-1-6_13-41-6.png]

I'm thinking these numbers are pretty low?

Any suggestions or thoughts?
 
  • Like
Reactions: Patrick

PigLover

Moderator
Jan 26, 2011
3,184
1,545
113
Interesting. And disappointing.

I've torn down my FN10 test system for now. I was mostly testing functionality (esp. support for Bhyve based VMs). Too many bugs in the UI still and Bhyve networking isn't really stable yet (e.g., if you set jumbo frames on the NIC then their bridge setup fails and VMs start with no network - oops).

The test array I had was 100% SSD (fast ones at that - HP SAS3 drives) and I didn't have any trouble saturating 10GbE.

What build are you running? I do know that they updated to Samba 4.5.3 a week or so ago - maybe there is some booboo in that? The ZFS code is unmolested from FreeBSD so the array itself probably isn't the issue.

See what kind of speeds you can get using NFS to a Linux box...that should tell you real fast if it's a ZFS/disk issue or something else.
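A quick way to run that NFS check from a Linux client - just a rough sketch; the server IP 10.0.0.10, the export path /mnt/tank/share, and the mount point are all assumptions, so substitute your own:

```shell
# Mount the FreeNAS NFS export on a Linux box (IP and paths are hypothetical)
sudo mkdir -p /mnt/fntest
sudo mount -t nfs 10.0.0.10:/mnt/tank/share /mnt/fntest

# Sequential write: push 10GB over the wire and note the MB/s dd reports
dd if=/dev/zero of=/mnt/fntest/testfile bs=1M count=10240 conv=fsync

# Sequential read: drop the client page cache first so we really read over NFS
sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'
dd if=/mnt/fntest/testfile of=/dev/null bs=1M
```

If NFS writes come in much faster than the ~160 MB/s seen over SMB, the pool itself is probably fine and the problem is in Samba or the client.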
 
  • Like
Reactions: _alex and Patrick

TuxDude

Well-Known Member
Sep 17, 2011
616
338
63
Writing to a dual-parity ZFS vdev - hopefully sequentially, though you can never be quite sure with just a Windows file copy - I wouldn't expect to be super fast. ZFS is designed for reliability and data protection first; performance is secondary, and no parity RAID is great at write performance. I'd recommend ensuring there are no other loads on it and running proper benchmarking software (e.g. IOMeter if you like Windows) before coming to any real conclusions.
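One way to take the network out of the picture entirely is to benchmark the pool locally from the FreeNAS shell. A minimal fio sketch - the dataset path /mnt/tank/bench is an assumption, point it at a real dataset on your pool:

```shell
# Sequential 1M writes straight to the pool, no SMB/network in the path
# (directory is hypothetical - use a dataset on your pool)
fio --name=seqwrite --directory=/mnt/tank/bench \
    --rw=write --bs=1M --size=10g --numjobs=1 \
    --ioengine=psync --group_reporting
```

If the local sequential write lands around the ~400 MB/s others report for similar pools, the bottleneck is the network or Samba rather than the disks.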
 
  • Like
Reactions: _alex

msvirtualguy

Active Member
Jan 23, 2013
494
244
43
msvirtualguy.com
I don't like synthetic tests; I like to see how things perform real world...lol...but you're right, it could be a whole slew of things outside of FreeNAS causing slow speeds.

On this setup, however, I'm seeing others get around 400MB/s writes.

There is no other load, this server is not in production, only built for FileSharing, no VM's, nothing.
 

PigLover

Moderator
Jan 26, 2011
3,184
1,545
113
You might also try playing with the pool parameters. Make sure atime is off and check the sync/async setting. You have the dual S3700s for a ZIL, but if you've set up for async writes (which I think is the FreeNAS default) then you're likely bypassing them completely. You have one of those configs where sync writes might just go faster. Maybe.

I found on a FN9 setup that with a pool of really slow hard drives (2.5" Seagate 4TB shucked from USB enclosures) and a really fast ZIL (Samsung SM961 4x PCIe NVMe M.2), setting "sync=always" actually made writes go much faster. I don't know if your modestly slow NAS drives and fast SATA SSDs have enough of a speed gap to make this true for you - but it might be worth it.
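Those knobs can all be flipped from the FreeNAS shell with the standard zfs tools; a sketch, assuming the pool/dataset is named tank/share (substitute yours):

```shell
# Check the current values first (dataset name 'tank/share' is an assumption)
zfs get atime,sync,logbias,compression tank/share

# Access-time updates are pure overhead for a file share
zfs set atime=off tank/share

# Force every write through the ZIL so the mirrored S3700 SLOG gets used
zfs set sync=always tank/share

# ...and to return to the default behaviour if it makes things worse:
zfs set sync=standard tank/share
```

Properties set on a dataset are inherited by its children, so setting them at the pool root covers everything below it.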
 

_alex

Active Member
Jan 28, 2016
866
97
28
Bavaria / Germany
I'd also guess that your writes were not hitting the SLOG but the pool directly, which would give you roughly the speed of a single HDD - about what you saw.

sync=always should change this, and with striped mirrors throughput should also multiply with the number of mirrors/vdevs.
 

whitey

Moderator
Jun 30, 2014
2,766
868
113
41
NFS will be sync by default if using ESXi anyways I know.

NFS by default will implement sync writes as requested by the ESXi client. By default, FreeNAS will properly store data using sync mode for an ESXi client. That's why it is slow. You can make it faster with a SSD SLOG device. How much faster is basically a function of how fast the SLOG device is.

Shamelessly stolen from freenas forum post

He's using SMB of course so there may be a litany of other variables.
 

_alex

Active Member
Jan 28, 2016
866
97
28
Bavaria / Germany
Right, if that is NFS it can't be related to sync.
But I wonder if it's maybe the SMB shares, and whether sync would be forced there / be the same as sync=always.
 

Davewolfs

Active Member
Aug 6, 2015
339
32
28
Sounds like you need to do a comparison between OmniOS, ZoL and FreeNAS :)

Whatever results you are getting should theoretically be the same as FreeBSD 10.3.
 

_alex

Active Member
Jan 28, 2016
866
97
28
Bavaria / Germany
Can you verify that your writes hit the SLOG somehow?
I guess iostat could do this, but I'm not sure what is available on FreeNAS.
logbias=throughput could also cause the SLOG to be bypassed.
 

PigLover

Moderator
Jan 26, 2011
3,184
1,545
113
Can you verify that your writes hit the SLOG somehow?
I guess iostat could do this, but I'm not sure what is available on FreeNAS.
logbias=throughput could also cause the SLOG to be bypassed.
FreeNAS performance graphs should show that quite easily.
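If you'd rather watch it from the shell, zpool iostat breaks the numbers down per vdev; a sketch, assuming the pool is named tank:

```shell
# Per-vdev I/O, refreshed every second. The 'logs' section at the bottom
# lists the mirrored SLOG; if its write bandwidth column stays at zero
# during a file copy, the SLOG is being bypassed. (Pool name is assumed.)
zpool iostat -v tank 1
```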
 
  • Like
Reactions: _alex

_alex

Active Member
Jan 28, 2016
866
97
28
Bavaria / Germany
I used FreeNAS for the first time 2 days ago to try nephri's Proxmox plugin; I haven't found the perf graphs yet.

The GUI in the beta is so terrible I went back to 9 immediately. Slow, still buggy and, worst of all, just the wrong type of UI for the use case. IMHO those pseudo Miller columns are nice for browsing on a mobile, tablet or TV, but not really for managing something technical like a storage server.
 
  • Like
Reactions: PigLover

msvirtualguy

Active Member
Jan 23, 2013
494
244
43
msvirtualguy.com
OK, so here's where I'm at. I can't remove the ZIL - it gives me errors every time.

Enabled sync writes: drops from 160MB/s to like 30MB/s.
Changed SMB to version 3: 160MB/s
Turned off compression and atime: 160MB/s
Changed logbias to latency: 160MB/s
Changed jumbo frames/RSS/transmit & receive buffers/queues: 180MB/s

After the network changes, READS are over 1GB/s, which is sick. But writes still suck - they only went up 20MB/s after the network changes.

Correction: reads for one of the files I tested, a 7GB .img file, hit over 1GB/s, but a 10GB or 5GB AVI is still hovering around 450-460MB/s.
 

_alex

Active Member
Jan 28, 2016
866
97
28
Bavaria / Germany
Hm,
maybe large sequential writes just can't be handled faster with a single vdev, no matter what the settings.

Just my thoughts on this:

- The SLOG could in theory kick in, but ZFS will certainly not agree to hold a 10G file in it. So it will be bypassed and everything goes to the pool itself.
- When writing to the pool, the data will be split across all disks in the vdev, and considered written/acked after each chunk has reached its disk.
- Possibly the ZIL on the pool is used, which doubles the seek times: first the ZIL is written on the pool, then the write is acked, and then the ZIL is flushed from the pool to the disks with each txg. I'm not sure this is really the case, but I guess for large sequential single writes the ZIL (whether on a SLOG or on the pool) could be harmful to performance in the end. This is maybe what logbias=throughput lets you control for such pools/workloads.

So your performance would in the end stay at that of a single disk + compression + some positive effect arising from the fact that not everything is written to the same disk but to multiple disks.

As your data is close to incompressible, there is not a huge benefit to expect from compression in this case.
What remains is the (slowest) seek time for the writes + the time to actually write to the pool.

Would be curious to see what others think.
 

msvirtualguy

Active Member
Jan 23, 2013
494
244
43
msvirtualguy.com
Working on going back to the latest stable 9.x version and will retest. I don't think that's going to have an impact, but we shall see. I agree about the FreeNAS 10 interface; I realize it's a beta, so I'll hold my assessment until the final product is released.
 

Davewolfs

Active Member
Aug 6, 2015
339
32
28
The additional comments just reminded me of something.

You should run iperf between the two machines. Make sure that your send and receive can saturate the link. Just because receive buffers are good does not mean the send ones are.
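A minimal sketch of that test in both directions, assuming iperf3 is available on both ends and the FreeNAS box is at 10.0.0.10 (both are assumptions):

```shell
# On the FreeNAS box: start the server
iperf3 -s

# On the PC: test PC -> FreeNAS, the direction your slow writes take
iperf3 -c 10.0.0.10 -t 30

# Reverse the direction (FreeNAS -> PC), matching your reads
iperf3 -c 10.0.0.10 -t 30 -R
```

If the PC -> FreeNAS run can't get near line rate while the reverse can, the asymmetry you're seeing is a network/TCP problem, not ZFS.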
 

msvirtualguy

Active Member
Jan 23, 2013
494
244
43
msvirtualguy.com
The additional comments just reminded me of something.

You should run iperf between the two machines. Make sure that your send and receive can saturate the link. Just because receive buffers are good does not mean the send ones are.
I thought about that too...I'll do that as soon as 9.10 is up. Should be a couple of minutes.