FreeNAS server...will this hardware suffice? Multiple zpools?


IamSpartacus

Well-Known Member
Mar 14, 2016
So I'm considering a FreeNAS build for shared storage to present to VMware hosts. I'd be re-purposing the VSAN drives I'm currently using in my hosts which are as follows:

4 x Hitachi 400GB HUSSL SAS SSDs
4 x Intel S3500 800GB SATA SSDs

For MoBo/RAM I've got a SuperMicro X10SDV-2C-7TP4F and 16GB of DDR4 2133 Registered RAM.


I have zero experience with FreeNAS/ZFS, so before I even get into the nitty-gritty research on this I'm just looking for some expert opinions on whether or not my drives will work well as a shared storage array to present to VMware. I'm assuming my MoBo/CPU/RAM combo will suffice. I'd really prefer not to spend any more money on additional drives at this time.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
If you're only using that # of drives, why not do an 'all-in-one' instead of a separate system?

I'm also not too sure how well an all-SSD setup would perform with only 2 cores. That mobo/setup is awesome though :)
 

IamSpartacus

Well-Known Member
Mar 14, 2016
If you're only using that # of drives, why not do an 'all-in-one' instead of a separate system?

I'm also not too sure how well an all-SSD setup would perform with only 2 cores. That mobo/setup is awesome though :)
Can you define what you mean by an AIO system in this regard?
 

whitey

Moderator
Jun 30, 2014
So I'm considering a FreeNAS build for shared storage to present to VMware hosts. I'd be re-purposing the VSAN drives I'm currently using in my hosts which are as follows:

4 x Hitachi 400GB HUSSL SAS SSDs
4 x Intel S3500 800GB SATA SSDs

For MoBo/RAM I've got a SuperMicro X10SDV-2C-7TP4F and 16GB of DDR4 2133 Registered RAM.


I have zero experience with FreeNAS/ZFS, so before I even get into the nitty-gritty research on this I'm just looking for some expert opinions on whether or not my drives will work well as a shared storage array to present to VMware. I'm assuming my MoBo/CPU/RAM combo will suffice. I'd really prefer not to spend any more money on additional drives at this time.
Looks like a fine system to me for dedicated FreeNAS/ZFS duties. If you want a lil' more bang for your buck, as @T_Minus touched on, go the AIO route (and throw in more memory) :-D

As a standalone filer though, 16GB of memory is fine for FreeNAS; my AIOs have 2 vCPUs, 12GB of memory, and vmxnet3 NICs (backed by physical 10G NICs/switches of course).

EDIT: Also, with all-SSD disks I'd do a capacity pool of the S3500s (RAIDZ, and back that sh|t up, or else get two more 800GB S3500s and go RAIDZ2) and a performance pool of the four 400GB HUSSLs, probably a RAID10/striped-mirror config, but arguments could be made for other disk layouts.

Are you interested in spinners (for a super-capacity pool with SSD acceleration) at all, or is this all VM-only storage?
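For illustration, rough back-of-the-envelope numbers for that two-pool layout; the device names (da0-da7) and pool names are just placeholders, and the capacities are raw figures before ZFS overhead and free-space headroom:

Code:
# Rough numbers for the two-pool layout suggested above.
# Device names (da0-da7) and pool names are hypothetical placeholders.

S3500_GB = 800   # Intel S3500 SATA SSDs -> capacity pool
HUSSL_GB = 400   # Hitachi HUSSL SAS SSDs -> performance pool

# RAIDZ loses one disk to parity, RAIDZ2 loses two, striped mirrors lose half.
print("S3500 4-disk RAIDZ usable    ~", 3 * S3500_GB, "GB raw")   # 2400 GB
print("S3500 6-disk RAIDZ2 usable   ~", 4 * S3500_GB, "GB raw")   # 3200 GB (needs two more drives)
print("HUSSL striped mirrors usable ~", 2 * HUSSL_GB, "GB raw")   # 800 GB

# Under the hood the pools would be built with commands roughly like these
# (shown as strings only; in FreeNAS you'd create them from the GUI):
print("zpool create tank-capacity raidz da0 da1 da2 da3")
print("zpool create tank-perf mirror da4 da5 mirror da6 da7")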
 

IamSpartacus

Well-Known Member
Mar 14, 2016
Looks like a fine system to me for dedicated FreeNAS/ZFS duties. If you want a lil' more bang for your buck, as @T_Minus touched on, go the AIO route (and throw in more memory) :-D

As a standalone filer though, 16GB of memory is fine for FreeNAS; my AIOs have 2 vCPUs, 12GB of memory, and vmxnet3 NICs (backed by physical 10G NICs/switches of course).

EDIT: Also, with all-SSD disks I'd do a capacity pool of the S3500s (RAIDZ, and back that sh|t up, or else get two more 800GB S3500s and go RAIDZ2) and a performance pool of the four 400GB HUSSLs, probably a RAID10/striped-mirror config, but arguments could be made for other disk layouts.

Are you interested in spinners (for a super-capacity pool with SSD acceleration) at all, or is this all VM-only storage?
@whitey @T_Minus I think I'm in agreement with you both on how the server is going to be set up (in a VM on ESXi). I was just thrown by the AIO verbiage because I'm not going to have any of my bulk (spinner) disks in this server. This server will be for VM shared storage only, so the only VM that will run on this ESXi host is FreeNAS (or napp-it if I went that route).

With regard to the allocation of the Hitachi HUSSLs, would I not want to dedicate any of them as SLOGs?
 

whitey

Moderator
Jun 30, 2014
Yeah, those 400GB HUSSL devices are too big/a waste as a SLOG; a 100GB one, OTOH, would work (I have a bunch of those for that duty). If they're only for VM storage, call the S3500s tier-2 storage and the HUSSLs tier-1 storage :-D and pick your storage protocol preference. I will warn you though, a zvol iSCSI share does seem to perform better than an NFS connection from FreeNAS to vSphere, but not by any sort of drastic/showstopper difference in my book vs. the ease of use of NFS along with a wealth of other benefits.
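To put rough numbers behind the "too big for a SLOG" point, a quick back-of-the-envelope calc (the txg interval and headroom figures are rule-of-thumb assumptions, nothing FreeNAS-specific):

Code:
# Rough rule-of-thumb SLOG sizing.
# Assumptions: 10GbE front end, ZFS commits a transaction group roughly
# every 5 seconds, and you keep a couple of txgs worth of headroom.
link_gbit = 10                       # 10GbE to the filer
txg_seconds = 5                      # approximate transaction group interval
headroom_txgs = 2                    # keep ~2 txgs of in-flight sync writes

max_write_gb_per_sec = link_gbit / 8          # ~1.25 GB/s at line rate
slog_needed_gb = max_write_gb_per_sec * txg_seconds * headroom_txgs
print(f"SLOG only needs roughly {slog_needed_gb:.1f} GB")    # ~12.5 GB
# ...which is why a 400GB HUSSL is wasted as a SLOG and a ~100GB device is plenty.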
 

IamSpartacus

Well-Known Member
Mar 14, 2016
Yeah, those 400GB HUSSL devices are too big/a waste as a SLOG; a 100GB one, OTOH, would work (I have a bunch of those for that duty). If they're only for VM storage, call the S3500s tier-2 storage and the HUSSLs tier-1 storage :-D and pick your storage protocol preference. I will warn you though, a zvol iSCSI share does seem to perform better than an NFS connection from FreeNAS to vSphere, but not by any sort of drastic/showstopper difference in my book vs. the ease of use of NFS along with a wealth of other benefits.
Interesting. I do prefer to use NFS because of the ease of use, as you alluded to, but I also like to squeeze every ounce of performance I can out of my hardware...so it's a tough call. How have you measured your storage throughput over both protocols, btw?

Also, good tip on using smaller SSDs as a SLOG. Are you mirroring your SLOG or just using one in your server(s)? And do you think the D-1508 will suffice without becoming my bottleneck?
 

whitey

Moderator
Jun 30, 2014
I have rough/high-level numbers from simply watching zpool iostat, Cacti graphs (SNMP off the 10G EX3300 switch), and ESXi esxtop numbers in 'real-world' use-case scenarios, and I can in good conscience say that, at least on FreeNAS, you will 'usually' see a 15-20% performance/throughput bump when using iSCSI vs. NFS in my experience. At least FreeNAS to vSphere, that's what I have noticed, even over the same Ethernet fabric/infra (dedicated storage VLAN of course) and using the SAME exact damned zpool/config, side by side...blows my mind a lil'.

EDIT: Non-mirrored HUSSL4010, BTW. ZFS can suffer SLOG loss and not lose data (it just loses performance, on a spinner pool at least; a SLOG on an all-SSD pool is kinda a waste IMHO/2 cents), so I'm personally not too concerned about that, especially in a home-lab setting, much as I DO push my lab infra. In PRD I guess you could argue for a mirrored SLOG :-D
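If you want something a bit more repeatable than eyeballing it, here's a minimal sketch of scripting the zpool iostat approach mentioned above (pool name "tank1" is a placeholder; run it on the filer while you drive the same test over NFS and then over iSCSI):

Code:
import subprocess

SUFFIX = {"K": 1e3, "M": 1e6, "G": 1e9, "T": 1e12}

def to_bytes(field):
    # zpool iostat prints human-readable figures like "1.21M"
    if field and field[-1] in SUFFIX:
        return float(field[:-1]) * SUFFIX[field[-1]]
    return float(field)

def pool_bandwidth(pool="tank1", interval=5):
    # Take two samples; the first is an average since boot, so skip it.
    out = subprocess.check_output(
        ["zpool", "iostat", pool, str(interval), "2"], text=True)
    # last line: name  alloc  free  read-ops  write-ops  read-bw  write-bw
    last = out.strip().splitlines()[-1].split()
    return to_bytes(last[-2]) / 1e6, to_bytes(last[-1]) / 1e6

if __name__ == "__main__":
    read_mbps, write_mbps = pool_bandwidth()
    print(f"read {read_mbps:.1f} MB/s, write {write_mbps:.1f} MB/s")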
 

azev

Well-Known Member
Jan 18, 2013
For my lab setup I set up an AIO with the SSD pool striped, basically RAID 0. I back up all my VMs using Veeam if I ever need to restore them.
 

IamSpartacus

Well-Known Member
Mar 14, 2016
For my lab setup I set up an AIO with the SSD pool striped, basically RAID 0. I back up all my VMs using Veeam if I ever need to restore them.
I back up with Veeam as well, but if I'm going to store all my VMs on a single node (thus introducing a SPOF) I will be going with a RAID10 config to give me some redundancy. Not so much for recovery as for uptime.
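Purely for illustration (assuming all eight SSDs ended up in one pool, with mirrors built from like-sized pairs), the capacity trade-off looks like this:

Code:
# RAID0-style stripe vs RAID10-style striped mirrors, raw capacity before ZFS overhead.
drives_gb = [400] * 4 + [800] * 4        # 4x HUSSL + 4x S3500 (illustrative only)

stripe_gb = sum(drives_gb)               # no redundancy: any disk loss kills the pool
mirror_gb = sum(drives_gb) // 2          # mirrored pairs of like-sized disks

print(f"stripe:          ~{stripe_gb} GB usable, zero disk failures tolerated")
print(f"striped mirrors: ~{mirror_gb} GB usable, one disk per mirror pair can fail")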
 

azev

Well-Known Member
Jan 18, 2013
For my lab use case, performance and total pool size take much higher priority than uptime.
There are some important VMs such as AD etc. that run the whole house, but since I have two sites (I put a server in my brother's basement and link our houses via VPN), even if my main VM storage crashed, the basic amenities for the home would still be up (internet access, Wi-Fi, etc.).
So far, in about 6 months or so, it's been stable.
 

IamSpartacus

Well-Known Member
Mar 14, 2016
For my lab use case, performance and total pool size take much higher priority than uptime.
There are some important VMs such as AD etc. that run the whole house, but since I have two sites (I put a server in my brother's basement and link our houses via VPN), even if my main VM storage crashed, the basic amenities for the home would still be up (internet access, Wi-Fi, etc.).
So far, in about 6 months or so, it's been stable.
I don't really use the term "homelab", for a reason: while yes, I do a lot of what I do for learning, my home network runs a lot of services that I don't want to be down. Specifically, my Plex server is shared among many family members/very close friends who've come to rely on it.

For just VMs, the space I'll get out of running RAID10 will more than suffice. All my bulk storage is on other servers.
 

IamSpartacus

Well-Known Member
Mar 14, 2016
Why are you moving those drives out of vSAN for VM storage?
I'm learning that vSAN really isn't the way to go unless all your hardware is on the HCL (my controllers aren't). Also, I find myself tinkering/doing maintenance on my hosts often, and that causes issues if you take a node offline for more than 30 minutes. And lastly, you need at least 3 hosts (preferably 4) to get all the benefits of a vSAN cluster, which is great until you realize you only have 4 hosts to satisfy that requirement and not because you're actually making use of them.

I feel that a 2-3 node HA cluster all connecting to a shared storage server will fit my needs better.
 

Rand__

Well-Known Member
Mar 6, 2014
What was the effect?
I am looking to build a vSAN env and am not too happy with it yet. Of course I am not completely on the HCL either; I am having issues with software and performance where it's just not what I'd expect given the hardware I put in ;)
 

IamSpartacus

Well-Known Member
Mar 14, 2016
What was the effect?
I am looking to build a vSAN env and am not too happy with it yet. Of course I am not completely on the HCL either; I am having issues with software and performance where it's just not what I'd expect given the hardware I put in ;)
The performance is not what I had hoped for either, and it's just not as flexible as I want for my home network. Not saying I don't like the product, as I'm actually planning to use it at work this upcoming year, but I'll be using vSAN ready nodes and I obviously won't be tinkering.
 

IamSpartacus

Well-Known Member
Mar 14, 2016
@whitey What are the pros/cons of running FreeNAS bare metal vs. in a VM on ESXi, other than the obvious ability to share the hardware among other VMs?
 

Rand__

Well-Known Member
Mar 6, 2014
It used to be *not* recommended to run it in a VM, but nowadays it's kind of a non-issue (it always ran fine if you adhered to the basic principle of passing through an HBA instead of doing stuff like RDM).
I have found no real con to be honest, but maybe @whitey has ;)
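One rough sanity check after setting up passthrough (a heuristic sketch, not FreeNAS-specific; camcontrol is the stock FreeBSD tool, and matching on "VMware" is just a quick way to spot virtual disks):

Code:
import subprocess

# Run inside the FreeNAS VM: the SSDs should show up with their real model
# strings on the passed-through HBA, not as VMware virtual disks or RDMs.
devlist = subprocess.check_output(["camcontrol", "devlist"], text=True)
print(devlist)

if "VMware" in devlist:
    print("WARNING: some disks still look virtual, not behind the passed-through HBA")
else:
    print("All listed disks appear to be real devices on the passed-through HBA")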