FreeNAS server...will this hardware suffice? Multiple zpools?


IamSpartacus

Well-Known Member
Mar 14, 2016
2,515
650
113
It used to be *not* recommended to run it in a VM, but nowadays it's kind of not a big deal anymore (it always ran fine if you adhered to the basic principle of passing through an HBA instead of doing stuff like RDM).
I have found no real con to be honest, but maybe @whitey has;)
Yeah, I'll definitely be passing through a controller; that was never in question for me. I've used RDMs in the past and I just don't want to go there anymore.
 

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
Well, let's say over in the FN forum there were some folks who wanted it on the cheap (repurposed old consumer board style) and then had a bit of bad luck with their data...
 

IamSpartacus

Well-Known Member
Mar 14, 2016
2,515
650
113
Well, let's say over in the FN forum there were some folks who wanted it on the cheap (repurposed old consumer board style) and then had a bit of bad luck with their data...
Yea, that's just asking for it. No wonder Cyberjock is the way he is :eek:.
 

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
Well, he is who he is, and yes, part of it comes from people not reading/listening/searching, but part of it is just him ;)
All in all he's a good guy and always happy to help if you're willing to invest some effort yourself. Just not the diplomatic type ;)
 

IamSpartacus

Well-Known Member
Mar 14, 2016
2,515
650
113
When I think about it, I'll probably end up just running this server bare metal since I don't want to use any of its local storage for VMs. What's the recommended storage device(s) to install FreeNAS on for a bare metal build?
 

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
Hm, not sure what the current recommendation is since I'm running in a VM, but it used to be a USB 2.0 stick, 16GB IIRC.
If you are not reassigning it, the system dataset will live on it as well, but you shouldn't need USB 3 speeds for that.
 

IamSpartacus

Well-Known Member
Mar 14, 2016
2,515
650
113
OK, I think I've decided to go the FreeNAS route. The strong community support is a big sell for me as I like to research the f**k out of things, and thus the more material/users there are for me to research/bounce ideas off of, the better.

I'm going to go the iSCSI route for presenting volumes to my vSphere cluster. However, since this is my first foray into ZFS, I have some basic zpool setup questions based on my available disks.

I think it makes sense to set up a single zpool of 2 vdevs. One vdev would be the 4 Hitachi HUSSLs in a RAID10 config and the other vdev would be the 4 Intel S3500s. However, once this is configured I'm a little lost on where to go next to maximize my performance. Can one create volumes specific to vdevs, or do they just reside on the zpool with no specificity as to which disks they reside upon? Ideally I'd like to have my higher-I/O VMs reside on the Hitachis if possible.

Pardon my noobness with regard to ZFS.
 

whitey

Moderator
Jun 30, 2014
2,766
868
113
41
You can certainly create zvols on either of the RAID10 pools of Hitachi/Intel drives that you create. These zvols would then be configured for the iSCSI setup. Please see this super handy article (unless you have already stumbled across it). It's AIO-focused, and I think I saw that you had decided to go phys/bare metal, so not all of the article will apply (sorry, haven't had time to reply yet w/ pros/cons but will today).

The iSCSI section is good to reference for what you are trying to do but the whole article has a few gold nuggets hiding in it.

FreeNAS 9.10 on VMware ESXi 6.0 Guide | b3n.org

EDIT: Another article you may have already seen/read but just in case.

Yes, You Can Virtualize FreeNAS - FreeNAS - Open Source Storage Operating System
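
For the curious, here's a rough CLI sketch of what the GUI does under the hood (pool/zvol names and sizes here are just placeholders; in FreeNAS you'd normally do this from Storage and Sharing > Block (iSCSI) rather than the shell):

# create a sparse 200G zvol on the SSD pool to use as an iSCSI LUN
# (pool name "tank-ssd" and zvol name "vm-lun0" are made up for the example)
zfs create -s -V 200G -o volblocksize=16K tank-ssd/vm-lun0

# the iSCSI extent then points at the device node FreeBSD exposes for the zvol
ls /dev/zvol/tank-ssd/vm-lun0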
 
Last edited:

whitey

Moderator
Jun 30, 2014
2,766
868
113
41
PHYS:
Pros - Dedicated HW (in theory a more stable base/less to go wrong/reduced complexity)
Cons - Locked into only doing one thing w/ that piece o' HW, less flexible WRT multiple uses for that HW, phys HW is what it is, can only expand to what the system supports

AIO:
Pros - Flexibility, agility, multi-purpose nature of 'doing more w/ less', 10G networking included via vmxnet3 if doing a single AIO storage config (in-memory network transfers, super fast), can easily add resources (CPU/memory/network) w/ a simple shutdown/reboot, no waiting for HW to arrive
Cons - Risk of a PSOD from other VMs crashing the ESXi hypervisor host and taking down your AIO (very low risk but a risk nonetheless), system maintenance can be a bit of a shuffle/balancing act if all your VMs are on your AIO but you need to perform maintenance (have to sVMotion all VMs/I/O to another pool/stg/system)

I'm sure I am missing some but that is high level.
 

IamSpartacus

Well-Known Member
Mar 14, 2016
2,515
650
113
You can certainly create zvols on either of the RAID10 pools of Hitachi/Intel drives that you create. These zvols would then be configured for the iSCSI setup. Please see this super handy article (unless you have already stumbled across it). It's AIO-focused, and I think I saw that you had decided to go phys/bare metal, so not all of the article will apply (sorry, haven't had time to reply yet w/ pros/cons but will today).

The iSCSI section is good to reference for what you are trying to do but the whole article has a few gold nuggets hiding in it.

FreeNAS 9.10 on VMware ESXi 6.0 Guide | b3n.org

EDIT: Another article you may have already seen/read but just in case.

Yes, You Can Virtualize FreeNAS - FreeNAS - Open Source Storage Operating System
I'm def. going to give these a read tonight/this weekend. Thanks!


PHYS:
Pros - Dedicated HW (in theory a more stable base/less to go wrong/reduced complexity)
Cons - Locked into only doing one thing w/ that piece o' HW, less flexible WRT multiple uses for that HW, phys HW is what it is, can only expand to what the system supports

AIO:
Pros - Flexibility, agility, multi-purpose nature of 'doing more w/ less', 10G networking included via vmxnet3 if doing a single AIO storage config (in-memory network transfers, super fast), can easily add resources (CPU/memory/network) w/ a simple shutdown/reboot, no waiting for HW to arrive
Cons - Risk of a PSOD from other VMs crashing the ESXi hypervisor host and taking down your AIO (very low risk but a risk nonetheless), system maintenance can be a bit of a shuffle/balancing act if all your VMs are on your AIO but you need to perform maintenance (have to sVMotion all VMs/I/O to another pool/stg/system)

I'm sure I am missing some but that is high level.
Thanks again. I always love the flexibility of running OSes inside VMs on ESXi...BUT...in this particular case, since I'm not doing AIO and I don't see myself ever running additional VMs alongside FreeNAS on this hardware (the Xeon D-1508 is only 2C/4T, and 16GB of RAM), I'm leaning towards just installing FreeNAS on mirrored USB drives.
 

whitey

Moderator
Jun 30, 2014
2,766
868
113
41
Mirrored USB will work, but honestly I'd just slap a single 64GB SATA DOM in there and be done w/ it... guess how many times I've had a failed FreeNAS boot device... ZERO... lol. Maybe I'm just lucky, or the few dozen ZFS-based systems I've built over the last decade are too small a sample set to mean a damned thing :-D

Tell ya what doesn't work so swell: running ESXi on a USB stick, formatting out the rest of the USB disk's capacity (there's a lil' trick to do this), and THEN putting FreeNAS AIO vdisks/VM on there. That's a bad idea. We killed a USB stick that way on one config before we learned our lesson; the FreeNAS UI went belly up a few times before we decided it was a bad idea. Never any data loss on the ZFS volumes though, we just couldn't manage/do anything until a reboot of the AIO box.
 

IamSpartacus

Well-Known Member
Mar 14, 2016
2,515
650
113
Mirrored USB will work, but honestly I'd just slap a single 64GB SATA DOM in there and be done w/ it... guess how many times I've had a failed FreeNAS boot device... ZERO... lol. Maybe I'm just lucky, or the few dozen ZFS-based systems I've built over the last decade are too small a sample set to mean a damned thing :-D

Tell ya what doesn't work so swell: running ESXi on a USB stick, formatting out the rest of the USB disk's capacity (there's a lil' trick to do this), and THEN putting FreeNAS AIO vdisks/VM on there. That's a bad idea. We killed a USB stick that way on one config before we learned our lesson; the FreeNAS UI went belly up a few times before we decided it was a bad idea. Never any data loss on the ZFS volumes though, we just couldn't manage/do anything until a reboot of the AIO box.
I'd go SATA DOM if I had one, but I don't and couldn't get one by this weekend. I do however have multiple 16GB Kingston USB flash drives at my disposal.
 

IamSpartacus

Well-Known Member
Mar 14, 2016
2,515
650
113
Alright, I've got FreeNAS installed now and I'm ready to start configuring things. I'm still debating the pros/cons of using 1 zpool of 2 RAID10 vdevs or 2 zpools.

I realize that pooling all these drives into one zpool of 2 RAID10 vdevs will make management easier. I imagine the performance will be better as well since the data will be striped across more disks.

However, since these 2 sets of disks have drastically different endurance ratings, I'm also interested in separating them to a degree. That way I could put my higher-I/O workloads onto the Hitachis with their 38PB TBW rating and my lower workloads on the Intel S3500s.

For you more experienced ZFS users out there, tell me why I should use a single zpool over 2 in this scenario?

*NOTE* I have dual 10Gb connections and dual 1Gb connections in this server so network will not be a bottleneck.
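
Just to illustrate what I mean, the single-pool option would look roughly like this on the command line (device names are placeholders; I'd build it through the Volume Manager in practice):

# one pool, four 2-way mirror vdevs ("RAID10" style)
# da0-da3 = Hitachi HUSSL, da4-da7 = Intel S3500 (device names assumed)
zpool create tank \
    mirror da0 da1 mirror da2 da3 \
    mirror da4 da5 mirror da6 da7
# writes get striped across all four mirrors, so any zvol/dataset can land on any of the 8 disks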
 

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
If you mix devices with different speeds in a single pool, the pool's performance will be limited by the slowest device. Not sure if your two drive types are that far apart.
 

whitey

Moderator
Jun 30, 2014
2,766
868
113
41
DON'T DO IT!!! :-D

Two pools is my vote for sure w/ devices w/ such drastic performance/endurance characteristics.
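
Roughly speaking, something like this (pool and device names are just placeholders; you'd build the equivalent in the Volume Manager):

# two separate pools, each a stripe of two mirrors
zpool create ssd-fast mirror da0 da1 mirror da2 da3   # Hitachi HUSSL (high endurance)
zpool create ssd-bulk mirror da4 da5 mirror da6 da7   # Intel S3500 (lower endurance)

# then carve zvols for iSCSI out of whichever pool fits the workload
zfs create -s -V 200G ssd-fast/highio-lun0
zfs create -s -V 500G ssd-bulk/lowio-lun0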
 

IamSpartacus

Well-Known Member
Mar 14, 2016
2,515
650
113
If you mix devices with different speeds in a single pool, the pool's performance will be limited by the slowest device. Not sure if your two drive types are that far apart.
Read/write speeds of the S3500s and the HUSSLs really aren't all that different, but their endurance ratings sure are (450TB written for the S3500 vs. 38PB for the Hitachi).
 

Potatospud

New Member
Jan 30, 2017
17
2
3
39
That is one option I'm exploring, but I'm also looking at doing the exact same thing with FreeNAS instead of Napp-It. I want to explore both.
I opted for the FreeNAS + ESXi combo and have been quite pleased. This particular box is more for sandbox testing so I don't break the bare metal setups I have, which are ESXi and FreeNAS. I've been using 4x Intel S3500s (striped mirrors) in the bare metal FreeNAS box for SAN storage, exported via NFS over 10GbE, for over a year now and it's been flawless. For long-term data storage or data that is deemed critically important (docs, pics, home videos, local backups and those of friends and family, and especially the wife's docs, pics, etc.), I highly recommend ZFS.