After some advice on first FreeNAS build to replace current setup


Eds89

Member
Feb 21, 2016
Hi All,

I currently have a DIY, do-it-all server built at home that acts as a domain controller, Hyper-V host and storage server (an absolute mess of roles, hence the need to sort out proper replacements now that I have some money burning a hole in my pocket).
This machine is housed in a 24-bay Logic Case chassis, on a Supermicro X9SRA with an Intel Xeon E5-2618L and an LSI 9260-8i paired with an Intel 24-port expander. I have 8 WD Reds in two RAID 5 arrays from years ago, almost full of media and other data, including VM storage.
The VMs I run are a Plex server, a download server and a couple of test VMs.

My intention is to split the storage out into a standalone FreeNAS machine and transition to an ESXi host, with a direct connection between the two so the ESXi host can use FreeNAS for VM storage.

Where I need some advice is on FreeNAS (which I have never used before) and whether it is the most appropriate solution for my needs, and then on hardware suggestions for the standalone storage machine. I have seen suggestions that UnRAID or OpenFiler may be preferable, but FreeNAS seems to be the most recommended for ZFS, which I have seen many people touting as the way forward instead of hardware RAID.

Ultimate goals are a setup where VM performance is good; nothing impacts anything else (currently, watching something in Plex can impact VMs etc., as everything is sharing the same storage); low power consumption for the storage server; support for up to 3 clients streaming media at a time (mainly 720p content or lower); less reliance on hardware for RAID, so an easier transition to new hardware in future; and the ability to easily expand the arrays with more disks in the near future.

I guess software questions should come first, as they can impact hardware:
1. The FreeNAS memory suggestion is 1GB of RAM per TB. Is this per TB of attached storage, or of configured usable storage? I.e. if I had 4x 2TB drives in RAID 10, should I budget for 8GB of RAM, or 4GB?
2. Can FreeNAS do standard RAID 10? As the storage machine would be used for VM storage, I would think RAID 10 is the most appropriate level for this? N.B. I am not hugely clued up on ZFS RAID equivalents.
3. Is it recommended in FreeNAS to put all drives into a single pool and divide into volumes, or should I be splitting into multiple RAIDs based on purpose? i.e. one RAID 10 pool for VMs, RAIDZ2 for media?
4. Would people recommend creating SMB shares directly in FreeNAS for general/media storage, or would you just use VMs and store all files within the VM (so within a single VHD on FreeNAS)? I.e. for Plex, create a VHD in ESXi stored on FreeNAS, and then save all media to a "local" drive within the Plex VM? I would lean towards simply putting this all within a VHD attached to the VM, mainly because it means Plex doesn't then have to go out onto the network to read the file before sending it to the client to play. It would consider the storage local, and the only bottleneck would be the link between the storage machine and the hypervisor?
5. Can FreeNAS do Fibre Channel target mode? It sounded as though there may be some workaround to enable it; perhaps it is now a standard feature?
6. Based on 3, 4 and 5, is it going to be best for me to go for more of a SAN deployment, where all the storage on FreeNAS is presented to the hypervisor, and then it all gets assigned to VMs which become the user "front end" access? I wonder if I can create a volume on a RAID and present it to ESXi as a VMFS datastore for VM storage, and then create another volume on the same RAID and present it to ESXi as raw SCSI and assign it to an individual VM?

I guess then hardware wise:
1. For my use case above, I am thinking a Supermicro X9SCM-iiF with an E3-1220L v2 (not sure if this would be a little underpowered for a VM storage target, and whether I should be considering the E3-1265L instead?). The board supports up to 32GB RAM, has IPMI and dual onboard GbE. It also has 4 PCIe slots, which may be of use as below.
2. The chassis I have has 6 backplanes with mini-SAS connections. I was planning on dumping my 9260-8i and getting something like an LSI00301, which is also 8-port but just an HBA. I could then pair this with my Intel expander to allow me to connect all 24 of my drives. Would that be the best approach, or should I be considering 3 of those HBAs and dumping the expander?
3. I can probably get my hands on several QLogic 4Gb Fibre Channel HBAs, which is why I was asking about target support, so I can have a direct fibre connection between storage and hypervisor. This means I don't have to worry so much about Ethernet being a bottleneck when it comes to storage.
4. Should I have drives all the same size in FreeNAS RAIDs or can I mix capacities?

I know that is a lot of stuff to ask, and I would completely understand if people told me to go soak my head, but decent software RAID and deployment is a bit new to me, and I can't really afford to buy any test gear to give this a go. I kind of need to get it right first time so I can migrate everything over.

Any input would be greatly appreciated.

Love you all
Eds
 

MiniKnight

Well-Known Member
Mar 30, 2012
NYC
Eds, perhaps also look into using Proxmox. Linux-based. You get KVM virtual machines. You also get ZFS, albeit via the CLI.

I'm moving from FreeNAS to Proxmox since it's Linux instead of FreeBSD. I love FreeBSD but Linux is a steamroller.

Software:
  1. If you are using DDR3, just put as much RAM as you can. 16GB+ and ZFS will thank you.
  2. Yes. ZFS and FreeNAS can.
  3. I now just stripe across mirrors. That also lets you expand a pair at a time (rough commands at the end of this post).
  4. Yes. ZFS SMB shares. This is how easy it is even from Linux ZFS on Ubuntu: Create ZFS pool with NVMe L2ARC and share via SMB
  5. Perhaps just do iSCSI?
  6. Again, iSCSI?
Hardware:
  1. You can get a lot of compute cheap. E3 v2 is fine but there are a lot of cheap Xeons floating around. Stay DDR3 to keep costs low.
  2. With SATA disks, connecting them directly to an HBA is preferable to using expanders.
  3. 10GbE and 40GbE adapters are under $100. Maybe sell the Fibre Channel HBAs and just get lots of bandwidth. InfiniBand is another cheap option.
  4. If you do mirrors, mirror drives of the same size.
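
To give you an idea for 2, 3 and 4, this is roughly what it looks like from the command line. A minimal sketch only: the pool, dataset and disk names are made up, and in FreeNAS you would normally do all of this through the web GUI (the sharesmb line is how it works with ZFS on Linux, as in the Ubuntu guide above; FreeNAS manages SMB shares in its own UI).

  # two mirrored pairs striped together = the ZFS equivalent of RAID 10
  zpool create tank mirror da0 da1 mirror da2 da3
  # expand later by adding another mirrored pair to the stripe
  zpool add tank mirror da4 da5
  # a dataset for media, shared over SMB
  zfs create tank/media
  zfs set sharesmb=on tank/media
  # a zvol (block device) you can present to ESXi as an iSCSI extent for a VMFS datastore
  zfs create -V 500G tank/vmstore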
 

Eds89

Member
Feb 21, 2016
Thanks for the feedback.

Is Proxmox more focused on the virtualisation side of things, with storage being a secondary concern? I don't mind the CLI, but given my unfamiliarity with ZFS and software-based RAID solutions, a GUI would probably make my transition to it a little easier.

If I were to go iSCSI and have multiple VMs running off the storage server, I am guessing I would need to think about iSCSI MPIO or LACP? I had seen some people suggest that iSCSI over a LACP group is not so good, and that MPIO would be the better way to do it? That way a 4-port NIC could give me the same 4Gb of bandwidth as the fibre would. The limitation with 10GbE for me is that my home network is built around a 1GbE switch, so 10GbE would also require investment in a new switch, or a direct device-to-device 10GbE link, meaning less flexibility for adding another host in the future.
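
From what I've read so far, the MPIO setup on the ESXi side with the software iSCSI adapter looks something like the below (just to illustrate; the vmhba number, vmk interface, portgroup name and addresses are placeholders, and each extra NIC gets its own VMkernel port bound the same way):

  # a VMkernel port for iSCSI on its own portgroup (repeat per NIC in the MPIO set)
  esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI-A
  esxcli network ip interface ipv4 set -i vmk1 -I 10.10.1.11 -N 255.255.255.0 -t static
  # bind the VMkernel port to the software iSCSI adapter (MPIO rather than LACP)
  esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
  # point the adapter at the FreeNAS portal and rescan
  esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=10.10.1.20
  esxcli storage core adapter rescan --adapter=vmhba33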

Thanks
Eds
 

Rand__

Well-Known Member
Mar 6, 2014
I wouldn't bother with a SAN any more nowadays. Get a dual-port MLX card for the storage box; then you can direct-attach a secondary host in the future if you need it.
 

Eds89

Member
Feb 21, 2016
I wouldn't bother with a SAN any more nowadays. Get a dual-port MLX card for the storage box; then you can direct-attach a secondary host in the future if you need it.
I assume by this you mean a 10GbE card, with a direct connection between the storage box and the hypervisor, and using iSCSI?

Cheers
Eds
 

Rand__

Well-Known Member
Mar 6, 2014
10G or 40G; various rather cheap ones available :) Check OS support, of course.
And iSCSI or NFS, depending on what kind of storage you want to present. Note that NFS will require an SLOG device to work at acceptable speeds with ESXi/FreeNAS.
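
Rough idea of what the SLOG part looks like underneath, assuming a pool called tank and a spare SSD showing up as da4 (in FreeNAS you would add the log device through the volume manager GUI rather than at the shell):

  # add a fast SSD (ideally with power-loss protection) as a separate intent log
  zpool add tank log da4
  # ESXi over NFS issues sync writes, so check how sync is set on the pool/dataset
  zfs get sync tank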
 