FreeNAS / SOHO Server Build - can this be done with Proxmox?


FreeNASmike

New Member
Mar 14, 2017
Hi there,
I'm just about to push the order button on a new FreeNAS build.
This is my current part list:
-----
MOBO: Supermicro X10SDV-12C-TLN4F
CASE: Supermicro SC721TQ-250B (mini-tower)
RAM: Kingston 32GB DDR4 2133 Reg ECC (x2)
STORAGE DRIVES: HGST 3.5in 26.1MM 6000GB 128MB 7200RPM SAS 512E ULTRA ISE (x4)
HBA: https://www.broadcom.com/products/storage/host-bus-adapters/sas-9211-8i
-----
Please feel free to provide feedback on any of the above, by the way! (The reason for the 12C Xeon is that we are planning to run a lot of virtualized applications in the not-too-distant future.)


What I need advice on is the best setup for running FreeNAS virtualized under a hypervisor such as Proxmox.
The case/motherboard above has room for another two 2.5" drives, so I was thinking of adding a pair of mirrored SSDs, but I'm a little unclear on what experienced users would recommend.

I should note that I have posted a similar request over at the FreeNAS forums, but I'm already getting the impression that unless I am willing to use a proprietary hypervisor/host (i.e. VMware) then I must be crazy or have no idea what I am doing :)

Anyone have any experience getting FreeNAS running under Proxmox and/or Xen etc.?
 

ttabbal

Active Member
Mar 10, 2016
I imagine you could, but Proxmox supports ZFS natively, so you could just run the storage there. You can do PCI passthrough for a storage VM like the ESXi all-in-one design uses, but I don't see what you gain from the added complexity.
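For reference, if you did go the passthrough route anyway, handing the HBA to a FreeNAS guest on Proxmox is only a few steps. This is a sketch, not a full guide: the PCI address `01:00.0` and VM ID `100` are placeholders for whatever your system reports.

```shell
# Sketch: PCI passthrough of an LSI HBA to a FreeNAS guest on Proxmox.
# Assumes VT-d/IOMMU is supported; addresses and VM ID are placeholders.

# 1. Enable IOMMU on the kernel command line (Intel), e.g. in /etc/default/grub:
#      GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
#    then apply and reboot:
update-grub

# 2. Find the HBA's PCI address:
lspci | grep -i LSI

# 3. Pass the device through to the guest (writes a hostpci0 line
#    into /etc/pve/qemu-server/100.conf):
qm set 100 -hostpci0 01:00.0
```

The catch ttabbal alludes to: once the HBA is passed through, the host can no longer see those disks, so the "storage VM" has to be up before anything that depends on its shares.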

The newer FreeNAS uses a hypervisor setup (bhyve) similar in spirit to Proxmox. It's a little less mature, but it should work with that hardware. It might be worth looking into.
 
Reactions: PigLover

Patrick

Administrator
Staff member
Dec 21, 2010
I think @ttabbal is spot on. Proxmox actually has ZFS on Linux, Gluster, Ceph, md RAID, and others that you can use. The trick is that base-level management is more CLI-based, and nice features like clicking a button to install a service are a bit more involved. If you want better performance/reliability, that is an option.
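To give a feel for what "more CLI-based" means in practice, here is a sketch of building a pool from the four SAS drives on Proxmox itself and registering it as VM storage. The pool name `tank` and the `/dev/sd*` device names are placeholders (by-id paths are the safer choice in real use).

```shell
# Sketch: mirrored ZFS pool on the Proxmox host, registered as VM storage.
# Pool name and device names are placeholders for your system.

# Striped mirrors (RAID10-style) across the four 6TB drives:
zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd

# Register the pool with Proxmox so VM disks can live on it:
pvesm add zfspool tank -pool tank
```

After that, the pool shows up in the Proxmox GUI as a storage target; it's really only the file sharing (Samba/NFS) that stays on the CLI.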

You may want to check FreeNAS 10 out with bhyve. The FreeNAS team is making a push there and I know many web hosting folks who love bhyve.

If you did want to run FreeNAS and then run hypervisors, I do think avoiding an AIO (all-in-one) is best if it is not an absolute constraint.

On the platform, if you can handle a bit more power, here is what I would personally do for an AIO:

CPU: Intel Xeon E5-2683 V3 2.0GHz 9.6GT/s 35M 14Core CPU Processor | eBay
Motherboard: Amazon.com: Supermicro MBD-X10SRH-CF-B LGA2011/ Intel C612/ DDR4/ SATA3&SAS3&USB3.0/ V&2GbE/ ATX Server Motherboard: Computers & Accessories
RAM: Micron/Crucial, Samsung, or SK Hynix. Kingston modules are not validated on Supermicro platforms.

That motherboard/CPU combo will be around $650. It will be bigger (ATX), and at full bore the power consumption will be higher. Still, since you are saving ~$1,000 on the motherboard, plus a few dollars more by not needing the 9211-8i, I think it is worth it.

Power consumption will be marginally higher at idle. Under load, you will use more.

Aside from getting 14 cores, you also get 8 DIMM slots and quad-channel memory. If you run a lot of VMs on a 12C Xeon D, you are limited to ~10GB/core, which is not a ton. Also, the 12C and 16C models get memory-bandwidth starved fairly easily. ZFS also loves RAM, so adding more will benefit you.

On the storage side, with the E5 platform you get 10x SATA from the C612 PCH and 8x SAS from the onboard SAS 3008 (three generations of LSI controller newer than the 9211-8i's SAS 2008).

I know there are folks pushing 12C Xeon D for VM usage, but if you truly are going to do heavy VM usage, you want an E5 without a doubt.

You will probably save $400 and have more capacity with the E5 V3 which makes up for a lot of the difference in power consumption.
 
Reactions: vanfawx

PigLover

Moderator
Jan 26, 2011
Agree with @Patrick and @ttabbal. I've played with this 9-ways to Sunday. I don't think I'd try to virtualize FreeNAS under KVM. Yes - it can be done. Yes - it will work. With a bit of effort it can even be made to work well. And you'll tear your hair out several times trying to manage it, dealing with nitty and naggy idiosyncrasies.

Proxmox provides rich options for storage management. As noted above, you do have to manage the sharing part off the CLI, but, at its roots, FreeNAS is little more than a web GUI on top of ZFS, Samba, NFS, etc., and all of those are basically the same on Proxmox without the GUI (it is a REALLY nice GUI though...).
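The "sharing part off the CLI" amounts to ordinary Debian admin work. A sketch, assuming a pool named `tank` and a dataset `media` (both placeholder names), plus a placeholder LAN subnet:

```shell
# Sketch: NFS and Samba sharing by hand on a Proxmox (Debian) host.
# Pool/dataset names and the subnet are placeholders.

zfs create tank/media

# NFS: add an export and reload (ZFS's sharenfs property is an alternative):
echo '/tank/media 192.168.1.0/24(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra

# Samba: append a share definition to smb.conf and reload the daemon:
cat >> /etc/samba/smb.conf <<'EOF'
[media]
   path = /tank/media
   read only = no
EOF
systemctl reload smbd
```

This is exactly the plumbing FreeNAS wraps in its GUI, which is PigLover's point: the capability is all there, just without the buttons.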

Mostly I agree with Patrick when he suggests a dedicated storage box, probably FreeNAS and a separate virtualization host. This is the winning combo, it's not as much 'more expensive' as you might think, and you'll save many many headaches.

 
Reactions: Patrick

FreeNASmike

New Member
Mar 14, 2017
Thanks for the feedback and advice everyone.
I think I will just go with running FreeNAS on bare metal and virtualise everything else within that (via bhyve etc.).

Patrick - I am just now doing some research on the alternative motherboards you suggested. Given that I am under a fair bit of pressure to finalize this order, I was wondering if there are any articles you could point me towards regarding the issues you mentioned with the SoC boards (e.g. "12C and 16C models get memory bandwidth starved").

Also, what benefits (if any) would be gained from the newer generation LSI controller?
 

MiniKnight

Well-Known Member
Mar 30, 2012
NYC
Go E5.

Xeon D has half the memory channels of an E5 per CPU. At 22 cores even an E5 gets starved for bandwidth too.

The 3008 is SAS3. Better SATA support. PCIe 3, not PCIe 2.

E5 or E5 plus cheap NAS
 

Evan

Well-Known Member
Jan 6, 2016
If you're really running lots of VMs, you can't go past a 10(+)-core E5 with lots of RAM. 8 x 32GB RDIMMs is 256GB, which is really nice for VMs. That, or consider two smaller boxes, with one maybe always on providing storage and the 24x7 VMs, and spin the other up when you need more capacity.