Home server ESXi + FreeNAS


jonaspaulo

Member
Jan 26, 2015
Hi all,

I am new here, so first of all, thanks a lot for the site and forums, which are a helpful resource.
I am planning on building an ESXi based server for home use and hoping to get some tips.

The goals for my ESXi home server are the following:

1- Running a FreeNAS VM with 6x 4TB or 6TB WD Red disks running ZFS. Inside this FreeNAS instance I will also be running Plex for media streaming
2- About 10 virtualized routers / other SDN technologies (each taking up to 2 GB RAM)
3- Multiple test VMs for Windows Server, Red Hat, Kali and other security distributions

1 would always be running, and 2 and 3 on an as-needed basis.

To tackle this setup without spending a big pile of money, I am leaning towards two solutions (board + CPU):

One less scalable but cheaper -
ASRock C2750D4I server mainboard, mini-ITX, 8-core Intel Avoton, 12x SATA, up to 64GB - 430 euros for just the board + integrated CPU


The other, more scalable but more expensive - a Supermicro board with an integrated LSI controller (which I would pass through as an HBA to the FreeNAS VM), like the SUPERMICRO MBD-X10SL7-F-O but for socket FCLGA2011,
since I was thinking of going with an E5-2620 v2 (not v3, due to DDR4 being more expensive; the v2 has Intel VT-d, which is essential for the FreeNAS VM).


Regarding RAM, I was thinking of at least 32GB of ECC RAM (whatever Supermicro or ASRock advise for each board), but ideally 64GB if I have the money at the time to go for that.
So even if I go the 32GB route, it should be 2x 16GB sticks for future upgradability.


Now, based on the requirements and the proposed solutions, do you think one of these is the best option? Or are there others within the same price range that would fit me better?
Also, am I going to be limited CPU-wise anyhow? Does it make sense to get a dual-processor board for future use even if I only buy one CPU for now?

Thanks a lot
 
  • Like
Reactions: NeverDie

RTM

Well-Known Member
Jan 26, 2014
You should be aware that while the C2750 supports 64GB of memory, it does not support registered DIMMs, so if you want 16GB DIMMs you need some special (read: hard to find and probably expensive) 16GB unbuffered DIMMs.

Also, it sounds to me like 32GB is just barely enough, at least assuming your numbers for the SDN VMs are correct (an average might be better if you want to estimate).
ZFS (ZFS likes memory), 10x 2GB for the SDN VMs, and whatever the remaining VMs will need - it all adds up.
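
Just to put rough numbers on that (all assumptions, not measurements - the 1 GB per TB ZFS figure is only the oft-quoted rule of thumb, and the test-VM sizes are guesses):

```python
# Back-of-the-envelope RAM budget for the proposed host; every figure is an estimate.
zfs_base_gb   = 8            # common FreeNAS baseline recommendation
zfs_per_tb_gb = 1            # oft-quoted ~1 GB per TB of raw pool, only a rule of thumb
raw_pool_tb   = 6 * 4        # 6x 4TB WD Reds
freenas_gb    = zfs_base_gb + zfs_per_tb_gb * raw_pool_tb   # ~32 GB by the strict rule

routers_gb  = 10 * 2         # ten SDN/router VMs at up to 2 GB each
test_vms_gb = 3 * 4          # say three lab VMs at 4 GB each (pure guess)
overhead_gb = 2              # hypervisor + per-VM overhead, ballpark

total_gb = freenas_gb + routers_gb + test_vms_gb + overhead_gb
print(f"FreeNAS {freenas_gb} + routers {routers_gb} + test VMs {test_vms_gb} "
      f"+ overhead {overhead_gb} = ~{total_gb} GB")
# Even with FreeNAS trimmed well below the rule of thumb, 32 GB leaves little headroom.
```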

What are your feelings about buying used hardware? You can get a fairly cheap setup using older hardware.
 

artbird309

New Member
Feb 19, 2013
I would think about moving Plex outside your storage VM. They will be competing for resources, and it will be much harder to control them. I would run Plex in its own VM so you can control its resources much better and it doesn't try to take them from ZFS, or the other way around. This is what I do.
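
If you do split Plex out, one way to keep it from starving the storage VM is a memory reservation and CPU limit on the Plex VM. A rough pyVmomi sketch (not something from this thread - the host name, credentials, the VM name "plex" and all the numbers are placeholders):

```python
# Sketch: reserve a Plex VM some memory and cap its CPU via pyVmomi.
# Everything host/VM-specific here is a placeholder.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()            # home-lab host with a self-signed cert
si = SmartConnect(host="esxi.local", user="root", pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

# Find the VM by name with a container view
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
plex = next(vm for vm in view.view if vm.name == "plex")
view.DestroyView()

# Give Plex a small guaranteed slice and a hard CPU ceiling so transcodes
# can't steal everything from the NAS VM (example values only)
spec = vim.vm.ConfigSpec(
    memoryAllocation=vim.ResourceAllocationInfo(reservation=2048, limit=4096),  # MB
    cpuAllocation=vim.ResourceAllocationInfo(limit=4000),                       # MHz
)
plex.ReconfigVM_Task(spec=spec)

Disconnect(si)
```

The same settings can of course be made in the vSphere client's resource allocation dialog; the script is just the scriptable equivalent.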

Also, the Avoton C2750 does not support VT-d, so you would not be able to pass the HBA through to the storage VM. It is nice to have the integrated HBA on the board, but if you can't find a board you like with that, you can just get an IBM M1015 pretty cheap from eBay.

Like @RTM said, you can save a lot of money getting used hardware; there are plenty of used E5500 systems that should fit your needs.
 

jonaspaulo

Member
Jan 26, 2015
Hi,

Thanks a lot for the tips. Yes, I was about to make that correction on the Avoton: it doesn't support VT-d. And you are right, 16GB sticks are also expensive.

I think my problem is that I want the most versatile server I can get, and that is a little difficult to achieve.
Also, I have been reading that FreeNAS sometimes behaves oddly in a VM environment, so at this point, since I want reliability and hassle-free storage, I guess the best route is to have one dedicated machine just for storage and another for the VMs. The storage one can be the Avoton with 32GB of RAM, which I think will be enough, but if I compare the full price of that system to, let's say, a Synology, it shouldn't be too far off, and with the latter I have fewer headaches than with a FreeNAS box.



Regarding used hardware, I am not very keen on it. But if a good opportunity arises from a trustworthy source, I might consider it.
 

CreoleLakerFan

Active Member
Oct 29, 2013
Consider running Plex in a separate VM. Cyberjock is going to be rude enough towards you for running FreeNAS in a VM; forget about any plugin support from the FreeNAS community if you start nesting jails inside of a VM.

:D

The C2750D4I will have a much lower idle power draw vs the X10SL7, but the lack of VT-d will be a killer for virtualizing your NAS. Also keep in mind that dual-CPU boards have higher idle power consumption than single-CPU boards, even when populated with only a single CPU.

I completed a mini-ITX FreeNAS project about 15 months ago, but am already upgrading due to limited I/O expansion. I caught the InfiniBand bug, but the single slot in my S1200KP is occupied by an M1015 adapter. I currently have one NAS serving my home and a second serving my lab - my project is to roll them both into VMs on a single hardware platform. Going with used hardware is going to save me at least $400-500 ... give it some thought.

One last knock against that C2750D4I ... those Marvell SE controllers are not well supported under FreeBSD 9.x, which the current releases of FreeNAS are based upon. I believe there are also issues with Solaris (napp-it, Nexenta). You stated you were planning six spinning drives - there are six SATA ports available via the SoC, but again, this hardware platform would limit future expansion possibilities for your NAS: adding L2ARC and/or ZIL/SLOG would be a sketchy proposition.
 
  • Like
Reactions: jonaspaulo

jonaspaulo

Member
Jan 26, 2015
Hm, I see. So the Avoton is out the window as well, lol.
Maybe the best route is the Synology for the storage part (although I would have to drop to 4 bays) to keep it in an "affordable" price range. And then I should pursue the Xeon path for the other VMs?
 

lundrog

Member
Jan 23, 2015
Minnesota
vroger.com
Off the top of my head, I would do two or three used R710 or C100 servers with 96/144GB of RAM. Then build a NexentaStor, Nexenta, or FreeNAS box etc. out of a white box, OR buy a couple of DAS boxes to chain off the ESXi servers.
 

CreoleLakerFan

Active Member
Oct 29, 2013
I'd never recommend an off-the-shelf NAS over DIY for someone comfortable with building systems. While the ASRock C2750D4I platform limits expansion capacity, a Synology eliminates it completely and will not perform nearly as well, all for significantly more money! :) And it isn't so much the Avoton as it is the fact that you only find them on mini-ITX-sized boards - that is the real limiting factor.

If you need a separate platform for NAS and VMs, how about a low-power Xeon for your NAS? My NASes are currently hosted on a Pentium G550 and an i3-2310M, migrating to a socket 1156 Xeon L3426. The need for VT-d means I'm doing it sort of backwards ... a low-power Xeon for the ESXi box which will host the NAS VMs, and a pair of Supermicro C2750-based systems for my other VMs.

If you aren't running 24/7 and can tolerate the power draw and physical characteristics, the value in purchasing used data-center-grade equipment cannot be overstated. For me, though, power and size efficiency matter, and those are relatively new things, so used gear is off the table. For what I paid for my Supermicro C2750 servers in 11/2013 I could have gotten two C6100s, but I would have had nowhere to put them. I definitely pay a premium for small-footprint, power-efficient gear; the upside is that I don't worry as much about leaving them running.
 

jonaspaulo

Member
Jan 26, 2015
Hi Creo,
But with a machine dedicated to the NAS, I don't need VT-d, since I would run FreeNAS directly on the hardware without any virtualization.
Regarding the VMs, yes, I agree, I can use the Avoton boards. Too bad about the cost of the 16GB sticks, as mentioned before.
I also need to keep it low-profile, and I don't have the physical space for 1U or 2U equipment at home.
 

CreoleLakerFan

Active Member
Oct 29, 2013
Hi Creo,
But with a machine dedicated to the NAS, I don't need VT-d, since I would run FreeNAS directly on the hardware without any virtualization.
Very true. I had always heard of people running NAS as VMs, but it is only recently that I found a reason to pursue the same.

I have a NAS (HomeNAS) that stores documents, software, media, etc. I recently built a second NAS (LabNAS) which I use to serve high-speed, low-latency block storage to my lab hosts. Both are built on mini-ITX platforms: HomeNAS has ECC and eight consumer drives in RAIDZ2, but lacks ZIL, L2ARC, and InfiniBand. LabNAS has an SLC ZIL, an MLC L2ARC, four mirrored enterprise drives, and InfiniBand, but no ECC. Neither has any further capacity for drive expansion.

I would like LabNAS to have ECC and six drives in RAIDZ2, but having built it on a mini-ITX platform, there is no upgrade path to that. I would like HomeNAS to have InfiniBand and 11 drives in RAIDZ3, but having built it on a mini-ITX platform, there is no upgrade path to that either. Instead of upgrading two sets of hardware, I will be rolling both into a single hardware platform and using DirectPath I/O to pass the drive controllers through to the VMs. This provides savings in hardware cost, and virtualizing the hardware reduces the complexity of managing the somewhat esoteric IP-over-InfiniBand configuration on multiple software platforms.


Regarding the VMs, yes, I agree, I can use the Avoton boards. Too bad about the cost of the 16GB sticks, as mentioned before.
I also need to keep it low-profile, and I don't have the physical space for 1U or 2U equipment at home.
Many people overestimate their need for RAM in a virtualized home lab. ESXi does a fantastic job of memory deduplication. You may find that you can get by with less than 64GB per VM host.
 

Hank C

Active Member
Jun 16, 2014

which NAS platform do you use?
 

CreoleLakerFan

Active Member
Oct 29, 2013
I have NexentaStor CE serving block storage to the lab, as it has VAAI and I'm prepping for the VCP. However, I am going back to OmniOS + napp-it as soon as all of the new server components arrive. I will likely reclaim some hardware from the current platforms and throw up a couple of temporary NexentaStor CE boxen to play with Storage vMotion, then later use them as Gluster bricks in an OpenStack deployment.

I currently have media and documents hosted on FreeNAS 9.3 with ZFS. I am planning on migrating that data to SnapRAID due to its lighter resource requirements.
 

mrkrad

Well-Known Member
Oct 13, 2012
Do you find FreeNAS has 100% uptime reliability with ESXi under duress? I thought it was one of those 95% jobs where it fails now and then?
 

markarr

Active Member
Oct 31, 2013
Do you find FreeNAS has 100% uptime reliability with ESXi under duress? I thought it was one of those 95% jobs where it fails now and then?
I found I did not have very good reliability with FreeNAS in ESXi; it caused a couple of PSODs. I was using a Supermicro board with an Opteron processor and ECC RAM, with an IBM 1050 passed through to the VM.
 

markarr

Active Member
Oct 31, 2013
If you are just doing generic file shares and VM storage, then a lot of people on here use napp-it/OmniOS; you have to pay to get more features.
The free version of NexentaStor only allows 18TB of HDD space, but has more features.

Both are supported as a VM, and you won't get ridiculed on their forums, vs. FreeNAS where, as @CreoleLakerFan said, Cyberjock will point you to the thread showing why you shouldn't virtualize your NAS.

I am running FreeNAS on an ASRock Q1900M with 12 spinning drives for media and Plex, and then an ESXi host with an LSI 9212-4i4e and 4 SSDs in RAID 10. I have spare hardware lying around, so I play around with my setup and change it every couple of months.
 

jonaspaulo

Member
Jan 26, 2015
Hi markarr,

Regarding the NAS OS, I just settled on FreeNAS because I know a bit about the OS, but I am open to suggestions. I will also look into those when I have the hardware.
When you say NexentaStor only allows 18TB, is that the total sum of the disks or the usable space? (In my setup, 6x 4TB in RAIDZ2 will give me 24TB total but only 16TB usable.)
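
Rough math behind those numbers (ignoring the TB vs TiB difference and ZFS overhead):

```python
# RAIDZ2 capacity sketch: raw space minus two drives' worth of parity.
drives, drive_tb, parity = 6, 4, 2          # 6x 4TB in RAIDZ2
raw_tb    = drives * drive_tb               # 24 TB "on the label"
usable_tb = (drives - parity) * drive_tb    # 16 TB before metadata/slop
print(raw_tb, usable_tb)                    # 24 16
```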

I have noticed the bashing on the FreeNAS forums of people who go the VM route.

The NAS will only be used for simple iSCSI/NFS shares to 2 laptops and the Plex server, wherever it ends up residing inside the host. Therefore I am not sure what the best route is here, NAS-OS-wise.

Regarding my setup I still have some doubts.
My plan is to leave the LSI in HBA mode, dedicated only to the NAS OS VM, and then run all the other VMs from the SSD datastore (maybe I will go with the Crucial 512GB SSD to have some more room).
And an 8GB USB drive to run ESXi.

Am I thinking correctly over here?



Also, I got some feedback to choose the E5-1620 v2 instead of the E5-2620 v2 (more GHz vs more cores).
Besides FreeNAS, which isn't very CPU-dependent I guess, the "main" VMs (the virtualized routers) will run with 1GB RAM and 2 vCPUs each, and that was why I was leaning towards more cores versus more speed. Is this correct?



Thanks
 

markarr

Active Member
Oct 31, 2013
Not sure what NexentaStor uses for the 18TB limit. I would look at napp-it for just a file/block share OS.

For the NAS OS VM setup you're on the right track. As for your VM datastore, what were you thinking? The Crucial SSDs falsely state that they have power-loss protection; if you are looking for that, then I would look at the Intel 3500/3700.

What kind of routers are you running? It depends on the OS, but some of them cannot take advantage of multi-threading, so giving them 2 vCPUs would actually slow them down. Plus, you have to remember that the hypervisor schedules the vCPUs, so if you have too many VMs with multiple cores you run into scheduling issues, because ESXi doesn't execute them out of order.
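
For a sense of scale, here is the overcommit ratio the router fleet alone would produce on the two CPUs discussed above (core counts from Intel's specs; the non-router vCPU count is just an example, and whether the ratio actually hurts depends entirely on how busy the routers are):

```python
# Rough vCPU : physical-core overcommit check for the proposed builds.
# E5-1620 v2 = 4 cores / 8 threads, E5-2620 v2 = 6 cores / 12 threads;
# hyper-threads are not counted as full cores here.
router_vcpus = 10 * 2          # ten router VMs at 2 vCPUs each
other_vcpus  = 2 + 2           # FreeNAS VM + one test VM, as an example
total_vcpus  = router_vcpus + other_vcpus

for cpu, cores in (("E5-1620 v2", 4), ("E5-2620 v2", 6)):
    print(f"{cpu}: {total_vcpus} vCPUs on {cores} cores -> "
          f"{total_vcpus / cores:.1f}:1 overcommit")
```

High ratios are fine for mostly idle VMs; it's the multi-vCPU VMs that all want to run at the same time that trigger the co-scheduling pain described above.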