Ok, noob questions... ESX and FreeNAS


spyrule

Active Member
Ok,

So here's my current hardware.

This is built "on the cheap" for now, and will eventually get replaced with proper server grade hardware.

AMD A8-3870 3.3 GHz (quad-core APU)
ASUS F1A75-M Pro motherboard
8 GB DDR3 (will be going to 20 GB within a week or two)
750 W PSU (80 Plus)
Lian-Li PC-V354 case (7 x 3.5" internal HDD bays in a microATX form factor)

Network:
Cisco SG200-08 8-port smart managed gigabit switch
ASUS RT-N66U running Tomato


So I originally built this system with the intent that it would be a FreeNAS NAS and that was it.

However, with some new employment I've recently been using ESX 5.1-5.5 and Windows Server 2012, so I am upgrading my skills to match on my own time.

My new setup needs to hit a few specific goals:

1) Must allow ESXi and VMs to be set up correctly.
2) Must have a resilient drive system to greatly reduce the risk of data loss (for the wife's backup plan; she cannot use cloud services for legal reasons).
3) Would like to offer Time Machine functionality for the wife's MacBook Pro backups.

So with this in mind I decided to upgrade this existing NAS as cheaply as possible to work with ESX.

The only thing that wasn't working was the on-board RAID controller, which couldn't build a bigger array across the drives. So I decided to buy a RAID card.

I got this:

LSI MegaRAID 84016E 16-port RAID controller with 256 MB RAM and BBU, plus 2 x mini-SAS SFF-8087 to 4 x SATA3 breakout cables ($155 total after shipping).
I also decided that my 3-4 year old 1 TB WD Greens were probably on their last legs, so I purchased 4 x WD Red 2 TB drives (two pairs from different manufacturing batches) and 2 x Seagate NAS drives.

My plan was to run this in RAID 6 (about 7.5 TB total usable space) so that I could run a file server letting my wife's business back up some of her data to my server (not perfect, I know, but it's a brand-new business, so very little money ATM).
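(For reference, RAID 6 usable capacity works out to (n - 2) x drive size: with six 2 TB drives that's 4 x 2 TB = 8 TB raw, which shows up as roughly 7.3 TiB once formatted - hence the ~7.5 TB ballpark.)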

So I got to building the ESX box, had a few minor hiccups with the RAID card, but eventually got it all sorted out. I couldn't get the ESX installer to see my SD card, so I installed it on a 30 GB SSD that I had lying around.

Because of this, I ordered a SATA-to-SD-card adapter ($9) that will allow me to use one of the motherboard SATA ports as a bootable SD slot, freeing up my SSD so it can be used as host cache for VMs if needed.

I also decided to order an Intel PRO/1000 PT dual-port NIC ($20) to allow for better network support within ESXi (it's PCIe, so I figured it should be useful for quite some time, plus it's on the VMware HCL).


Alas, I come to my conundrum....

So I played a bit with ESX, all was good, and then I got around to setting up a file server for backups and general storage - and THEN realized that FreeNAS prefers RAIDZ. It is quite clearly and strongly suggested NOT to run a RAIDZ array on top of a hardware RAID array (which makes perfect sense to me). However, the 84016E does NOT support JBOD, and I have yet to find an IT-mode firmware flash for it.
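For context, this is roughly what FreeNAS builds when it's handed raw disks - a minimal sketch, with device names (da0-da5) as placeholders rather than my actual layout:

Code:
# RAIDZ2 pool across six bare disks - the ZFS analogue of RAID 6
zpool create tank raidz2 da0 da1 da2 da3 da4 da5

# ZFS talks to each physical disk directly, so its checksums (and the
# drives' SMART data) stay usable for detecting and repairing errors -
# exactly the visibility a hardware RAID layer hides
zpool status tank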

So here's my question:

WITHOUT spending more money on hardware...

Which scenario makes more sense:

A) Rebuild my arrays, but only allocate 2-3 TB for ESXi and leave the remaining space un-RAIDed. Set up FreeNAS as a VM in ESXi and pass the LSI's unused space through for ZFS.
B) Install ESX, create a FreeNAS VM, pass through the ENTIRE 7.5 TB of space to create one large pool, then share that storage as an iSCSI target and have the remaining ESX VMs use the iSCSI LUN as storage (see the sketch after this list).
C) Don't use the RAID card at all, switch back to the on-board SATA ports, and set up the rest as per B.
D) Bite the damn bullet, sell the 84016E, buy an HBA (an M1015, for example), and then set up as per option A.

E) Say "F#$% it", create one large 7.5 TB RAID 6 array, and install FreeNAS with its storage as just another VMDK on top of the RAID 6 array.
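For what it's worth, the ESXi side of option B would just be the software iSCSI initiator pointed at the FreeNAS target - a rough sketch, with the adapter name and portal IP as placeholders:

Code:
# Enable the ESXi software iSCSI initiator
esxcli iscsi software set --enabled=true

# Point dynamic discovery at the FreeNAS portal (address is an example)
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.1.50:3260

# Rescan so the new LUN shows up as a datastore candidate
esxcli storage core adapter rescan --adapter=vmhba33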


Suggestions and opinions are welcome.

Please don't bother with the "buy a bunch of server-grade hardware" replies. When I can afford it, I will.

- Spyrule
 

DBayPlaya2k3

Member
For your situation I personally would go with E.

Since you are using ESXi you have other VMs that you will need to consider as well.

As far as FreeNAS goes, it knows no different than that it's sitting on top of a RAID 6 array, as that is transparent to the VM - unless you tried to pass through the raw disks as RDMs (which I would not recommend for your situation).

One of the benefits of virtualization is that you have the flexibility to do option E, and the VM is none the wiser.



Also, FYI: make sure you do not allocate the ENTIRE datastore to FreeNAS - leave some space behind (I generally go with 10-15% minimum).

You will have problems with snapshots if you use the entire datastore for one VM (if you are planning on using the snapshot function).
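On a ~7.5 TB datastore that 10-15% headroom works out to roughly 0.75-1.1 TB left unallocated, so the FreeNAS VMDK would top out around 6.4-6.75 TB.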
 

spyrule

Active Member
Are you referring to napp-it?

I'm finding lots of threads talking about Gea's all-in-one, but from what I can tell I can't find the actual original thread.

Also, John, do you know any place in Ottawa where I can buy an M1015 locally? (I really wanted to get this server up and running this weekend.)

Thanks!
 

spyrule

Active Member
Actually, I was just reading about this as an option:

Set up each drive as its own VD within the RAID controller, and then hand each of those single-drive VDs to a RAIDZ setup. From what I understand, RAIDZ can then still take advantage of the RAID controller's memory and BBU (256 MB and a 680 mAh BBU).

I know I completely undermine the point of the RAID card at this point, but it would save me having to buy another HBA for now.
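Assuming the 84016E takes standard MegaCLI commands (I haven't verified on this exact card), the one-VD-per-disk setup would look something like this:

Code:
# Create one single-drive RAID 0 VD per physical disk, write-back cache
# enabled so the card's 256 MB of RAM and the BBU still get used
MegaCli -CfgEachDskRaid0 WB RA Direct CachedBadBBU -aALL

# List the resulting virtual drives to confirm
MegaCli -LDInfo -Lall -aALL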
 

vegaman

Member
I'd avoid that if you can. The RAID controller will write its own metadata to the disks, so you have to make sure it doesn't get overwritten, and the pool might not be recognised without the controller. You also lose access to SMART.
 

Mike

Member
You can do raw device mappings, without VMFS underneath them, to get almost the same as you would with passthrough. It's not the end of the world.
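On ESXi 5.x that would be something like the sketch below - the naa ID and paths are placeholders, and note that RDMs of local SATA disks are unsupported, so the mapping file usually has to be created from the shell rather than the vSphere client:

Code:
# Create a physical-compatibility RDM pointer file for a local disk
vmkfstools -z /vmfs/devices/disks/naa.XXXXXXXXXXXXXXXX /vmfs/volumes/datastore1/freenas/disk0-rdm.vmdk

# Then attach disk0-rdm.vmdk to the FreeNAS VM as an existing disk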
 

mrkrad

Well-Known Member
Honestly - ESXi isn't designed for DAS at all. The best features (SIOC/NIOC) really work out if you present storage as a LUN over FC/NFS/iSCSI - seriously.

Set up VSAN/Nexenta/LeftHand, whatever it takes - run your VMs thick (VAAI eager-zeroed) with SIOC/NIOC and watch your VMs fly. Try to run the same number of VMs on DAS and watch fair sharing rip apart your disk I/O performance.

Hell, do a DAS-to-DAS vMotion with a thin-provisioned 1 TB VM. I bet it will take 10 times longer than a VAAI thick-provisioned setup.
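For reference, "VAAI eager-zeroed" means an eager-zeroed thick VMDK; from the CLI that's something like this (size and path made up for illustration):

Code:
# Create an eager-zeroed thick disk up front; on VAAI-capable storage the
# zeroing is offloaded to the array instead of being written by the host
vmkfstools -c 100G -d eagerzeroedthick /vmfs/volumes/datastore1/vm1/vm1-data.vmdk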

true.
 

Mike

Member
mrkrad said:
Honestly - ESXi isn't designed for DAS at all. The best features (SIOC/NIOC) really work out if you present storage as a LUN over FC/NFS/iSCSI - seriously.

Set up VSAN/Nexenta/LeftHand, whatever it takes - run your VMs thick (VAAI eager-zeroed) with SIOC/NIOC and watch your VMs fly. Try to run the same number of VMs on DAS and watch fair sharing rip apart your disk I/O performance.

Hell, do a DAS-to-DAS vMotion with a thin-provisioned 1 TB VM. I bet it will take 10 times longer than a VAAI thick-provisioned setup.

true.
It won't matter - he can still use all the fancy features of ESX by going the NFS/iSCSI route after he's passed his DAS storage to that VM. PCIe passthrough or a raw mapping: both are unsupported and crap from an enterprise point of view, but it won't limit your all-in-one.
 

spyrule

Active Member
Well, here's my current train of thought after looking at this situation further (I honestly did very minimal pre-planning, so it's my fault, and I know it):

Instead of an ESX server, I'm going to attempt to set up the same server with a rolled XPEnology DSM 4.2 build, since it contains everything I want minus the VM hypervisor (the main item I need is a local Time Machine backup for my wife). For the next few weeks I'll use my desktop (Core i7, 24 GB RAM) to run the VMs I need for learning via VirtualBox using VMDKs, and then in a few weeks I might buy a cheap Dell server (I can get a PE2950 for $200 locally) just to host ESX. Although I might hold off for a month or two and see if I can find a C6100 for cheap (less than $800 after shipping) if I can swing it.

This will let me emulate ESXi -> iSCSI/NFS with real hardware, and I might run a quiet copy of an ESXi host in a VM on my workstation (which currently isn't heavily used and can afford a full-time VM if I want it); that way I can learn how to do proper vMotion and vStorage (and any other multi-server scenarios) for learning purposes.

Side question: Anybody know how loud a Dell C6100 is?
 

Mike

Member
$200 for that heater may be a tad much. I'm sure the people over here know of a slightly newer cheap alternative...
Aren't there single-node C6100s, or comparable HPs or IBMs with socket 1366 gear?
 

britinpdx

Active Member
spyrule said:
Side question: Anybody know how loud a Dell C6100 is?
Well, that depends on the configuration, the load, and how far away you are. Not to mention that my "unacceptable" level may well be "acceptable" for others.

I've run my C6100 in the "standard" configuration with 4 nodes operational and the stock fans, and it's rather loud. Too loud for my liking, but I'm coming from a "quiet, water-cooled workstation for home use" background, not an enterprise background. The 4 internal fans are sized for the worst case: 4 nodes (possibly with mezzanine and PCIe cards), 8 CPUs, 48 DIMMs, and 12 SAS drives. Not an issue for the enterprise; quite likely an issue for home use.

I've also run my C6100 with only 2 nodes populated, with L5639s and Supermicro SNK-P0038P passive 2U heatsinks (a cheat, as I don't install the "upper" nodes, which allows the 2U heatsinks to fit) and Evercool fans (see PigLover's Taming the C6100 thread), and the result is whisper quiet at idle and "very acceptable" under load. I sit 3' from the C6100 in a rack, so I consider myself to be on the far left of the "acceptable" noise curve.