Ok,
So here's my current hardware.
This is built "on the cheap" for now, and will eventually get replaced with proper server grade hardware.
AMD A8-3870 3.3GHz quad-core APU
ASUS F1A75-M PRO motherboard
8 GB DDR3 (will be going to 20GB within a week or two).
750W 80 Plus PSU
Lian-Li PC-V354 case (7 x 3.5" internal HDD bays in a MicroATX form factor)
Network:
Cisco SG200-08 8-port semi-managed gigabit switch
ASUS RT-N66U running Tomato
So I originally built this system with the intent that it would be a FreeNAS NAS and that was it.
However, with some recent new employment I've been working with ESX 5.1-5.5 and Windows Server 2012, so I am upgrading my skills to match on my own time.
My new setup needs to hit a few specific target goals:
1) Must allow ESXi and VMs to be set up correctly.
2) Must have a resilient drive system to greatly reduce the risk of data loss (for my wife's backup plan; she cannot use cloud services for legal reasons).
3) Would like to offer Time Machine functionality for my wife's MacBook Pro backups.
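On goal 3: FreeNAS supports this out of the box through its AFP (netatalk) service; under the hood the GUI generates something along the lines of the afp.conf sketch below. The share name and dataset path here are placeholders, and the size cap is optional but worth setting so Time Machine doesn't eat the whole pool:

```ini
; afp.conf sketch -- FreeNAS writes this for you from the GUI;
; [TimeMachine] and the path below are placeholder names
[Global]
  mimic model = TimeCapsule6,106   ; advertise the server as a Time Capsule

[TimeMachine]
  path = /mnt/tank/timemachine     ; hypothetical dataset for TM backups
  time machine = yes               ; enables Time Machine support on this share
  vol size limit = 512000          ; cap backup growth at ~500GB (value in MiB)
```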
So with this in mind I decided to upgrade this existing NAS as cheaply as possible to work with ESX.
The only thing that wasn't working was using the on-board RAID controller to build a bigger array of drives, so I decided to buy a RAID card.
I got this :
LSI MegaRAID 84016E 16-port RAID controller with 256MB cache and BBU, plus 2 x mini-SAS (SFF-8087) to 4 x SATA breakout cables ($155 total after shipping).
I also decided that my 3-4 year old 1TB WD Greens were probably on their last legs, so I purchased 4 x WD Red 2TB drives (two pairs from different manufacturing batches) and 2 x Seagate NAS drives.
My plan was to run these in RAID 6 (7.5TB total usable space), so that I could run a file server that lets my wife's business back up some of her data to my server (not perfect, I know, but it's a brand-new business, so very little money ATM).
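For the curious, the usable-space math works out roughly as quoted. A quick sketch, assuming all six drives are 2TB (the Seagate capacities aren't stated above):

```python
# RAID 6 keeps (N - 2) drives' worth of data; the other two hold parity.
# Drives are sold in decimal TB (10**12 bytes), but the OS reports
# binary TiB (2**40 bytes), which is where the "missing" space goes.
drives = 6
drive_tb = 2.0                           # per-drive capacity in decimal TB
usable_tb = (drives - 2) * drive_tb      # 8.0 TB of raw usable space
usable_tib = usable_tb * 10**12 / 2**40  # ~7.28 TiB as reported by the OS
print(f"{usable_tb} TB decimal, {usable_tib:.2f} TiB reported")
```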
So I got to building out ESX. I had a few minor hiccups with the RAID card, but eventually got it all sorted out. I couldn't get the ESX installer to see my SD card, so I installed it on a 30GB SSD that I had left over.
Because of this, I ordered a SATA to SD card adapter ($9) that will let me use one of the motherboard SATA ports as a bootable SD slot, freeing up the SSD to be used as host cache for VMs if needed.
I also ordered an Intel PRO/1000 PT dual-port PCIe NIC ($20) for better network support within ESXi (it's PCIe, so it should be useful for quite some time, plus it's on the VMware HCL).
Alas, I come to my conundrum....
So I played a bit with ESX, all was good, and then I got around to setting up a file server for backups and general storage, and THEN realized that FreeNAS prefers RAID-Z. It is strongly recommended NOT to run RAID-Z on top of a hardware RAID array (which makes perfect sense to me). However, the 84016E does NOT support JBOD, and I have yet to find an IT-mode firmware flash for it.
So here's my question:
WITHOUT spending more money on hardware...
Which scenario makes more sense:
A) Rebuild my arrays, but only allocate 2-3TB for ESXi and leave the remaining space un-RAIDed. Set up FreeNAS as a VM in ESXi and pass the LSI's unused space through so ZFS can be put on it?
B) Install ESX, create a FreeNAS VM, pass through the ENTIRE 7.5TB of space to create one large pool, then share that storage as an iSCSI target and have ESX's remaining VMs use the iSCSI drive as storage?
C) Don't use the RAID card at all, switch back to the on-board SATA ports, and set up the rest as per option B.
D) Bite the damn bullet, sell the 84016E, buy an HBA (an M1015, for example), and then set up as per option A.
E) Say "F#$% it", create one large 7.5TB RAID 6 array, and install FreeNAS with its storage as a VMDK on top of the RAID 6 array.
Suggestions and opinions are welcome.
Please don't bother with the "buy a bunch of server-grade hardware" replies. When I can afford it, I will.
- Spyrule