ZFSguru in Hyper-V


Synthetickiller

New Member
Jul 16, 2011
26
0
1
I haven't found a lot of info on running a ZFS system in a VM. I wanted to do this so I don't have to build two physical systems.

I have enough hardware to run it without a performance hit:
C204 motherboard, Xeon E3 1270, 8 or 12gb DDR3, 6 sata ports.

I wanted to set up a 4-drive RAID-Z using the onboard SATA 3Gb/s ports. SATA 6Gb/s isn't necessary, as my network is only 1Gbps and that will be the bottleneck.

Does anyone have experience setting this up, or a good walkthrough? I am new to RAID storage and have limited experience with VMs in Server 2008 R2.

Any help is most appreciated.
 

nitrobass24

Moderator
Dec 26, 2010
1,087
131
63
TX
Well, Solaris isn't a supported Hyper-V guest OS, so that is probably why you don't see anything about it.

That doesn't mean it can't be done, though. Also, ESXi is currently the only hypervisor to support VT-d for controller pass-through.
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
For that amount of storage, go to http://www.vmware.com, download ESXi, get the license, and install it.
Next, get the vSphere Client and install that.
After that is complete, do the ZFSguru installation in the ESXi instance from the vSphere Client onto the OS disk.

With ZFS you typically want the storage pool disks passed through to the VM on a separate controller.
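Once the controller (and its disks) are visible inside the VM, the pool creation itself is a one-liner. A sketch, assuming a pool named "tank" and FreeBSD-style device names as ZFSguru would see them (substitute your own disks):

```shell
# Create a 4-disk RAID-Z1 pool named "tank" (device names are examples;
# use the disks on the passed-through controller)
zpool create tank raidz ada0 ada1 ada2 ada3

# Verify the pool layout and health
zpool status tank
```
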
 

Synthetickiller

New Member
Jul 16, 2011
26
0
1
Patrick said:
For that amount of storage, go to http://www.vmware.com, download ESXi, get the license, and install it.
Next, get the vSphere Client and install that.
After that is complete, do the ZFSguru installation in the ESXi instance from the vSphere Client onto the OS disk.

With ZFS you typically want the storage pool disks passed through to the VM on a separate controller.
Thanks. I wanted to have an idea of what I'm looking for before starting this project. I'm waiting on ram, so I can't start today.

I'm trying to figure out the most cost-effective way to go about this, but something that is manageable. I plan on having four 3TB drives in RAID-Z (so, about 10TB usable, right?).
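As a quick sanity check on that usable-capacity figure: a RAID-Z1 vdev spends one drive's worth of space on parity, so four 3TB drives give roughly 9TB raw, which is closer to ~8.2TiB once vendor decimal TB are converted to binary TiB (and before ZFS metadata overhead). A back-of-envelope calculation, with a hypothetical helper function:

```python
# Back-of-envelope RAID-Z1 capacity check (hypothetical helper, not a ZFS tool).
def raidz1_usable_tb(n_drives: int, drive_tb: float) -> float:
    """RAID-Z1 reserves one drive's worth of capacity for parity."""
    return (n_drives - 1) * drive_tb

raw_tb = raidz1_usable_tb(4, 3.0)               # 9.0 vendor ("decimal") TB
tib = raw_tb * 1e12 / 2**40                     # convert decimal TB to binary TiB
print(f"{raw_tb:.1f} TB raw, ~{tib:.1f} TiB")   # -> 9.0 TB raw, ~8.2 TiB
```
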

If the cost of a RAID card is around the same as a low-end board & CPU that supports ECC, that might be better. I'm not sure yet.

Edit:

For the cost of a low-power rig sitting next to my server, I can just put that money into a RAID card instead.

When setting up ZFS in the VM, should the RAID card be set to JBOD mode in its BIOS?
 
Last edited:

dswartz

Active Member
Jul 14, 2011
610
79
28
Preferable (if possible): reflash the card to IT firmware, which turns it into a simple HBA. Pass that HBA through using VT-d. Put a cheap, small HDD on a motherboard SATA port for the ESXi local datastore, and install ZFSguru on that. Disclaimer: VMware does not support VT-d passthrough for HBAs, although many people have done this successfully.
 
Last edited:

Synthetickiller

New Member
Jul 16, 2011
26
0
1
Thanks guys.

I have a ton of RAM on the way, some 1333 and some 1066, with more of the 1066 (I can at least get higher-density sticks).

Think I'll take a noticeable performance hit? I'll have more RAM at 1066MHz than I can use (probably 32GB) instead of 16GB at 1333MHz.
 

dswartz

Active Member
Jul 14, 2011
610
79
28
People report (and I can confirm) near-native I/O performance.

A suggestion: the SAN VM (in your case ZFSguru, though I would prefer OpenIndiana or Solaris 11 Express, both free) will need to export a share to ESXi for the "SAN datastore" the other VMs will live on. That can be NFS or iSCSI; NFS is recommended, since it seems more stable with ESXi. If you do this, make sure to set the sync property on the share to disabled, so writes to it from ESXi are not stuck waiting for the disk write to finish. This is easy with Solaris; I'm not so sure about ZFSguru (whose dev just now returned from a break, but who knows if that will recur?).
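The export-and-sync setup described above can be sketched in a few ZFS commands. The dataset and pool names here are placeholders; note that sync=disabled trades crash consistency on that dataset for write speed:

```shell
# Create a dataset for the ESXi NFS datastore and export it over NFS
# (pool/dataset names are examples)
zfs create tank/esxi-store
zfs set sharenfs=on tank/esxi-store

# Disable synchronous write semantics so ESXi's NFS writes are not gated
# on disk flushes -- this improves speed at the cost of crash safety
zfs set sync=disabled tank/esxi-store
```
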