Feasible setup for WHSv2?


NfiniteZERO

New Member
Feb 2, 2011
9
0
0
Looking for some input on the software side of things, namely how feasible it is to make ZFS and WHSv2 play nicely with each other. Sorry in advance if I don't come across clearly, as I'm still learning about all this.

Basically, I want to leverage ZFS to handle the storage end of the house in lieu of hardware RAID, with the storage forwarded to a VM (in the same physical box) running WHSv2, either through the hypervisor or over iSCSI.

Wishful thinking or hopeless pipe dream?
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
This is one of those things that I have been working on for a while. Hyper-V is a no-go, so you are basically going to want to try Xen or ESXi.

What you really want is for ZFS to handle storage, and you want the WHS V2 application layer. That is a pretty understandable goal. FlexRAID may end up being a decent alternative. The whole iSCSI-sharing-between-VMs approach works, but it really isn't optimal. This is a side project of mine, so feel free to keep us updated with your results and I will do likewise.
 

NfiniteZERO

New Member
Feb 2, 2011
9
0
0
Thanks for the reply, Patrick. I'll try to play around on my sandbox rig when I get the chance.

Here's the web page that got the gears going - http://hub.opensolaris.org/bin/view/User+Group+qosug/zfs_iscsi_integration
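If I'm reading that page right, the gist on the Solaris side is to carve a zvol out of the pool and publish it as an iSCSI LUN (via COMSTAR on newer builds), then point the WHSv2 initiator at it. A rough, untested sketch of those steps (the pool/zvol names are made up, and the Python wrapper is just there to show the order of the commands):

# Untested sketch: publish a ZFS zvol as an iSCSI LUN via COMSTAR on an
# OpenSolaris/OpenIndiana-style storage box. Pool and zvol names are placeholders.
import subprocess

def run(cmd):
    # Echo each command so the sequence of steps is visible.
    print("+", " ".join(cmd))
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# 1. Carve a 500GB zvol out of the pool for WHSv2 to consume.
run(["zfs", "create", "-V", "500G", "tank/whs2-lun0"])

# 2. Make sure the COMSTAR iSCSI target service is running.
run(["svcadm", "enable", "-r", "svc:/network/iscsi/target:default"])

# 3. Register the zvol as a SCSI logical unit; grab the GUID from the output
#    (assumes the GUID is the first column of the last line sbdadm prints).
out = run(["sbdadm", "create-lu", "/dev/zvol/rdsk/tank/whs2-lun0"])
guid = out.strip().splitlines()[-1].split()[0]

# 4. Expose the LU to all initiators (a real setup would use host/target groups)
#    and create an iSCSI target for the WHSv2 initiator to log in to.
run(["stmfadm", "add-view", guid])
run(["itadm", "create-target"])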

Something I've been thinking about: how about a dual-box setup, with the WHSv2 box connecting to a Solaris box through a dedicated router for iSCSI traffic, and another port on the WHSv2 box serving the client boxes?

I'll give FlexRAID a look while I'm at it.
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
That is possible; however, you may want to do something like a point-to-point 10GbE connection between the two boxes.
 

nitrobass24

Moderator
Dec 26, 2010
1,087
131
63
TX
Patrick, my whole RAID debacle got me thinking about this as well. Why isn't ZFS on Hyper-V working well for you, or was it the iSCSI and VMs?
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
ZFS basically means FreeBSD or a Solaris variant, and Hyper-V is bad for this. I tried installing FreeNAS in Hyper-V, but performance with the legacy NIC was poor.

On the iSCSI thing, I understand the desire. Having a different chassis is rough because if it goes offline, the WHS (V1) instance thinks the world has ended. Personally, I think Nexenta is pretty awesome for this.

BTW Nitro, depending on how early I get home tonight, I can see whether I'm able to rehab my 1680LP. If it works, maybe I can send it as a loaner if you need one.
 

NfiniteZERO

New Member
Feb 2, 2011
9
0
0
I can definitely see where that setup would be a pain. Hrm, too bad I can't get a grant to test these kinds of hardware setups. Worse yet, I've got a budget nazi looking over my shoulder.

Right now, I'm pursuing (at least on paper) using Solaris as the host OS and running WHS as a VM under VMware Workstation. The problem is how to get fast and reliable access to the ZFS stores; 10GbE is still well out of my price range for now. Oi, this is starting to make my head spin. I'm going to stop researching this for a little bit before I make my brain melt.
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
We have a 10GbE thread with some adapter options. My 2x Intel cards for $258 shipped just arrived today.

Do keep us posted though.
 

NfiniteZERO

New Member
Feb 2, 2011
9
0
0
Not a bad deal. Looks like I'm going to hover around eBay to snipe some deals.

Sadly, I've got stuff coming that I have to deal with on the real-life side of the house, so it looks like my time in the lab is going to be limited. If I can get something to work, you'll be the first to hear about it.
 

No1451

New Member
Jan 1, 2011
32
0
0
Weird, I had the same idea as this guy just the other day! How non-optimal would this sort of setup be? Can anyone speculate as to whether it would be fast enough to saturate the theoretical limit of 1GbE? I really don't intend to do any link aggregation in my box; I just want better than the piss-poor 18 MB/s I get currently.
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
When I get back from London early next week I am going to re-purpose one of the servers to see if I can get this going. I really want to see it work also.

18MB/s is really bad though :) Maybe a new thread with the details? (Disclaimer: I am traveling today and this weekend, so I may not be able to respond right away.)
 

No1451

New Member
Jan 1, 2011
32
0
0
Great to hear. I'm not entirely sure HOW to even get this running in a single box, so if you do a write-up that would be pretty great. My performance issue seems to be a combination of 4K-sector drives and really shitty SiI3124-based SATA add-in cards.

I'll be keeping an eye out for this :D
 

jbraband

Member
Feb 23, 2011
44
0
6
NfiniteZERO and/or Patrick, have either of you seen progress on this all-in-one front? I'm mentally moving away from RAID to a setup like what's described here. Just wondering if there are any definitive, acceptable routes to go.

I read through gea's all-in-one guide and it all makes sense, although he suggests using NFS to share the ZFS pools around the virtual switch. Is the thinking that iSCSI traffic will not perform as well on the virtual switch because of increased overhead in how iSCSI traffic flows? My understanding is that Windows OSes cannot use NFS shares (stores?), hence the need for an iSCSI target in the storage VM (OI, S11E, etc.).

I'm itching to pull the trigger on an E3 build but want my ducks in a row first :D

Something like this:

ESXi (type 1 hypervisor)
VM1: OpenIndiana/Solaris 11 Express/etc. with ZFS pool(s) set up as iSCSI target(s)
VM2: WHS2011 with an iSCSI initiator pointed at VM1. I suppose it might even be possible to house the WHS boot partition on the target as well, but that may be overcomplicating things.

VM1 and VM2 are each attached via a dedicated virtual NIC to a 10GbE virtual switch dedicated to iSCSI traffic.

I think it may make more sense to have a dedicated mirrored ZFS pool shared over NFS back to the ESXi host for the boot VMDKs of any VM other than VM1 (the storage VM); rough sketch of that below.

It's that "simple", right? :D
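And purely as a thought experiment, that NFS leg back to the ESXi host might look something like the following. Completely untested; the dataset name, the 10.0.0.x addresses on the storage vSwitch, and the datastore name are all made up:

# Untested sketch of "mirrored pool shared over NFS back to ESXi" for boot VMDKs.
# Dataset, IP addresses, and datastore name are placeholders.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def storage_vm_side():
    # On the OpenIndiana/Solaris 11 Express storage VM: create a dataset and export it,
    # granting root access to the ESXi vmkernel IP so the host can write VMDKs.
    run(["zfs", "create", "tank/esxi-boot"])
    run(["zfs", "set", "sharenfs=rw=@10.0.0.1/32,root=@10.0.0.1/32", "tank/esxi-boot"])

def esxi_side():
    # On the ESXi host (tech support mode / SSH): mount the export as a datastore.
    run(["esxcfg-nas", "-a", "-o", "10.0.0.2", "-s", "/tank/esxi-boot", "zfs-boot"])

if __name__ == "__main__":
    storage_vm_side()   # run this part on the storage VM
    # esxi_side()       # and this part on the ESXi host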
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
I ended up with poor performance again with the iSCSI solution, but I am thinking I will try again on a multi-dodeca-core setup with SBS2011E soon. I'm really just bandwidth-starved on my end between work and trying to keep main-site content flowing five days a week.
 

nitrobass24

Moderator
Dec 26, 2010
1,087
131
63
TX
jbraband,

What are you using as a hypervisor? Hyper-V or ESX?
How many VMs?

Typically you will get better performance with less configuration using NFS.

For ESX it's really a no-brainer.
With iSCSI you have some limitations: a single disk I/O queue, VMFS vs. RDM decisions, zoning, identical LUN IDs across ESX servers, and you can't resize LUNs on the fly.

With NFS all of this goes away: VMDK thin provisioning by default; you can expand or shrink the NFS volume on the fly and see the effect on the ESX server with a click of the refresh button; no VMFS or RDM decisions, no zones, HBAs, or LUN IDs; and no single disk I/O queue, so performance depends strictly on the size of the pipe and the disk array.
You can have a single mount point across multiple IP addresses and use IEEE 802.3ad link aggregation to increase the size of your pipe, whereas with iSCSI you are restricted to 1Gbps unless you have a 10Gbps network (which most people don't).
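To make the resize point concrete, here is a rough, untested sketch (dataset and zvol names are made up): growing the space behind an NFS datastore is a single property change, while growing an iSCSI LUN means growing the zvol and then still dealing with the consumer side.

# Untested sketch of the resize difference; dataset/zvol names are placeholders.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# NFS-backed datastore: bump the dataset quota and you're done; the ESX side
# just needs a datastore refresh to see the new size.
run(["zfs", "set", "quota=4T", "tank/vmstore"])

# iSCSI-backed LUN: grow the zvol first (COMSTAR may also need the LU size
# bumped, e.g. with stmfadm modify-lu -s)...
run(["zfs", "set", "volsize=4T", "tank/whs2-lun0"])
# ...and the consumer still has to notice: rescan and grow VMFS on ESX, or
# rescan disks and extend the NTFS volume inside the guest.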

Hyper-V isn't as complicated as ESX, and you should be able to access NFS shares from Server 2008, considering you can create an NFS share on Server 2008.
 

S-F

Member
Feb 9, 2011
148
5
18
I can't believe I hadn't seen this thread before. This is exactly the functionality I want: the remote access and backups of WHS and the storage features of ZFS. I have been considering just dumping WHS altogether and using ZFS as the main file server. Patrick, if you could get a write-up on how to do this onto the main page, I think it would be earth-shattering for a lot of people.
 

jbraband

Member
Feb 23, 2011
44
0
6
@nitrobass

Well, right now I have nothing but a dream and an unpurchased Newegg shopping cart :D

I have been all over the place planning this build the past couple of weeks. ZFS cropped up in my mind this week and I like what I see. Now I am figuring out the best way to achieve it; then I'll decide whether it's worth it over other architectures.

My current hardware plan is generally an X9SCM-F, E3-1230, 16GB ECC, one IBM M1015, etc.

Last week, the system architecture was along these lines:
Host: Win2008R2 with Hyper-V
VM1: WHS2011
Other VMs to serve basic use cases: remote desktop for browsing, a LAMP stack, OS testing, maybe a local MSSQL/IIS server (i.e., nothing significant)

I was going to RAID 1 a pair of 160GB drives on the onboard controller for the host, and maybe a 1TB RAID 1 for VM VHDs, also on the onboard controller.
Six Hitachi 5K3000s on the M1015 in three RAID 1 arrays, each passed to the WHS2011 VM for storage.

I had gotten past caring about having a large storage pool and having to segregate my media into 2TB chunks. I had also come to terms years ago with the fact that my WHS would have only 50% of its raw storage available (RAID 1 having no disadvantages over DE in this regard). After plenty of reading, this week ZFS became a contender: I can increase the usable-to-raw disk space ratio, utilize two parity drives without buying a $X00 RAID 6 controller, and regain the advantages and conveniences of a single large pool. Not to mention the supported ability to run consumer drives in the ZFS pools instead of running the well-trusted 5K3000s on a RAID controller.
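For reference, the double-parity pool I'm picturing is just a single raidz2 vdev across the six Hitachis. A rough, untested sketch with placeholder device names:

# Untested sketch: one raidz2 vdev over six drives gives two-drive parity
# without a hardware RAID 6 card. Device names are placeholders.
import subprocess

disks = ["c2t0d0", "c2t1d0", "c2t2d0", "c2t3d0", "c2t4d0", "c2t5d0"]
cmd = ["zpool", "create", "tank", "raidz2"] + disks
print("+", " ".join(cmd))
subprocess.run(cmd, check=True)

# Usable space is roughly (6 - 2) x drive size, versus 50% with paired mirrors or DE duplication.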

So, going down the ZFS road, I still want to keep WHS2011 around for the application layer it provides. My understanding from STH front-page articles and posts at [H] is that ZFS is widely supported on BSD/Solaris variants, and that BSD/Solaris and its variants don't play well with Hyper-V (or the other way around). That is when I mentally switched over to thinking about ESXi for this setup.

I was stuck on iSCSI because I didn't think Windows could mount NFS shares for some reason; that was my only reason for talking about iSCSI a couple of posts ago.

I'm sure there's plenty I've left out, and even more that'll confuse the hell out of people who know what they're doing. I can provide whatever else I can to clear up what it is I am talking about. I was going to start a thread in the DIY storage server forum with the build specs, but this thread has sort of hijacked that for the time being.

Here are some of the [H] threads that inspired this new direction: http://hardforum.com/showthread.php?t=1579961, http://hardforum.com/showthread.php?p=1037031715
I want to refine the hardware/virtual architecture before I buy equipment; that seems the smart approach.
 

jbraband

Member
Feb 23, 2011
44
0
6
For what it's worth, my use of the storage is going to be streaming HD MKVs and ISOs to a Sandy Bridge HTPC over a 1-gig network. I could easily go point-to-point on that, but I don't think I'll have an issue with network latency with the Dell 2724 I picked up.

Oh, and of course crucial "cannot lose" data: typical home server stuff, but more media streaming than average.
 

NfiniteZERO

New Member
Feb 2, 2011
9
0
0
Sorry for not posting in a while... been quite busy, including a trip to a Middle Eastern country for a few months.

Sadly, due to shifts in budget priority and the loss of my extra box, I've yet to get around to some real tinkering on this. Though I'm still researching this off and on, I'm thinking about going with another option for my setup (if and when that time comes).