Hyper-V 2012 and Storage Spaces


Ricoks

Member
Mar 12, 2012
Hey all, I've decided to go with Server 2012 Standard with Hyper-V instead of ESXi. I want to play with RemoteFX, and I don't believe I can do that with ESXi as the hypervisor (correct me if I'm wrong, though).
So I installed Server 2012 with the Hyper-V role and no other roles besides file management, and I will install everything else as VMs.
I have a question re: Storage.
I currently have 4x 3TB drives, 4x 1.5TB drives, and 2x 1TB drives (the 4x 3TB in parity for bulk storage, and the 4x 1.5TB mirrored for critical data: personal files, photos, etc.).
I have Server 2012 standard installed to a single 120GB SSD at the moment.

What would be the best config for passing storage to the VMs (Server 2012 Essentials, a couple of Win7 and Win8 guests, etc.)?
I was thinking of using Storage Spaces on the Server 2012 Hyper-V host to create the pools and virtual disks and then pass them to each VM as needed, OR passing the individual disks through to each VM and letting each VM use them as needed, create its own storage pool, etc.
Which method do you think is best practice? I haven't found anything online discussing best practice for using Server 2012 as an all-in-one server. Is anyone else here using it instead of ESXi with ZFS? (I never could get ZFS to work right for me, but I'm a newbie with it.)

Thoughts? Ideas?

Thanks
Ricoks
 

Patrick

Administrator
Staff member
Dec 21, 2010
At an Intel press event: RemoteFX Touch with Windows 8, see here.

I want to play with that a bit :)
 

Radian

Member
Mar 1, 2011
Strange guest networking behavior in Hyper-V 2012: I'm trying to get my Win8, Server 2012 Essentials, and Storage Server 2012 guests to pick up static IPs from the DHCP server. Each VM fails to obtain the IP reservation I've tied to its static MAC address in the DHCP scope.

If I change the VM's network adapter to a dynamic MAC, DHCP works. I've added a second NIC to the VM and given it the same static MAC, with the same results.

Is there some sort of MAC cache in the Hyper-V virtual networks blocking the VM from obtaining a new static IP from DHCP? Or is it something with these newer guest versions?

Driving me nuts!
 

Triggerhappy

Member
Nov 18, 2012
Ricoks said: "pass the individual disks to each VM, and allow the VM to use them as needed, create their own storage pool, etc."
I thought the same way. All I can say is: DO NOT DO THIS.

It works fine the first time. You pass each disk through to the VM, go into the VM, set up the storage space, and boom, you're good. Everything works peachy.

Then you reboot your host for whatever reason and nothing works. Why? Because, as far as I can tell, creating the storage space makes the individual drives invisible to the host. The host therefore can't pass them back to the VM after a reboot, and you're left with a storage pool you can't access (you can see it in the host OS as a pool, but it's not accessible).

In the end I created the pools on the host OS, created the virtual disks on the host OS, and then passed those virtual disks to the guest VMs.
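For anyone following along, the host-side approach can be sketched in PowerShell roughly like this (pool, disk, VM, and path names are all placeholder examples, not from the thread):

```powershell
# Run on the Hyper-V host. Pool every disk that is eligible for pooling.
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "HostPool" `
    -StorageSubSystemFriendlyName "Storage Spaces*" `
    -PhysicalDisks $disks

# Carve a mirrored virtual disk out of the pool, then bring it online.
New-VirtualDisk -StoragePoolFriendlyName "HostPool" -FriendlyName "VMStore" `
    -ResiliencySettingName Mirror -UseMaximumSize
Get-VirtualDisk -FriendlyName "VMStore" | Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -DriveLetter V -UseMaximumSize |
    Format-Volume -FileSystem NTFS -Confirm:$false

# Give the guest a VHDX that lives on the space, not a pass-through disk.
New-VHD -Path "V:\Guests\data.vhdx" -SizeBytes 500GB -Dynamic
Add-VMHardDiskDrive -VMName "Win8-Guest" -Path "V:\Guests\data.vhdx"
```

The host keeps full visibility of the pool across reboots, and the guest only ever sees an ordinary VHDX.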
 

MattCosa

New Member
Aug 2, 2013
Hi there,

Storage Spaces IS NOT supported inside guests.

Additionally, I wouldn't recommend parity spaces; they're just a waste of time. Stick with a mirror space and play around with the number of columns. If you're absolutely keen, the old rules apply: use parity for sequential or archive data only.

Also, there should be no need to pass your storage space into a VM like that; it provides no advantage and cripples any flexibility for the host. Stick with VHDX files for your guests.

With 2012 R2 you can use tiered storage, so have a think about adding an SSD or two :) 2012 R2 will automatically scan for "hot areas" and promote that data to your fast tier, or you can pin files there manually. Hint: boot from a USB stick and use that SSD for the tier.
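On 2012 R2, the tiered setup described above can be sketched roughly as follows (pool, tier, and file names are hypothetical, and the pool is assumed to already contain both SSDs and HDDs):

```powershell
# Define one storage tier per media type in an existing pool.
$ssdTier = New-StorageTier -StoragePoolFriendlyName "HomePool" `
    -FriendlyName "SSDTier" -MediaType SSD
$hddTier = New-StorageTier -StoragePoolFriendlyName "HomePool" `
    -FriendlyName "HDDTier" -MediaType HDD

# Create a mirrored virtual disk spanning both tiers (sizes are examples).
New-VirtualDisk -StoragePoolFriendlyName "HomePool" -FriendlyName "TieredVD" `
    -ResiliencySettingName Mirror `
    -StorageTiers $ssdTier, $hddTier -StorageTierSizes 100GB, 900GB

# Pin a hot file to the SSD tier instead of waiting for the optimizer.
Set-FileStorageTier -FilePath "D:\VMs\hot.vhdx" `
    -DesiredStorageTierFriendlyName "SSDTier"
```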

Good luck!
 

Triggerhappy

Member
Nov 18, 2012
Which part isn't supported: the guest creating its own storage space on pass-through hard drives, or the host passing a virtual drive based on a storage space through to the guest?
 

Darkytoo

Member
Jan 2, 2014
I tried Storage Spaces on Server 2012 (not R2), and this may have changed since, but here is the nightmare that is Storage Spaces under Server 2012:
1. I was storing data on a parity space across three drives, and the speed penalty was crazy. I moved the data to a fakeRAID volume and it was at least twice as fast.
2. Under 2012, once you create the storage space, you cannot make any changes to it. I wanted to add a drive to make the storage space larger... nope, not going to happen. Sure, you can add the drive, but it was never used.
3. I've had bad experiences mapping disks directly to VMs: they disappear after reboots, and they also break migrations to other servers. So I always created VHDs and then mapped them, but the issues from #1 plus the penalty for using VHDs made the VMs crawl.

The storage tiering seems nice, but I got the feeling that Storage Spaces was consumer-level tech pushed into servers before it was ready. I found an 8-port RAID controller with 256MB of cache for $70 and never looked back.
 

dba

Moderator
Feb 20, 2012
San Francisco Bay Area, California, USA
Darkytoo said: "...here is the nightmare that is storage spaces under server 2012..." [snip]
You can expand a storage space, but not always by adding a single disk. Your three-column parity space would have required you to add three new drives, for example. There are valid technical reasons for this, but I agree that they are annoying and a limitation that other implementations do not have. As for parity spaces, they are not worth the trouble. Just stick with mirroring.
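On 2012 R2 that expansion looks something like this (pool and space names are made up for illustration; the key point is adding disks in multiples of the column count):

```powershell
# A 3-column parity space has to grow by three disks at a time.
$newDisks = Get-PhysicalDisk -CanPool $true | Select-Object -First 3
Add-PhysicalDisk -StoragePoolFriendlyName "HomePool" -PhysicalDisks $newDisks

# 2012 R2 can then extend the virtual disk into the new capacity.
Resize-VirtualDisk -FriendlyName "ParitySpace" -Size 12TB
```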

Personally I am a fan of software RAID in general, especially after experiencing Oracle ASM for databases, and I have seen excellent performance from mirrored storage spaces. I might not be all that excited about Storage Spaces except for the fact that using Windows 2012 lets me have SMB3, which is phenomenal, and for 2012 you're supposed to use Storage Spaces.

Which brings it back to one of the original questions - how to use spaces for VMs. Personally, I either place VHD/VHDX files onto local Storage Space volumes for simplicity, or, for clustering, use a separate storage server with SMB3 over 10GbE or IPoIB to the VM hosts. The former is trivially easy and very flexible while the latter gives you SAN-like features and performance without the cost and complexity.
 

cesmith9999

Well-Known Member
Mar 26, 2013
Actually, you can expand a storage pool by any number of disks (up to the 160-disk max per pool). The virtual disk in the pool will not gain any benefit, though, unless you create a new virtual disk with a new column count and/or interleave and migrate your data to it.

If you created a thick (fixed) virtual disk, you will not see any improvement, since the pre-allocated slabs are on the old disks. If you created a thin virtual disk, any new data will be written to new slabs, and the new disks will tend to get used first since they have more free space and performance to offer.

With R2, adding SSDs (even only four) to the storage pool brings large performance gains. You gain the ability to specify a write-back cache for all virtual disks; for mirror spaces you gain tiering, and for parity spaces you gain journaling, which speeds up writes to the disks.
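As a rough R2 sketch of the write-back cache Chris mentions (pool name and sizes are placeholder examples):

```powershell
# Request a 1 GB SSD-backed write-back cache when creating the space.
# The pool must already contain SSDs for the cache to land on.
New-VirtualDisk -StoragePoolFriendlyName "HomePool" -FriendlyName "ParityWBC" `
    -ResiliencySettingName Parity -Size 8TB -ProvisioningType Thin `
    -WriteCacheSize 1GB
```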

Chris
 

Patrick

Administrator
Staff member
Dec 21, 2010
Thanks for this. I am at the point of figuring out how to configure the colo Hyper-V cluster.
 

PigLover

Moderator
Jan 26, 2011
BTW, Storage Spaces works great on Hyper-V Server 2012 R2. You just have to remember to activate the File Services role in order to build shares. Simple and mirrored drives build easily using Server Manager from a remote host, just like they do locally. More complicated things, like setting up the journal or write cache for parity spaces, still require PowerShell. I just ran the PowerShell commands from the HV Server console rather than figure out remoting; PowerShell is supposed to work remotely, but I didn't bother trying.
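For reference, the two pieces described above look roughly like this (the host name is a placeholder):

```powershell
# On the Hyper-V Server console: add File Services so shares can be built.
Add-WindowsFeature FS-FileServer

# Alternatively, manage the box remotely instead of from its console.
Enter-PSSession -ComputerName "HV-HOST01" -Credential (Get-Credential)
```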
 

Ricoks

Member
Mar 12, 2012
MattCosa said: "With 2012R2 you can use tiered storage... Hint - use boot from a USB stick and use that SSD for the tier."
Has anyone done this with Hyper-V? From what I've read online, it appears this is quite a hassle to get up and running. It was 'dumb' easy with ESXi, but it seems quite cumbersome and tedious just to get the thing onto the drive. If it will work, I'd love to use the SSD within the pool for tiering and/or journaling, etc.; anything to help the read/write speeds on my parity space. (It's only movies, so not critical data, but I'd like to best utilize the space I have.)

Thanks
Ricoks
 

Ricoks

Member
Mar 12, 2012
When you say you have, are you referring to booting from a USB stick, or to using the SSD for tiering and caching? (Or both?)
I have read PigLover's post; most of it is over my head. I'd only be using one SSD, as this is only for home use. I'm more interested in getting Hyper-V to run off a USB stick, though.

Ricoks
 

Chuntzu

Active Member
Jun 30, 2013
I followed along with PigLover's instructions and am rocking a dual-parity setup with 6 SSDs in a cached/tiered pool, and it is very fast in R2. I also have 3 other mirrored pools with 8 SSDs each, and all are working great. Just a heads up: if you do set this up, use NTFS; ReFS has given me poor numbers in my all-SSD pools. The PowerShell is not all that difficult; just copy and paste what others have done and adjust it to fit your situation.
 

PigLover

Moderator
Jan 26, 2011
Glad to read that at least a couple of people found that useful. Felt sorta lame geeking out for a whole weekend on it... but if it helped somebody else, then I guess it was worth it.
 

MiniKnight

Well-Known Member
Mar 30, 2012
NYC
Glad to read that at least a couple of people found that useful. Felt sorta lame geeking a whole weekend on it...but if it helped somebody else than I guess worth it.
You are being too modest. That thread has over 9000 views.
 

Ricoks

Member
Mar 12, 2012
PigLover,
do you think it's worth the effort to set up parity journaling/caching with only one 120GB SSD, or is it only going to be worth it with multiple SSDs?
 

PigLover

Moderator
Jan 26, 2011
3,186
1,546
113
With Storage Spaces you can't set up SSD journaling of a parity volume using only one SSD. The provisioning rules require the journal to maintain the same failure resiliency as the whole volume. Since a parity volume (RAID 5) is built to survive the loss of a single drive, the journal must also be able to survive the failure of one drive; therefore the journal must be a multiple of 2 drives (2, 4, 6, 8, etc.). The journal effectively operates as a striped volume.

Likewise, with a dual-parity volume (RAID 6) the journal must be able to survive the loss of any two drives. This means the journal must contain a multiple of 3 drives (3, 6, 9, etc.).

I suppose it would have been nice for MS to allow exceptions to this for people willing to work with the loss of resiliency, but their real target market is large data center applications built on JBOD deployments. In that environment the "risk tolerant" use case isn't all that interesting.