Help needed for Setting up Windows Server 2016 Hyper-V cluster

vl1969

Active Member
Feb 5, 2014
611
69
28
Hello, I have been reading up on Windows Server 2016 lately, and I think I might need to move to it for what I need to set up.

Here is my issue:
I have 2 servers running Windows Server 2012 R2
in a Hyper-V failover cluster configuration.

Hardware
Server 1 :
DELL PowerEdge R730xd Server
Dual Intel Xeon E5-2620 v3 2.1GHz
64GB RAM
PERC H730 Integrated Mini RAID Controller
2 × 200GB 2.5" SSDs in RAID-1 in the Flex Bay for the OS
4 × 2TB 3.5" hard drives in RAID-10
Broadcom 5720 QP 1Gb network daughter card (4 × 1Gb ports)
Intel Ethernet X540 DP 10GBASE-T server adapter (2 × 10Gb ports)

Server 2: Identical to Server 1, except it uses the newer Intel Xeon E5-2620 v4 2.1GHz.

Both are set up with the Windows Server 2012 R2 Datacenter SKU,
and each server is a Hyper-V host in a failover cluster.
I am using StarWind vSAN for iSCSI shared storage to provide CSVs for the cluster without an external SAN; that is, I am using the internal storage of each host (the 4TB RAID-10 volume) via StarWind.

I am running my DC/AD/DHCP/DNS in VMs: 2 VMs running Server 2012 R2, one on each node,
with DNS and DHCP scope replication/load-sharing configured.

It seems I am running out of space faster than anticipated, so I will need to expand the storage.
As luck would have it, I will need to dump and recreate the RAID, because the controller does not support live RAID expansion, especially for a RAID-10 setup. I figure I might want to look into upgrading to Windows Server 2016, so I want to see what the best setup would be to plan for the future.

So my configuration is a 2-node Hyper-V failover cluster.
No external shared storage of any kind.
No plans to add any extra hardware of any kind.
I have an extra server running Server 2012 R2 that also has Hyper-V loaded, but
it is an old Dell PowerEdge R310 with 16GB RAM and an old X3430 @ 2.40GHz CPU.
It does have 3TB of storage on it (not sure what config, but a RAID of some sort).
I do not really count it, though, as its config differs greatly from the main servers.

so my questions are as follows:

#1 It seems that Windows Server 2016 can eliminate my need for StarWind vSAN if I can use Storage Spaces Direct replication. Has anyone done this kind of setup, and is it actually feasible?

Based on my research, it seems I can run the Hyper-V role and what looks like a SoFS on each host if I put them in a cluster. Can anyone confirm this? The EULA suggests it is possible.


#2. If #1 is confirmed, what would be the proper setup for doing this?
If you can point me to a good how-to for setting up a 2-node Hyper-V cluster on Windows Server 2016 with S2D, please do.
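For anyone searching later: the core of a 2-node S2D build on 2016 (once the hosts are domain-joined and the roles are installed) comes down to a handful of PowerShell commands. This is only a sketch; the host names, cluster name, IP, witness share, and volume size below are placeholders, not a validated procedure:

```powershell
# Assumes two domain-joined WS2016 Datacenter hosts, HV1 and HV2, with the
# Hyper-V, Failover-Clustering and File-Services roles already installed.
Test-Cluster -Node HV1, HV2 -Include "Storage Spaces Direct", Inventory, Network, "System Configuration"

# Create the cluster without claiming storage; S2D will pool the disks itself.
New-Cluster -Name HVCLUSTER -Node HV1, HV2 -NoStorage -StaticAddress 192.168.1.50

# A 2-node cluster needs a witness (file share or cloud) to maintain quorum.
Set-ClusterQuorum -Cluster HVCLUSTER -FileShareWitness \\WITNESS\Quorum

# Enable S2D; it claims all eligible (raw, non-boot) local disks on both nodes.
Enable-ClusterStorageSpacesDirect -CimSession HVCLUSTER

# Carve a mirrored CSV volume for the VMs out of the pool.
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "VMStore" `
    -FileSystem CSVFS_ReFS -Size 1.5TB
```

With only two nodes, S2D uses two-way mirroring for the volume, which is why the usable capacity is roughly half the pool.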

#3. If #1 and #2 are a go, what would be the best way to convert my current setup to the new one without disrupting the users? I barely have time to shut things down here. I did the current setup and config live, one server at a time: I temporarily loaded the old R310 with the same license as the other two,
moved all the VMs to it while I built out the new servers, then added it to the cluster and migrated the VMs to the new servers one at a time.
The problem with that is that the old server has only two 1Gb ports,
so the migration took a very long time, and there were other issues.
I dealt with them OK, but I don't want to do this again. Also, since the hardware is so different, it simply may not be possible anyway.
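One thing that helps when staging VMs across mismatched hosts like this: Hyper-V's processor compatibility mode hides newer CPU features from the guest, so a VM can live-migrate between different Intel generations (e.g. the E5-2620 v3/v4 boxes and the much older X3430). A sketch, with a hypothetical VM name; the setting can only be changed while the VM is off:

```powershell
# DC01 is a placeholder VM name. CompatibilityForMigrationEnabled masks
# CPU features down to a common baseline so live migration between
# different Intel CPU generations is allowed.
Stop-VM -Name DC01
Set-VMProcessor -VMName DC01 -CompatibilityForMigrationEnabled $true
Start-VM -Name DC01
```

It costs a little guest CPU feature exposure, so turn it back off once all hosts are on matching hardware.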

thanks Vl.
 

Connorise

Member
Mar 2, 2017
62
11
8
30
US. Cambridge
#1 You don't want to do that, and I mean literally. S2D on two hosts is barely serviceable. The minimum reasonable number of servers for S2D is 4; with 4 nodes you get the benefit of erasure coding and better storage efficiency.
If you would still consider 2-node S2D, bear in mind that it isn't resilient to any second failure: you patch one server (say, during your weekly patching), a drive in the surviving node fails, and you're done. You are risking losing all the data.
 

vl1969

Active Member
Feb 5, 2014
611
69
28
OK, thanks, but how does that differ from what I have today?

I have a 2-node setup now (read the original post :) ), and there is no possibility of getting more nodes in the near future.
If one host is down and the second host fails, I am down.
With StarWind (at least if I want to keep my current license and setup) I am limited to 2 nodes. I know they dropped that limitation with the new licensing scheme, but I do not want to go CLI-only management.

Now, if there is a way to set up S2D using 3 nodes where one is not equal in hardware to the other two, I have a third server, the R310, that has some space on it. Do you think I can set up a SoFS using the two R730xds and one R310,
and then build a failover cluster using only the two R730s?
I am kind of a noob at this.
I know the theory; I have managed to get the 2-node Hyper-V cluster going with Server 2012 R2 and StarWind vSAN,
and it even survived one node failure recently. One node went offline and we didn't even notice :).
I can reboot a node, no problem.
So it works and keeps running.
All I am trying to change is my dependence on StarWind vSAN; it is a bit confusing to set up and manage.
I thought that having things natively would be more manageable.
 

Net-Runner

Member
Feb 25, 2016
83
24
8
38
I do not want to go CLI only management.
Then you definitely should not go with Storage Spaces Direct, since PowerShell is the only way to manage it more or less correctly, starting from the very beginning: creating the S2D pool. I would stay with StarWind for the 2-node scenario, since S2D isn't really great in small deployments yet.
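To give a concrete sense of what "PowerShell-only" means day to day, these are the usual S2D health-check commands (the cmdlet names are real; the output obviously depends on the cluster):

```powershell
# Run from any cluster node, or remotely with -CimSession <clustername>.
Get-StoragePool -IsPrimordial $false           # the S2D pool and its HealthStatus
Get-VirtualDisk                                # per-volume resiliency and health
Get-PhysicalDisk | Sort-Object Usage           # spot disks that fell out of "Auto-Select"
Get-StorageJob                                 # rebuild/repair jobs currently running
Get-StorageSubSystem Cluster* | Debug-StorageSubSystem   # surfaces active faults
```

There is no GUI equivalent for most of this on 2016, which is the point being made above.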
 

AFisher

Member
Jun 2, 2017
45
11
8
Why not use that R310 as a storage server, taking the storage load off the Hyper-V hosts? To the best of my knowledge, there is little dependency between Hyper-V and identical hardware. I would not mix AMD with Intel, but the HAL handles most of the differences unless you have some kind of pass-through hardware.

Our setup is Hyper-V hosts for VMs only, plus a commodity-hardware FreeNAS box (the R310 could handle this, depending on drive count/size).
 

vl1969

Active Member
Feb 5, 2014
611
69
28
Why not use that R310 as a storage server, taking the storage load off the Hyper-V hosts? To the best of my knowledge, there is little dependency between Hyper-V and identical hardware. I would not mix AMD with Intel, but the HAL handles most of the differences unless you have some kind of pass-through hardware.

Our setup is Hyper-V hosts for VMs only, plus a commodity-hardware FreeNAS box (the R310 could handle this, depending on drive count/size).
Well, there are several reasons not to use the R310 for storage;
a major one is that it only has two 1Gb NICs, so the bandwidth is limited.
 

PnoT

Active Member
Mar 1, 2015
622
146
43
Texas
I would just stick with 2012 R2 and the setup you have until you can dedicate some time and money to properly set up a Hyper-V cluster with best practices in mind.
 

vl1969

Active Member
Feb 5, 2014
611
69
28
I would just stick with 2012 R2 and the setup you have until you can dedicate some time and money to properly set up a Hyper-V cluster with best practices in mind.
Please define "a Hyper-V cluster with best practices in mind"?

As for dedicating some time and money, that would mean waiting until the current setup goes up in flames with no recovery possible.
As I said in the OP, this is a small company when it comes to IT.
The upper brass will do the things that need to be done, but they need a huge push to move in the proper direction;
nothing short of a tsunami will make it happen.
I got money for the current setup because we had 2 major servers go belly-up one after the other,
so I got to replace one of them, and when the second one went I pushed for a cluster-ready setup.
But 2 nodes was the best I could do.
 

Jeff Robertson

Active Member
Oct 18, 2016
423
113
43
Chico, CA
Hi folks, I am going to have to respectfully disagree about the use of S2D. It is fully serviceable (and supported) in a 2-node cluster and works great in that scenario. I use both StarWind vSAN and S2D, and would recommend S2D every time, provided the hardware supports it. In a 2-node cluster you will always have more risk than with 3+ nodes, of course. You can partially mitigate that by sizing volumes for N-1 (N = number of drives), thus leaving a full drive's worth of space available in case one drive fails. It will (with some prodding) rebalance using the free space, and you will be fine as long as a second drive doesn't fail during the rebalance.

S2D may or may not support the PERC H730 controller. It requires the HBA to be put in pass-through mode so that S2D can directly manage the disks, and the H730 may not support that. You can find LSI 9300 cards on eBay cheap that would work great, however (as well as 40Gb ConnectX-3 cards, which are awesome for an S2D back end). If you do decide on S2D, there are lots of tutorials out there, and it's really not as hard as it looks. I would be happy to send the outline that I use; it helps make sure you don't miss anything major.
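On the controller question above: a quick way to check whether the H730 is exposing disks in a form S2D can claim is to look at how Windows sees them. A small sketch; run it on each host:

```powershell
# S2D can only claim disks with no partitions and no RAID metadata, i.e.
# the controller must present them as plain HBA / pass-through devices.
Get-PhysicalDisk |
    Select-Object FriendlyName, MediaType, BusType, CanPool, CannotPoolReason
# BusType should show SAS or SATA (not "RAID"), and CanPool should be True
# for every disk you expect S2D to use; CannotPoolReason tells you why not.
```

If the disks report BusType "RAID", the H730 is still virtualizing them, and you would need HBA/pass-through mode (or a different controller, like the LSI 9300 mentioned above) before S2D will take them.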

Good luck!
 

Evan

Well-Known Member
Jan 6, 2016
3,258
563
113
S2D's biggest issue is the need for the Datacenter SKU; I feel it would get a lot more love if not for that.