Hello, I have been reading up on Windows Server 2016 lately and I think I might need to move to it for what I need to set up.
here is my issue:
I have 2 servers running Windows Server 2012 R2
in a Hyper-V failover cluster configuration.
Hardware
Server 1 :
DELL PowerEdge R730xd Server
Dual Intel Xeon E5-2620 v3 2.1GHz
64GB RAM
PERC H730 Integrated Mini RAID Controller
2 x 200GB 2.5" SSDs in RAID-1 in the Flex Bay for the OS
4 x 2TB 3.5" hard drives in RAID-10
Broadcom 5720 QP 1Gb Network Daughter Card -- 4 x 1Gb ports
Intel Ethernet X540 DP 10GBASE-T Server Adapter -- 2 x 10Gb ports
Server 2: Identical to Server 1, except it uses the newer Intel Xeon E5-2620 v4 2.1GHz CPUs.
Both are set up with the Windows Server 2012 R2 Datacenter SKU,
and each server is a Hyper-V host in the failover cluster.
I am using StarWind vSAN for iSCSI shared storage to provide the CSV for the cluster without an external SAN; that is, the internal storage of each host (the 4TB RAID-10 volume) is served up via StarWind.
I am running my DC/AD/DHCP/DNS in VMs: 2 Server 2012 R2 VMs, one on each node,
with DNS replication and DHCP scope load-sharing configured.
It seems I am running out of space faster than anticipated, so I will need to expand the storage.
And as luck would have it, I will need to dump and recreate the RAID, because the controller does not support live expansion of an array, especially a RAID-10 setup. So I figure this is a good time to look into upgrading to Windows Server 2016, and I want to see what would be the best setup to plan for the future.
So my configuration is a 2-node Hyper-V failover cluster,
no external shared storage of any kind,
and no plans to add any extra hardware of any kind.
I do have an extra server running Server 2012 R2 with the Hyper-V role loaded, but
it is an old Dell PowerEdge R310 with 16GB RAM and an old X3430 @ 2.40GHz CPU.
It does have 3TB of storage on it (not sure of the config, a RAID of some sort),
but I don't count it, as its config differs greatly from the main servers.
so my questions are as follows:
#1. It seems Windows Server 2016 could eliminate the need for StarWind vSAN for me if I can use Storage Spaces Direct (S2D). Has anyone done this kind of setup, and is it actually feasible?
Based on my research, it seems I can run the Hyper-V role and what looks like a SoFS on each host if I put them in a cluster (the hyper-converged deployment). Can anyone confirm this? The EULA suggests it is possible.
#2. If #1 is confirmed, what would be the proper setup for doing this?
If you can point me to a good how-to for setting up a 2-node Hyper-V cluster on Windows Server 2016 with S2D, please do.
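From my reading so far, the core of the 2-node S2D setup would be something like the below. This is untested, just my notes from the docs; the node names, addresses, witness share, and volume size are placeholders for my environment, and from what I read S2D wants the disks in pass-through/HBA mode rather than behind the PERC RAID, so I would have to reconfigure the controller anyway:

```powershell
# Untested sketch -- names/addresses below are placeholders, not my real environment.
# Validate the hardware first (S2D has strict requirements):
Test-Cluster -Node "HV1","HV2" `
    -Include "Storage Spaces Direct","Inventory","Network","System Configuration"

# Create the cluster with no shared storage:
New-Cluster -Name "HVCLUSTER" -Node "HV1","HV2" -NoStorage -StaticAddress 192.168.1.50

# A 2-node cluster needs a witness for quorum (file share or cloud):
Set-ClusterQuorum -FileShareWitness "\\witness\share"

# Enable S2D -- this claims the eligible local disks on both nodes into one pool:
Enable-ClusterStorageSpacesDirect

# Carve out a mirrored CSV volume for the VMs:
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "VMs" `
    -FileSystem CSVFS_ReFS -Size 3TB
```

If someone who has actually run this on a 2-node cluster can confirm or correct the flow, that would be great.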
#3. If #1 and #2 are a go, what would be the best way to convert my current setup to the new one without disrupting users? I barely have time to shut anything down here. I did the current setup and config live, one server at a time: I temporarily loaded the old R310 with the same license as the other two,
moved all the VMs to it while I built out the new servers, then added them to the cluster and migrated the VMs back to the new servers, one at a time.
The problem with that approach was that the old server has only 2 x 1Gb ports,
so the migration took a very long time, and there were other issues.
I dealt with them OK, but I don't want to do that again. Also, as the hardware is so different, it simply may not be possible this time anyhow.
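For reference, what I did last time was essentially shared-nothing live migration to the temporary host, roughly like this (a rough sketch from memory; the server names and paths are examples, not my real ones):

```powershell
# Rough sketch of last time's approach -- names/paths are examples only.
# Enable live migration on the hosts and use compression to help the slow 1Gb links:
Enable-VMMigration
Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos `
           -VirtualMachineMigrationPerformanceOption Compression

# Move each VM (config + VHDs) off the node being rebuilt, over the wire:
Get-VM -ComputerName "HV1" | ForEach-Object {
    Move-VM -Name $_.Name -ComputerName "HV1" `
            -DestinationHost "R310-TEMP" `
            -IncludeStorage -DestinationStoragePath "D:\VMs"
}
```

If 2016 offers a cleaner path than repeating this shuffle, that is exactly what I am hoping to hear about.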
thanks Vl.