VMware 7 to 8 upgrade with hardware refresh - options and process


hhp

New Member
Aug 3, 2016
I have an Essentials Plus VMware cluster of HP pizza boxes with a Synology appliance for iSCSI cluster storage. I have been tasked with replacing the E5-v4 HP boxes with newer high-spec Dell servers we are pulling out of our EPC core. My issue is that my employer is broke and cannot afford to replace the existing shared cluster storage. The Synology is fully populated, with all space dedicated to a single VMFS datastore used by the 3-host cluster.

I have been trying to think through how to keep the entire environment up while replacing the compute nodes with a newer generation (Xeon Gold). Initially the plan was that a new SAN would be purchased for the new cluster being built with the Dell Gold nodes, but company finances are dire and I need to keep the old RS3617 'SAN' appliance in place. I was expecting to just spin up the new cluster and migrate the VMs using the VMware Converter tool.

What is the best way to go about this upgrade (H/W refresh and upgrade to the latest 8.x release), given the requirements that all existing VMs must remain up at all times and I only have the one VMware v7 iSCSI cluster datastore serving the current live environment?

I am not sure how this will work out with the new vCLS mechanism (Xeon E5s replaced by the latest-gen Scalable while keeping things happy/running). Should I be installing and adding Gold nodes one by one, managing host workload and transferring licensing host by host until they are all replaced/swapped out? Is there a better way? Any info, guides, links, etc. greatly appreciated.
 

marcoi

Well-Known Member
Apr 6, 2013
Gotha, Florida
I'll preface by saying I'm not VMware-certified in anything; I just play with it in a home lab a lot. This is just a high-level idea off the top of my head.

Option 1 - local storage:
Purchase a few hard drives for the new Dell servers and set them up as local datastores; get enough drives to cover the VMs on each host. It will be cheaper than new SAN hardware and will only be used temporarily for the move.
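
If you want to script that part, here's a rough pyVmomi sketch of formatting a spare local disk as a VMFS datastore. Not gospel - the vCenter address, credentials, host name, and the "local-ds-01" label are all placeholders for your environment:

Code:
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder vCenter details; lab-style connection that skips cert checks.
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Locate the new Dell host by name (hypothetical name).
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esxi-new-01.example.com")
view.DestroyView()

dss = host.configManager.datastoreSystem
free_disks = dss.QueryAvailableDisksForVmfs()        # local disks with no VMFS on them
if free_disks:
    # Build a create spec from vCenter's own suggested options for the first disk.
    options = dss.QueryVmfsDatastoreCreateOptions(free_disks[0].devicePath)
    spec = options[0].spec
    spec.vmfs.volumeName = "local-ds-01"             # temporary datastore label
    ds = dss.CreateVmfsDatastore(spec)
    print("Created datastore:", ds.name)

Disconnect(si)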

Set up the new Dell servers under the v7 vCenter. Since you need uptime, you can vMotion the VMs from the old servers/storage to the new Dell servers/local storage. There is risk involved, so make sure your backups are working correctly and you have them ready just in case. (If you could take downtime, you could clone them and power off/on, etc.)
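
The moves themselves can be scripted too. Here's a rough pyVmomi sketch of a combined compute + storage vMotion using RelocateVM_Task (the VM, host, and datastore names are placeholders):

Code:
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

def find(vimtype, name):
    """Return the first inventory object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(o for o in view.view if o.name == name)
    finally:
        view.DestroyView()

vm = find(vim.VirtualMachine, "app-server-01")       # VM to move (placeholder)
dest_host = find(vim.HostSystem, "esxi-new-01.example.com")
dest_ds = find(vim.Datastore, "local-ds-01")

# Relocate compute AND storage in one live migration.
spec = vim.vm.RelocateSpec()
spec.host = dest_host
spec.pool = dest_host.parent.resourcePool            # target cluster's root resource pool
spec.datastore = dest_ds
WaitForTask(vm.RelocateVM_Task(spec))                # blocks until the vMotion completes

Disconnect(si)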

Once you get all the VMs moved to the new hardware, decommission the old servers, repurpose the iSCSI storage for the new servers, and Storage vMotion the VMs off the local datastores onto the iSCSI storage.

Upgrade vCenter to v8, then the hosts, then VM hardware compatibility and Tools, etc. Not sure if this will incur downtime, but you can vMotion the VMs off one server onto the others, upgrade that host, and repeat the process until all servers are up to version 8.
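
Draining each host for its upgrade is scriptable as well. Something like this rough pyVmomi sketch (host names are placeholders) vMotions everything off one box and then drops it into maintenance mode:

Code:
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
hosts = {h.name: h for h in view.view}
view.DestroyView()

old_host = hosts["esxi-old-01.example.com"]          # host being drained
target = hosts["esxi-new-01.example.com"]            # host receiving its VMs

# vMotion every powered-on VM off the old host (shared storage stays put).
for vm in list(old_host.vm):
    if vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
        spec = vim.vm.RelocateSpec(host=target, pool=target.parent.resourcePool)
        WaitForTask(vm.RelocateVM_Task(spec))

# With the host empty, maintenance mode should complete right away.
WaitForTask(old_host.EnterMaintenanceMode_Task(timeout=0))

Disconnect(si)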


Option 2 - if the iSCSI storage can be shared between old and new servers:
Change the process to vMotion only the VMs' compute from the old servers to the new servers, leaving the storage on the iSCSI datastore, then follow the rest of the steps above.
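
Scripting-wise this is the same RelocateVM_Task call as in the option 1 sketch above (same connection and find() setup), just with no datastore in the spec, so only the running state moves:

Code:
# Compute-only vMotion variant: reuses the connection/find() helpers from the
# option 1 sketch; only the destination host changes, the disks never move.
spec = vim.vm.RelocateSpec()
spec.host = dest_host                        # new Dell host (placeholder)
spec.pool = dest_host.parent.resourcePool
# spec.datastore intentionally left unset - VMDKs stay on the shared iSCSI datastore
WaitForTask(vm.RelocateVM_Task(spec))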

Anyway, hope this at least gives you some ideas on how to pull this off.

Good luck.
 

zachj

Active Member
Apr 17, 2019
Just upgrade vCenter to 8, install vSphere 8 on the new hosts, present the existing iSCSI storage to the new hosts, and configure the cluster with EVC mode set to Broadwell. Then exit maintenance mode on the new hosts.
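
If you'd rather do the EVC bit from a script than the UI, here's a rough pyVmomi sketch. The cluster name and the "intel-broadwell" mode key are assumptions; check evcState.supportedEVCMode for what your hosts actually report:

Code:
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "new-cluster")  # placeholder name
view.DestroyView()

evc = cluster.EvcManager()
print("Current EVC mode:", evc.evcState.currentEVCModeKey)
print("Supported modes:", [m.key for m in evc.evcState.supportedEVCMode])

# Pin the mixed cluster to the oldest CPU generation present (Broadwell here).
WaitForTask(evc.ConfigureEvcMode_Task("intel-broadwell"))

Disconnect(si)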


vCenter 8 supports hosts running ESXi 7, so you can have mixed host versions while you're conducting the server swap.

Alternatively, you can build a brand-new vSphere 8 cluster with a dedicated vCenter and then use cross-vCenter vMotion to migrate the VMs. That may require a license better than what you have, in which case I'd say you can just "move" the VMs during your next scheduled maintenance window. Sure, that requires downtime, but since the underlying storage isn't changing it would be mere minutes per VM: just power off each VM on the old cluster, register it on the new vCenter, and power it on again. You could even automate it. 99% of the downtime is simply however long it takes the Windows/Linux guests to shut down and boot back up.
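
Here's roughly what that automation could look like with pyVmomi. All names and the datacenter/cluster layout are placeholders, and it assumes the new cluster already sees the same iSCSI datastore:

Code:
import ssl
import time
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()
old_si = SmartConnect(host="vcenter7.example.com", user="administrator@vsphere.local",
                      pwd="changeme", sslContext=ctx)
new_si = SmartConnect(host="vcenter8.example.com", user="administrator@vsphere.local",
                      pwd="changeme", sslContext=ctx)

def find(si, vimtype, name):
    """Return the first inventory object of the given type with the given name."""
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(o for o in view.view if o.name == name)
    finally:
        view.DestroyView()

vm = find(old_si, vim.VirtualMachine, "app-server-01")
vmx_path = vm.config.files.vmPathName       # e.g. "[iscsi-ds] app-server-01/app-server-01.vmx"

# 1. Graceful shutdown on the old cluster (needs VMware Tools), then unregister.
vm.ShutdownGuest()
while vm.runtime.powerState != vim.VirtualMachinePowerState.poweredOff:
    time.sleep(2)                           # poll until the guest has shut down
vm.UnregisterVM()

# 2. Register the same .vmx on the new vCenter and power it on.
dc = find(new_si, vim.Datacenter, "Datacenter")          # placeholder datacenter name
cluster = find(new_si, vim.ClusterComputeResource, "new-cluster")
WaitForTask(dc.vmFolder.RegisterVM_Task(
    path=vmx_path, asTemplate=False, pool=cluster.resourcePool))
new_vm = find(new_si, vim.VirtualMachine, "app-server-01")
WaitForTask(new_vm.PowerOnVM_Task())
# Answer the "I moved it / I copied it" prompt if it appears (vm.runtime.question).

Disconnect(old_si)
Disconnect(new_si)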

Regardless of how you migrate, don't forget to update VMware Tools on every VM. Don't forget to set EVC mode on the new cluster to Skylake/Cascade Lake (depending on which Xeon SKUs your Dell servers have) after you're done migrating. And don't forget to upgrade every single VM to at least hardware version 9, which is required for all the guest-OS speculative-execution vulnerability mitigations to work correctly. This one is important, since merely upgrading the server BIOS, enabling mitigations in vSphere, and patching the guest OS isn't sufficient for a few of the Spectre/Meltdown issues; the guest can't see the new x86 instructions on the physical CPU (from the patched BIOS) unless the VM runs hardware version 9 or newer.
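
That cleanup pass is scriptable too. A rough pyVmomi sketch is below; note that the Tools upgrade wants the guest running while the hardware-version bump wants the VM powered off, so expect two passes (or scheduled reboots) in practice:

Code:
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host="vcenter8.example.com", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
for vm in view.view:
    if vm.config is None:                    # skip inaccessible/orphaned VMs
        continue
    on = vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOn

    # Tools upgrade needs a running guest with (outdated) Tools installed.
    if on and vm.guest.toolsStatus == "toolsOld":
        WaitForTask(vm.UpgradeTools_Task())

    # The hardware-version bump needs the VM powered off; vmx-09 is the floor
    # for the guest-visible speculative-execution mitigations mentioned above.
    hw = int(vm.config.version.split("-")[1])   # e.g. "vmx-08" -> 8
    if hw < 9 and not on:
        WaitForTask(vm.UpgradeVM_Task(version="vmx-09"))
view.DestroyView()

Disconnect(si)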
 