vSphere 8 - Storage vMotion operations per host (max 2 vMotions per host)


Rand__

Well-Known Member
Mar 6, 2014
Hi,

Has anyone found a way to override that stupid limitation that I can only move 2 VMs at the same time from one host to another?
Before they introduced this stupid limit I needed maybe 30 minutes for a full move; now it's more like 2 hours ...

(The VMs all run on an all-NVMe or SSD, 100G-backed TNC host with P4800X/P5800X SLOGs; I've seen these perform much better.)

Thankfully I don't do that too often anymore, but every time I do, it's so annoying :(

Thanks
 

TRACKER

Active Member
Jan 14, 2019
2 VMs at the same time from one host to another? lol... I am still using ESXi 7 and VC8 and I never had such limitations.
Do you have a link to documentation (or something) where that limitation is listed?
 

Rand__

Well-Known Member
Mar 6, 2014
2 VMs at the same time from one host to another? lol... I am still using ESXi 7 and VC8 and I never had such limitations.
Do you have a link to documentation (or something) where that limitation is listed?

[Attached screenshot: 1736786102300.png]
 

Rand__

Well-Known Member
Mar 6, 2014
vMotion tag set on the 1G interface and not on the 100G?
Hm, maybe the management interface also has vMotion enabled, but the primary, secondary, and tertiary vMotion NICs should all be on the 100G links; let me double-check.
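If it helps to double-check which VMkernel interfaces actually carry the vMotion tag without clicking through the UI, something along these lines should list them via pyVmomi. This is only a rough sketch; the vCenter/host names and credentials are placeholders, and certificate checks are skipped for lab use.
Code:
# List which VMkernel NICs on a host have the vMotion service enabled.
# Placeholder hostnames and credentials throughout.
import ssl

from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()          # lab only: skip cert validation
si = SmartConnect(host='vcenter.lab.local',
                  user='administrator@vsphere.local',
                  pwd='changeme', sslContext=ctx)
try:
    content = si.RetrieveContent()
    host = content.searchIndex.FindByDnsName(dnsName='esx01.lab.local',
                                             vmSearch=False)
    cfg = host.configManager.virtualNicManager.QueryNetConfig('vmotion')
    selected = set(cfg.selectedVnic or [])
    for vnic in cfg.candidateVnic or []:
        tagged = 'yes' if vnic.key in selected else 'no'
        print(f'{vnic.device}  {vnic.spec.ip.ipAddress}  vMotion={tagged}')
finally:
    Disconnect(si)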
 

nabsltd

Well-Known Member
Jan 26, 2022
Has anyone found a way to override that stupid limitation that I can only move 2 VMs at the same time from one host to another?
Code:
vpxd.ResourceManager.costPerSVmotionESX6x
Each host has 16 units available for vMotion (regular and storage). The default for vpxd.ResourceManager.costPerSVmotionESX6x is 8, so that gives you a limit of 2 concurrent Storage vMotions. Set vpxd.ResourceManager.costPerSVmotionESX6x to 4 and that should increase the limit to 4.
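To make the arithmetic concrete, here is a tiny sketch of the cost/budget model described above (the 16-unit host budget and cost of 8 are the defaults quoted in this post; the function is purely illustrative):
Code:
# Cost/budget model: how many Storage vMotions fit in a host's budget at once.
def concurrent_svmotions(max_cost_per_host, cost_per_svmotion):
    return max_cost_per_host // cost_per_svmotion

print(concurrent_svmotions(16, 8))   # defaults: 2 concurrent Storage vMotions
print(concurrent_svmotions(16, 4))   # cost lowered to 4: 4 concurrent
print(concurrent_svmotions(32, 8))   # or raise the host budget instead: also 4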

You could also increase the total number of resource units available per host, but that has the side effect of raising the limit for every operation type.

Each datastore also has a resource-unit pool and a cost per migration type, but the defaults there are high enough that you likely aren't limited by them. The NIC has the same sort of resource pool and cost, but it appears to apply only to vMotion, not Storage vMotion.

Documented as functional but not supported:

I believe that VMware's rationale for not adjusting host costs based on NIC speed is that a 100Gbit NIC is 100x as fast as a 1Gbit NIC, which means that any vMotion through the NIC should finish in 1% of the time. So, if you queue up 8x vMotions of the same "size" on a 1Gbit NIC, they will take 8 units of time to finish, regardless of whether you run 1, 2, or 8 at the same time. The same 8x vMotions on a 100Gbit NIC will finish in 0.08 units of time, again regardless of how many you run in parallel. By limiting it to 2 simultaneous operations, there's less chance of an operation timing out and less contention for the other resources involved (CPU, etc.).
 
  • Like
Reactions: Rand__ and TRACKER

Rand__

Well-Known Member
Mar 6, 2014
Thanks, should have asked ages ago.

Not that it worked, but maybe it needs some time before it becomes active, or I need to restart vpxd or something.

[Attached screenshot: 1736792541338.png]
 

nabsltd

Well-Known Member
Jan 26, 2022
Not that it worked, but maybe it needs some time before it becomes active, or I need to restart vpxd or something.
Changing most of the "advanced" values requires a restart of vpxd.

Again, though, I don't think it will buy you anything unless the Storage vMotion isn't coming close to saturating the network; and if it isn't, then something else is limiting the move.
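If it's unclear whether the new value is actually active after the vpxd restart, it can be read back through the API instead of the UI. A rough pyVmomi sketch, with the vCenter name and credentials as placeholders:
Code:
# Read back a vpxd advanced option to confirm the change took effect.
import ssl

from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()          # lab only: skip cert validation
si = SmartConnect(host='vcenter.lab.local',
                  user='administrator@vsphere.local',
                  pwd='changeme', sslContext=ctx)
try:
    option_manager = si.RetrieveContent().setting   # vCenter's OptionManager
    # May raise vim.fault.InvalidName if the option has never been set.
    for opt in option_manager.QueryOptions('vpxd.ResourceManager.costPerSVmotionESX6x'):
        print(opt.key, '=', opt.value)
finally:
    Disconnect(si)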
 

Rand__

Well-Known Member
Mar 6, 2014
It works for powered-down VMs but didn't work for running ones. Maybe there's another setting?
Still better than before, though :) Thanks
 

richardm

Member
Sep 27, 2013
Try raising vpxd.ResourceManager.maxCostPerEsx6xHost?

There's one group of settings that controls the "cost" per migration, and another group that controls the migration "budgets." One can either lower the per-migration cost or raise the total budget.
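Both knobs live in the same vCenter advanced-settings store (vpxd's OptionManager), so either can be scripted rather than set through the UI. A hedged pyVmomi sketch, with placeholder credentials; the integer value type is an assumption, and as noted above a vpxd restart is likely still needed for the change to take effect:
Code:
# Lower the per-operation cost or raise the per-host budget via vpxd advanced settings.
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab only: skip cert validation
si = SmartConnect(host='vcenter.lab.local',
                  user='administrator@vsphere.local',
                  pwd='changeme', sslContext=ctx)
try:
    option_manager = si.RetrieveContent().setting
    changes = [
        # Option A: halve the cost of each Storage vMotion (default 8 -> 4).
        vim.option.OptionValue(key='vpxd.ResourceManager.costPerSVmotionESX6x',
                               value=4),
        # Option B (alternative): raise the per-host budget instead (default 16).
        # vim.option.OptionValue(key='vpxd.ResourceManager.maxCostPerEsx6xHost',
        #                        value=32),
    ]
    option_manager.UpdateOptions(changedValue=changes)
finally:
    Disconnect(si)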