[Win2022] STOP in vmswitch.sys adding VMNetworkAdapter to host


DavidRa

Infrastructure Architect
Aug 3, 2015
329
151
43
Central Coast of NSW
www.pdconsec.net
Playing with Server 2022 in the lab here, and I'm trying to set up a Hyper-V cluster on my DL380 G8. The first node went fine.

The other three nodes all have the same problem - OS installed, firmware, drivers and HP niceties added fine. When I create the VMSwitch there's no error (see screenshot). When I go to create the VMNetworkAdapter, instant bluescreen in vmswitch.sys:

[Screenshot attachments: VMSwitch creation output and the vmswitch.sys bluescreen]

Anyone have any suggestions? The VMSwitch survives the crash, but the behaviour is the same afterwards (instant STOP error whenever I add a new VMNetworkAdapter).

The computer has rebooted from a bugcheck. The bugcheck was: 0x0000003b (0x00000000c0000005, 0xfffff8001b041623, 0xfffffb8d313f9790, 0x0000000000000000). A dump was saved in: C:\Windows\MEMORY.DMP. Report Id: 37077a0d-262e-497d-9ad3-ded2e241edfb.
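For reference, the sequence that triggers it is basically the two commands below - the switch and adapter names here are just placeholders, not my exact ones:

Code:
# Creating the switch works fine (switch and NIC names are examples)
New-VMSwitch -Name "vSwitch-10G" -NetAdapterName "PCIe Slot 2 Port 1" -AllowManagementOS $false

# Adding a host vNIC to that switch is what triggers the instant STOP in vmswitch.sys
Add-VMNetworkAdapter -ManagementOS -SwitchName "vSwitch-10G" -Name "Management"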
 

Terry Wallace

PsyOps SysOp
Aug 13, 2018
197
118
43
Central Time Zone
If the first node went fine, the first thing I'd do is run Intelligent Provisioning (the utility you get from BIOS boot) and choose Update Firmware.
Not specifically to update anything, but to compare versions between the machines - both BIOS and especially the NIC firmware.
See if your first node is on an older or newer version compared to the three with issues.
Also check whether the NICs are even the same. You didn't specify if you're running the 380e or 380p version, so you may have LOM modules or just cards, and HP ships NICs with Intel/Broadcom/Mellanox chips depending on the model.
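Something like this from PowerShell on each node will give you the NIC models and driver versions to compare side by side - it's only a rough check (it won't show the actual card firmware, that's what Intelligent Provisioning is for), and the exact properties shown can vary a bit between driver stacks:

Code:
# List physical NICs with model, driver info and link speed - run on every node and diff the output
Get-NetAdapter -Physical | Sort-Object InterfaceDescription |
    Format-Table Name, InterfaceDescription, DriverInformation, LinkSpeed -AutoSize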

Also check whether each server's iLO management port is set to shared or dedicated, etc.

Also, as an aside, HP isn't qualifying anything less than Gen10 to run 2022. (That obviously doesn't mean it won't run, but it does mean there won't be any server-specific fixes coming from HP for things like the RMRR regions issue the Gen8s had.)

P.S. I'm running a Proxmox cluster of DL380p's and e's (14 nodes) and have seen a lot of weird stuff setting that up :) But I haven't looked at 2022 - too new for me yet LOL

 

DavidRa

Infrastructure Architect
Aug 3, 2015
329
151
43
Central Coast of NSW
www.pdconsec.net
Yeah, I realise the Gen8 isn't "qualified", but I'm not spending $20K-$30K(!) to get Gen10 nodes for my home lab. They're DL380p and identical specs apart from RAM distribution - some are 24 x 8GB, others 8 x 16GB + 8 x 8GB. Really the only difference between the p and e versions is that the p was pre-populated with all the fans, if memory serves, so nothing significant there.

The nodes seem to have the same firmware versions across the board, at least according to the Intelligent Provisioning update (that's current, 1.74.2, and yet wants to downgrade the NIC firmware - just a few minor versions, nothing significant).

I have, however, discovered that the issue only occurs with the vSwitch attached to the 530SFP+ - I created a separate vSwitch on the 1Gb LOMs on one of the problem nodes and it worked immediately, so it looks like the problem is isolated to the 10Gb Broadcoms. I hate Broadcom NICs (the 10Gb Broadcom NICs in the Dell M630 blades from 2011 rewrote every VM's MAC address to the physical MAC, then failed to release the guest IP to host MAC mappings during live migration - so LM broke every VM). iLO is dedicated because ... well, 1Gb ports are plentiful in any sane lab.
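For anyone following along, the isolation test was essentially this - adapter and switch names are just examples, not my exact ones:

Code:
# Same sequence as before, but bound to one of the 1Gb LOM ports - this one works
New-VMSwitch -Name "vSwitch-LOM" -NetAdapterName "Embedded LOM 1 Port 1" -AllowManagementOS $false
Add-VMNetworkAdapter -ManagementOS -SwitchName "vSwitch-LOM" -Name "Test"

# The identical two commands against the switch on the 530SFP+ ports bugcheck in vmswitch.sys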

Oh right - as for "too new" - now is the time to learn these things when you're in the consulting space. You can't learn it after customers are asking why it's not built yet.
 

Terry Wallace

PsyOps SysOp
Aug 13, 2018
197
118
43
Central Time Zone
Sorry to hear that. I actually swapped all my Broadcom LOMs out for Mellanox LOMs because I had nothing but issues with them.
The big difference between the e & p series is that the 380e runs E5-2400 CPUs while the 380p runs E5-2600s - socket 1356 vs 2011 (if I recall my pin counts correctly).
 

amichel

New Member
Oct 11, 2021
1
0
1
This looks like a bug specific to Intel NICs on Windows Server 2022. I have two different servers, and in both I have Intel dual-port NICs of different models.
On both systems, if I create a SET switch it ends up with a BSOD.
Using Broadcom or even Realtek NICs it works like a charm.
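Roughly what I'm running to create the SET switch is below - the adapter names are just examples standing in for my Intel ports:

Code:
# Creating a SET (Switch Embedded Teaming) switch across the two Intel ports - this is what BSODs
New-VMSwitch -Name "SETswitch" -NetAdapterName "Intel Port 1","Intel Port 2" `
    -EnableEmbeddedTeaming $true -AllowManagementOS $true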