VMUG Advantage - Nested lab on a single ESXi host: architecture for NSX and vSAN


nickwalt

Member
Oct 4, 2023
Brisbane
I have an EPYC 7452 (32 cores) on a Supermicro H12SSL-i with 128GB of DDR4-3200 memory. ESXi 8.0U2 installed fine and is joined to vCenter, which is installed as a VM on the default vSS management port group.

I want to install whatever is required to simulate an SDDC and cloud infrastructure, including NSX and vSAN. I'm entirely new to the VMware platform, but my background is HP c7000 BladeSystems, some HP Virtual Connect and Cisco Catalyst networking from over ten years ago.

My initial thought is to install NSX and vSAN directly into VMs on the bare-metal ESXi host alongside vCenter, and treat this level as the control/management plane for all of the nested ESXi infrastructure. Is this a recommended approach for a nested lab?

I'm figuring that NSX can be used to isolate the lab and that vSAN can present storage to the nested ESXi hosts using the NVMe-based ESA.
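As a rough sketch of the automation side (assuming pyVmomi, with the vCenter address, credentials and the VM name "nested-esxi-01" as placeholders), exposing hardware virtualization to a nested ESXi VM looks something like this:

```python
# Minimal pyVmomi sketch: enable nested hardware virtualization on a
# nested-ESXi VM so it can run guests of its own.
# vCenter address, credentials and the VM name are lab placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only: skip certificate checks
si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

# Find the nested ESXi VM by name
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "nested-esxi-01")

# Expose VT-x/AMD-V to the guest so it can run its own VMs
spec = vim.vm.ConfigSpec()
spec.nestedHVEnabled = True
vm.ReconfigVM_Task(spec=spec)

Disconnect(si)
```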

I have a 1TB Kingston KC3000 as the boot/VM store for the bare-metal ESXi host and would like to pass a second 1TB KC3000 through to the vSAN VM to be used as the VM store for all nested infrastructure. However, this seems like it will limit this vSAN install to a single node, as I cannot split this KC3000 into one NVMe namespace per vSAN node in a cluster, or install one SSD per vSAN node.

Just a note on NVMe namespace SSDs: the Samsung PM9A3 series includes an M.2 variant, and this would likely do the trick if I eventually need to cluster vSAN, though I'm not sure whether namespaces can be passed through.
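As a back-of-envelope way to see the constraint (node count and drive size below are assumptions for illustration, not a recommendation):

```python
# Back-of-envelope sketch: raw capacity per nested vSAN ESA node if one 1TB
# NVMe drive is carved into one namespace per node, versus one drive per node.
# Node count and drive size are assumptions for illustration only.
DRIVE_TB = 1.0   # e.g. a KC3000 / PM9A3-class drive
NODES = 3        # smallest sensible vSAN cluster for a lab

per_node_namespace = DRIVE_TB / NODES
print(f"Shared drive split into {NODES} namespaces: "
      f"{per_node_namespace:.2f} TB raw per nested node")
print(f"One drive per node: {DRIVE_TB:.2f} TB raw per node, "
      f"{DRIVE_TB * NODES:.2f} TB raw across the cluster "
      f"(usable capacity is lower once FTT policies are applied)")
```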

Any thoughts on this approach would be most welcome, as I haven't been able to find material that breaks the technology down in a way that makes sense for nested lab scenarios. Cheers.
 

nickwalt

Member
Oct 4, 2023
Brisbane
There is an official VMware process for automated setup of a VMware Cloud Foundation nested lab called VCF Lab Constructor (VLC), and it seems to do a good job of minimising the footprint of VCF, which includes NSX and vSAN. However, part of installing these components manually is learning how vSphere works, so I'll leave VLC for later, when I need to focus on more advanced lab work. The great thing is that I can review VLC to understand the components and how to construct the environment manually.
 

zachj

Active Member
Apr 17, 2019
William Lam has plenty of blog posts on how to do this on a NUC, and specifically on how to skinny down the management VMs (128GB won't go that far given how fat things like NSX and vCenter are).
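As a rough memory-budget sketch (every figure below is an assumed lab value, not official sizing; adjust to whatever those blog posts recommend):

```python
# Rough memory budget for a nested SDDC lab on a 128GB host.
# All per-VM footprints are assumptions for a skinnied-down lab,
# not official VMware sizing figures.
HOST_GB = 128
mgmt_vms_gb = {
    "vCenter (tiny deployment)": 14,
    "NSX Manager (single node, reduced)": 16,
    "DNS / jump box": 4,
}
nested_esxi_gb = 16   # assumed per nested ESXi host
nested_hosts = 4

mgmt_total = sum(mgmt_vms_gb.values())
nested_total = nested_esxi_gb * nested_hosts
used = mgmt_total + nested_total
print(f"Management VMs: {mgmt_total} GB")
print(f"{nested_hosts} nested ESXi hosts: {nested_total} GB")
print(f"Total {used} GB of {HOST_GB} GB, headroom {HOST_GB - used} GB")
```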
 

nickwalt

Member
Oct 4, 2023
Brisbane
Just upgraded the EPYC server by replacing the 4 x 32GB modules with 4 x 64GB modules, with another 2 x 64GB modules being added in a couple of weeks to take it to 384GB. After that I'll add another 2 x 64GB to max out the eight slots at 512GB of DDR4-2933.
 

zachj

Active Member
Apr 17, 2019
I maxed out my H12SSL with 8 x 64GB RDIMMs and honestly I regret it, because to get more capacity than 512GB I have to replace the whole lot with LRDIMMs :-(

If I had it to do over again I'd have gone with LRDIMMs from the start.
 

nickwalt

Member
Oct 4, 2023
Brisbane
Yeah, it is a problem. However, the price jump to 128GB modules is steep, and 512GB of memory is good for a 32-core EPYC.

Depending on how complex the Cisco labs get, I may end up with a Supermicro H11SSL-i (PCIe 3.0) or Tyan S8030GM2NE (PCIe 4.0) motherboard, an EPYC Rome, and 256GB of DDR4-2666 modules running a bare-metal installation of GNS3 or PNETLab, in addition to the current server.

Apparently, ESXi cannot run these large-scale labs efficiently enough and a bare-metal installation is required.
 

nickwalt

Member
Oct 4, 2023
Brisbane
Coming back to nested installations of ESXi running vCenter 8, NSX 4.1 and vSAN 8: I will test with the standard VM volume first, then consider used data centre SSDs with NVMe namespaces that can be passed through to the nested ESXi vSAN ESA VMs.
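A quick way to check what the host will actually let you hand through (a pyVmomi sketch; the vCenter address, credentials and host name are placeholders):

```python
# Sketch: list PCI devices on the bare-metal host that are flagged as
# passthrough-capable, to check whether the second NVMe drive could be
# passed to a nested vSAN ESA node. Names and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esxi01.lab.local")

# Map PCI id -> human-readable device name for nicer output
names = {d.id: d.deviceName for d in host.hardware.pciDevice}

for info in host.config.pciPassthruInfo:
    if info.passthruCapable:
        print(f"{info.id}  {names.get(info.id, '?')}  "
              f"enabled={info.passthruEnabled}")

Disconnect(si)
```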
 

zachj

Active Member
Apr 17, 2019
That's exactly what I'm going to do. I've got the drives in the mail already, just waiting for delivery.

Also got a fistful of Optane PMem (I'm trying it on my Xeon box, not my EPYC box).

It's not 100% clear to me whether namespaces can be passed through, because some drives support SR-IOV and namespaces while some support namespaces only. I'm not quite grasping why a drive would need to support both if namespaces can be passed through; at that point, wouldn't namespaces and SR-IOV be functionally redundant?
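One thing that can be checked from the API side (a pyVmomi sketch with placeholder names; it only shows how the host reports the two capabilities, it doesn't answer whether a namespace itself can be passed through):

```python
# Sketch: distinguish SR-IOV-capable PCI devices from plain
# passthrough-capable ones, since vSphere reports SR-IOV entries as a
# subtype of the passthrough info object. Names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esxi01.lab.local")

for info in host.config.pciPassthruInfo:
    if isinstance(info, vim.host.SriovInfo):
        print(f"{info.id}: SR-IOV capable={info.sriovCapable}, "
              f"enabled={info.sriovEnabled}")
    elif info.passthruCapable:
        print(f"{info.id}: plain passthrough capable")

Disconnect(si)
```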
 

nickwalt

Member
Oct 4, 2023
Brisbane
Yeah. SR-IOV, being an older technology that isn't specific to storage or NVMe, does seem redundant here. It could be that these drives are made to provide flexible support for customers who might juggle them across different product installations. NVMe and its integration into modern CPU and PCIe tech would be my first choice for standalone ESXi and vSAN applications. The tech simplifies so much and brings incredible performance. Mind-blowing how well CPU, PCIe and NVMe work together.

The NUMA design in EPYC adds to this interesting set of interconnection technologies.
 