Has anyone seen an issue with AMD EPYCs and ESXi where the scheduler doesn't use the 3rd and 4th NUMA nodes of the EPYC CPU? I've created several VMs with 4/8 cores and 10-12 GB of RAM (slightly less than one NUMA node, which would be 4 cores / 16 GB of RAM), but in esxtop the NHN column only ever shows 1, 2, or both (the "both" case being the FreeNAS VM, which has 24 GB of RAM assigned to it).
I've tried to pin a VM to specific NUMA nodes manually by setting the following advanced parameters on the VM (a fuller .vmx sketch is below the list):
- cpuid.coresPerSocket = 1
- numa.vcpu.preferHT = TRUE
- numa.nodeAffinity = 0 (I also tried 1, 2, and 3; esxtop always shows 1 or 2 in the NHN column regardless)
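For reference, here's roughly what the relevant .vmx stanza looks like for a 4-vCPU test VM I'm trying to pin to node 2 (edited while the VM is powered off; the node numbering 0-3 and the value "2" are just my example):

numa.nodeAffinity = "2"
numa.vcpu.preferHT = "TRUE"
cpuid.coresPerSocket = "1"

As I understand it, numa.nodeAffinity takes a comma-separated list of node numbers, so "2,3" should also be valid if you want to allow a pair of nodes.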
The BIOS seems to be configured correctly, or at least it exposes all four NUMA nodes to ESXi 6.7:
[root@esxi:~] esxcli hardware memory get | grep NUMA
NUMA Node Count: 4
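To double-check where a running VM actually landed (beyond the NHN column in esxtop), I've also been poking at sched-stats from the ESXi shell; I believe the NUMA client listing is:

[root@esxi:~] sched-stats -t numa-clients

which should print each VM's NUMA client with its current home node (treat the exact flag name as an assumption on my part).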
The main thing I want to accomplish is to run two VMs (one on each of the two unused nodes) for the XMR miner without taking a performance hit on the VMs running on the other two NUMA nodes.
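If numa.nodeAffinity keeps being ignored, the heavier-handed fallback I'm considering is plain scheduling affinity. Assuming the 7351P's 32 threads are numbered 0-31 with 8 logical CPUs per node (my assumption; I haven't verified the pCPU-to-node mapping on this board), pinning a miner VM to node 2 would look something like:

sched.cpu.affinity = "16-23"
sched.mem.affinity = "2"

As I understand it, hard affinity takes the VM out of the NUMA scheduler's hands entirely, which for a dedicated miner VM would actually be fine.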
Current setup:
CPU AMD Epyc 7351P
Mobo Supermicro H11SSL-i
4 x 16GB DDR4 ECC RAM
Hope someone can share some info in case I'm doing something wrong or missing something.
Thanks!