Hi,
I am in the process of setting up 10GbE on my network. The process hasn't been straightforward, but I'm poring over forums trying to get the best out of my All-in-One server. The trouble I'm having is that VMs on the same server only read and write at about 1Gb/s, while physical connections to devices outside the server get closer to 10Gb/s.
Host is ESXi 7.0 U2. Storage VM is OmniOS r151040l with napp-it 21.06, sharing NFS, SMB and iSCSI, with 8 vCPUs and 48GB RAM.
I'm struggling with slow Windows 10 VMs backed by NFS datastores on a fast pool of 3x NVMe in RAID-Z1 with sync disabled. CrystalDiskMark reports C: drive read/write of only ~100MB/s, which is much lower than expected.
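For reference, this is roughly how I disabled and verified sync on the pool (the pool/filesystem name `nvmepool/nfs` is just a placeholder; adjust for your layout):

```shell
# Check the current sync setting on the NFS-shared filesystem
zfs get sync nvmepool/nfs

# Disable synchronous writes (trades write safety for speed on NFS)
zfs set sync=disabled nvmepool/nfs
```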
The same VM backed by a single local SSD (MX500 1TB) performs as expected: CrystalDiskMark C: drive read/write of ~450MB/s.
During the CDM benchmark on the NFS-backed VM, OmniOS CPU load is low, pool utilisation is low, and wait is low.
iperf3 from the Windows VM to the OmniOS VM reaches 23Gb/s, which is what I would expect for vmxnet3 over the software switch.
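For anyone wanting to reproduce the network test, these are the iperf3 commands I used (the OmniOS VM's IP of 192.168.1.10 is a placeholder):

```shell
# On the OmniOS VM: start the iperf3 server
iperf3 -s

# On the Windows VM: run the client with 4 parallel streams for 30 seconds
iperf3 -c 192.168.1.10 -P 4 -t 30
```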
iSCSI tests over the physical network are very good. CrystalDiskMark at MTU 9000 shows w:650MB/s and r:1050MB/s from targets on a range of pools (Optane 900p, MP510, HDD).
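For completeness, the jumbo-frame setting can be checked on the OmniOS side with dladm (the interface name `vmxnet3s0` is an assumption; yours may differ):

```shell
# Show the configured MTU on the vmxnet3 interface
dladm show-linkprop -p mtu vmxnet3s0

# Set jumbo frames (the interface must be unplumbed first on illumos)
dladm set-linkprop -p mtu=9000 vmxnet3s0
```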
SMB reads appear much slower than iSCSI (MTU 1500: w:700MB/s, r:300MB/s; MTU 9000: w:900MB/s, r:450MB/s), but that's another matter.
The OmniOS/napp-it 'base' tuning is applied, and the other NFS and SMB properties are left at their defaults.
I have run out of ideas for what to adjust in ESXi (latest VMware Tools, ethernet0.coalescingScheme = disabled). Since I am getting good speed over the physical network, I assume the bottleneck must be somewhere in ESXi.
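These are the relevant lines in the Windows VM's .vmx file as I have them now (the adapter index `ethernet0` is assumed to be the vmxnet3 NIC in question):

```
ethernet0.virtualDev = "vmxnet3"
ethernet0.coalescingScheme = "disabled"
```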
Any thoughts welcome.