VMs MUCH Slower on all-in-one?


marcd

New Member
Mar 5, 2012
Got the all-in-one up and running, no problem. Then I ran some speed tests comparing VMs on local storage against VMs on the all-in-one NFS datastore.
X8DTi-F motherboard
Processor: 2 × Intel Xeon E5620
72 GB RAM
LSI SAS 9211-8i 8-port 6Gb/s controller card in IT mode
OpenIndiana 64-bit with 12 GB RAM

I have been copying a small VM all over the place and running HD Tune. Here is one set of results out of the roughly 30 runs I have done. The takeaway is that NFS-mounted VMs perform more slowly than ones on local storage.

WD Caviar Black Direct Attach to MB READ (SATA II WD Caviar Black HD 500 GB)
HD Tune Pro: VMware Virtual IDE Hard Drive Benchmark

Test capacity: full

Read transfer rate
Transfer Rate Minimum : 77.3 MB/s
Transfer Rate Maximum : 101.9 MB/s
Transfer Rate Average : 97.4 MB/s
Access Time : 8.50 ms
Burst Rate : 137.1 MB/s
CPU Usage : 11.3%



ZFS Mirror with SSD Read and SSD Write Cache, READ (this is an array of six of the exact same drives as above, with 2× 64 GB SATA II SSDs as read and write cache; you don't even want to know how bad it was without the cache)

HD Tune Pro: VMware Virtual IDE Hard Drive Benchmark

Test capacity: full

Read transfer rate
Transfer Rate Minimum : 62.4 MB/s
Transfer Rate Maximum : 184.4 MB/s
Transfer Rate Average : 136.7 MB/s
Access Time : 7.85 ms
Burst Rate : 184.4 MB/s
CPU Usage : 13.5%

ZFS Mirror with SSD Read and SSD Write Cache, WRITE
HD Tune Pro: VMware Virtual IDE Hard Drive Benchmark

Test capacity: full

Write transfer rate
Transfer Rate Minimum : 2.9 MB/s
Transfer Rate Maximum : 20.4 MB/s
Transfer Rate Average : 12.6 MB/s
Access Time : 6.02 ms
Burst Rate : 184.3 MB/s
CPU Usage : 1.6%


The write speed is just awful, and this is the best I could do on read speed: six drives in a mirrored ZFS pool. RAID-Z was no good either.

I started this quest because my VM under an actual workload (a small SQL Server install) was performing horribly, roughly twice as slow as when it was on direct-attached local storage.

A couple of questions:

1. Is this normal? Should I expect NFS-mounted VMs on an all-in-one to perform much slower? I can't afford 500 GB SATA III SSDs yet.
2. How do I accurately measure disk performance with VMs? What do people normally do? I just copied over a small VM and ran HD Tune.

Thanks, Marc
 

gea

Well-Known Member
Dec 31, 2010
You should not compare it like this.

You can only compare a dedicated SAN against a virtualized SAN (All-In-One).
But I suppose your problem is sync writes. ESXi shared storage over NFS always uses sync writes, which means that every write request must be committed to disk before the next one can occur.

Without sync writes, all writes go to RAM first, where they are collected and flushed to disk in optimized batches after a few seconds.
You can check this by disabling the sync property on your NFS folder. There are reports of 100× better values afterwards.
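
For example, on the OpenIndiana side (tank/nfs here is only a placeholder for whatever dataset you export to ESXi):

# show the current sync setting of the NFS-shared dataset
zfs get sync tank/nfs

# disable sync writes for testing; writes are then buffered in RAM
zfs set sync=disabled tank/nfs

# restore the default behaviour when you are done testing
zfs set sync=standard tank/nfs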

If you disable sync, the last few seconds of written data can be lost on a power failure.
With ZFS you can use an SSD, or better a RAM-based log device, for the best data security and performance.
But first, disable sync for your tests.
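
If disabling sync fixes the write numbers, a dedicated log device is the safer long-term answer. A rough sketch, assuming the pool is called tank and the SSD shows up as c4t1d0 (both placeholders for your own names):

# add the SSD as a separate ZFS intent log (SLOG)
zpool add tank log c4t1d0

# confirm the log vdev is attached, then re-enable sync writes
zpool status tank
zfs set sync=standard tank/nfs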
 

marcd

New Member
Mar 5, 2012
Thanks, yes, you are quite right: it is not really a fair comparison to test. I have always gotten terrible results on NFS versus direct-attached or iSCSI storage with VMs. Testing on the all-in-one at least removes one variable, the network hardware. I will continue to test and experiment.