Maximum IOPS out of a VM?

wardtj

Member
Jan 23, 2015
Not sure if this is the best forum for this, but...

I have been testing my new home-built FC arrays. I'm running ESXi 6.0 with a Fibre Channel (4x 8Gbps) setup to a storage head. The storage head has a RAID 50 made of two sets of six 300GB Intel DC S3500s. I'm using an LSI 9271-8i with 1GB flash cache to connect to the enclosures (through the appropriate internal-to-external SAS converter). I have 24Gbps to each enclosure (just one connector each).

At a queue depth of 32 with 4K blocks, maximum read IOPS averages 183,077 and write averages 162,203. I have been using CrystalDiskMark and ATTO, and they both give approximately the same results.

On pure sequential reads I average 2,375MB/s, and writes come in at 2,108MB/s.
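
As a rough sanity check, 183,077 read IOPS at 4K works out to only about 750MB/s (183,077 x 4,096 bytes), so the random tests are nowhere near saturating the 4x 8Gbps links; whatever I'm hitting looks like an IOPS ceiling rather than a bandwidth one.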

Has anyone else seen something similar? I'm tuning for best performance and have made a number of ESXi and Linux kernel changes on the FC side to get this far.
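
To give an idea of the kind of ESXi-side knobs I mean, these are the usual suspects (the module name assumes a QLogic HBA, the device ID is a placeholder, and module parameter changes need a reboot):

    esxcli system module parameters set -m qlnativefc -p ql2xmaxqdepth=64
    esxcli storage core device set -d naa.xxxxxxxxxxxxxxxx -O 64

The first raises the HBA queue depth; the second raises the per-device number of outstanding requests (Disk.SchedNumReqOutstanding) so the scheduler doesn't cap a device below the HBA setting.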

I have an NVMe PCIe card local in another machine, and it turns in similar results, so I'm not sure whether I'm just hitting a VMware limit or not...

Thoughts?

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
I think you're more likely hitting the limits of the benchmarking tools, and possibly of QD32, than of ESXi itself. I don't know enough about that RAID card/config to suggest changes or speak to its limitations.

Have you tried IOMeter or FIO?
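
If you do try fio inside the guest, something like this would mirror the 4K QD32 random-read test (the file path and size are placeholders; point it at a scratch file rather than a raw device, since the write variant is destructive):

    fio --name=randread-qd32 --filename=/path/to/testfile --size=10G \
        --rw=randread --bs=4k --iodepth=32 --numjobs=1 --ioengine=libaio \
        --direct=1 --runtime=60 --time_based --group_reporting

Scaling up --numjobs and/or --iodepth from there is the quickest way to tell whether a single QD32 worker is the bottleneck rather than the array or ESXi.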

whitey

Moderator
Jun 30, 2014
I know several years ago VMware touted 1 million IOPS out of a single VM, so I'd imagine it's even higher now, provided you have the proper infra backending it.