Not sure if this is the best forum for this, but here goes.
I have been testing my new home-built FC arrays. I'm running ESXi 6.0 with Fibre Channel (4x8Gbps) to a storage head. The storage head has a RAID 50 built from two sets of six 300GB Intel DC S3500s. I'm using an LSI 9271-8i with 1GB of flash cache to connect to the enclosures (through the appropriate internal-to-external SAS converter), which gives me 24Gbps to each enclosure (just one connector each).
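For context, here's a quick back-of-the-envelope of the wire-speed ceilings involved (a rough Python sketch on my part; the per-link rates are just the standard 8b/10b payload figures, nothing measured):

```python
# Rough wire-speed ceilings for this setup. Constants are the usual
# 8b/10b payload rates, so treat everything here as an approximation.
FC_LINKS = 4
FC_MB_S_PER_LINK = 800        # 8GFC: 8.5 GBaud, 8b/10b -> ~800 MB/s per direction
SAS_LANES = 4                 # one external SAS connector = 4 lanes
SAS_MB_S_PER_LANE = 600       # 6Gbps SAS, 8b/10b -> ~600 MB/s per lane

fc_ceiling = FC_LINKS * FC_MB_S_PER_LINK            # 3200 MB/s across the FC fabric
sas_per_enclosure = SAS_LANES * SAS_MB_S_PER_LANE   # 2400 MB/s into each enclosure

print(f"FC fabric ceiling:      {fc_ceiling} MB/s")
print(f"SAS link per enclosure: {sas_per_enclosure} MB/s (x2 enclosures)")
```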
At 4K with a queue depth of 32, the maximum read IOPS averages 183,077 and write averages 162,203. I have been using CrystalDiskMark and ATTO, and they both give approximately the same results.
On pure sequential transfers, I average 2375MB/s read and 2108MB/s write.
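Doing the math against those ceilings (again just a rough sketch, and it assumes round-robin multipathing is actually spreading I/O across all four FC links):

```python
# Sanity-check the measured numbers against the ceilings from the sketch above.
# Assumes round-robin multipathing spreads I/O across all four FC links.
read_iops, write_iops = 183_077, 162_203
io_bytes = 4 * 1024                          # 4 KiB per I/O

rand_read_mb_s = read_iops * io_bytes / 1e6  # ~750 MB/s of actual data movement
seq_read_mb_s = 2375
fc_ceiling = 3200                            # 4x8Gbps FC after 8b/10b

print(f"4K QD32 random read moves ~{rand_read_mb_s:.0f} MB/s -- nowhere near the wire limit")
print(f"Sequential read is {seq_read_mb_s / fc_ceiling:.0%} of the FC fabric ceiling")
```

So the random I/O numbers aren't bandwidth-bound, and sequential is sitting at roughly three-quarters of what the fabric could theoretically carry.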
Has anyone else seen something similar? I'm tuning for best performance and have made a number of ESXi and Linux kernel changes on the FC side to get this far.
I have an NVMe PCIe card that's local on another machine, and it turns in similar results, so I'm not sure whether I'm just hitting a VMware limit or not...
Thoughts?