ESXi and RAID 1 in IR Mode - Which Card to Get?


mda
New Member · Mar 18, 2021
Hi All,

I'm looking at purchasing an LSI/Avago/Broadcom RAID card for a development server that will be running ESXi 6.5/6.7.

I currently have an LSI 9211 with 2 Samsung 860 SSDs in RAID 1.

The setup does work, but for some reason performance is very poor for everything except sequential reads, compared to running the drives standalone on my AMD X470 board's SATA ports.

Running a basic CrystalDiskMark on two identical VMs shows a big difference between the motherboard SATA and the LSI 9211 RAID 1.

Also, restoring a 50GB MySQL table is taking far too long compared to another bare metal MySQL install I have on hand.

I'm guessing it's either:

1. the LSI 9211 driver that ESXi is using
2. the 9211 itself

Running the 9211 on a Windows 10 build shows only a minimal performance loss vs the onboard SATA (measured with CrystalDiskMark), which leads me to believe that the 9211 is just not well optimized for ESXi.

Question: Which HBA/RAID card should I get for decent (not necessarily maxed-out) SSD speeds? Is getting branded ones, e.g. the Lenovo 530i/930i, going to be a problem?

Other relevant specs:
CPU - AMD 2700X
Board - Gigabyte X470
RAM - 64GB
GPU - Nvidia GT710
RAID Card - LSI 9211 IR P20 (in the CPU x8 slot)

While this may be an ESXi-related post, I do believe it still belongs in the HBA/RAID card subforum...
Dear mods, kindly move it if that's not the case :]

Thank you so much!
 

mda
> What stripe size are you using for the RAID 1?
>vmfs uses 1 MByte chunk sizes (VMware Knowledge Base)
>ntfs uses 4 KByte chunk sizes (Default cluster size for NTFS, FAT, and exFAT)
Hi, thank you for your reply.

Unfortunately, the 9211 in IR mode does not have this option. I looked in the RAID configuration utility you can access by hitting Ctrl+C before the system boots -- there is nowhere to change the stripe size, nor did I have the option to set it when I first created the RAID 1.

In any case, what should I be using? It is odd that VMs on this datastore show very bad performance, while the speed on Win10 does not seem so bad.

I'm currently on the latest firmware/NVDATA for the 9211:
v 7.39.02.00
20.00.07.00 - IR (P20)
14.01.00.09

Maybe this RAID card is best suited as a 'dumb' (pardon the term) HBA?


Attached is an image of a fio command run on identical CentOS VMs showing the large disparity in speed:
The 850 EVO was 4-5x faster than the 860 EVO RAID 1.
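The exact fio command from the screenshot isn't reproduced here; a rough equivalent for a 4K random-write pass inside each VM would look something like this (the target path, size, and runtime are my assumptions, not the original parameters):

```shell
# Sketch of a 4K random-write fio job to compare the two VMs.
# /tmp/fio.test and --size=4G are placeholders -- point it at whichever
# filesystem sits on the datastore you want to measure.
fio --name=randwrite --filename=/tmp/fio.test --size=4G \
    --rw=randwrite --bs=4k --direct=1 --ioengine=libaio \
    --iodepth=32 --runtime=60 --time_based --group_reporting
```

Random 4K writes at queue depth are exactly where the 9211 RAID 1 datastore fell apart for me, so this workload shows the gap much more clearly than a sequential pass.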

Here are two CentOS Disk Benchmarks.

Right - LSI 9211 RAID 1
Left - Software RAID with mdadm using two ESXi hard disks on two different datastores
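For reference, the software mirror on the left was built along these lines (the device names are assumptions -- substitute whatever ESXi presents the two virtual disks as):

```shell
# Sketch: mirror two virtual disks that ESXi presents from different datastores.
# /dev/sdb and /dev/sdc are placeholders for the two ESXi hard disks.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
cat /proc/mdstat                        # wait for the initial resync to finish
mkfs.xfs /dev/md0 && mount /dev/md0 /mnt/bench
```

Benchmarking before the resync completes will understate the mirror's real performance, so it's worth watching /proc/mdstat first.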
 

mda
So this is RAID 10 with two Crucial MX500 1TBs and two 860 EVO 1TBs on a VM -- the only VM running on the machine... The read speeds have definitely improved, but why are the write speeds still bad?

Maybe PCIe 2.0 is limiting the bandwidth I can write?

A disk-write benchmark on CentOS also shows the same impressive reads but bottlenecked write speeds.
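A quick sanity check on the PCIe theory, using nominal link rates rather than anything measured: PCIe 2.0 runs at 5 GT/s per lane with 8b/10b encoding, so roughly 500 MB/s usable per lane.

```shell
# Back-of-the-envelope: is the PCIe 2.0 x8 slot the write bottleneck?
# PCIe 2.0: 5 GT/s per lane, 8b/10b encoding -> ~500 MB/s usable per lane.
slot_mb_s=$((8 * 500))    # 9211 in an x8 slot: ~4000 MB/s
sata_mb_s=$((4 * 550))    # four SATA III SSDs flat out: ~2200 MB/s
echo "slot ~${slot_mb_s} MB/s vs drives ~${sata_mb_s} MB/s"
```

The x8 Gen2 slot has roughly 2x headroom over what four SATA SSDs can push, so the slot itself shouldn't be capping writes -- which points back at the card's firmware/driver behavior rather than the bus.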

I have tried this with two separate LSI 9211s and separate SAS-to-SATA cables on two different machines (one with a 2700X, the other a 3700X, on different-brand X470 boards).

Not sure what the issue is at this point.

Hope someone has more insight on this. That said, I do have a relatively more modern 9440-8i on the way; hoping that will fix things.

Thank you!
 
