Looking for RAID Controller for SATA SSD RAID5 ESXi 7


botlike

New Member
Oct 3, 2021
Hello,

I'm trying to find a RAID controller for my current home server. The goal is a single, fast storage volume with redundancy. I really don't know what specs or which controllers to look for, so I need some help here. First, here's my system:
AsRock Rack X470D4U2-2T
AMD Ryzen 7 1700
64GB DDR4-ECC (some Kingston with Micron E chips)

For storage, I purchased one NVMe SSD and combined it with some "leftovers". Therefore, I got:
- WD Blue SN550 1TB
- Samsung 840 Evo 120GB
- Samsung 750 Evo 500GB

Due to limitations of my mainboard, that NVMe drive can only use 2 PCIe lanes, so it's limited to around 1750 MB/s (that speed is fine though, I'm not looking to max it out). I already purchased some SanDisk Ultra 3D 1TB drives to put into the planned array.

A controller with 4 internal ports (4i) would be okay, but I think I want 8i, with SATA 6Gb/s support so the SSDs aren't bottlenecked. Which controllers would be suitable for me?

Botlike


// EDIT:
Yes, I'm fine with used hardware.
I did some more research and found the "Lenovo 530-8i" as a default recommendation. Does it work in non-Lenovo systems?
I remember buying a Dell PERC H200 (or something like that) which refused to be recognized. I had to isolate 2 pins on the PCIe connector to make it work. I then flashed it to HBA mode because that's what I intended to do anyway. After flashing, I no longer needed to isolate those 2 pins. How about the Lenovo card, though?
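For reference, the H200 flash I did followed the usual LSI SAS2008 IT-mode crossflash procedure, which looks roughly like this (a sketch from memory; the exact firmware file names, and whether you need a UEFI shell, vary by card and board):

```shell
# Sketch of the typical LSI SAS2008 (Dell H200) IT-mode crossflash.
# Firmware/BIOS file names below are examples, not exact downloads.

# 1. List installed LSI controllers and note the SAS address printed.
sas2flash -listall

# 2. Erase the existing (Dell) firmware region. "-o" enables advanced mode.
sas2flash -o -e 6

# 3. Flash the generic LSI 9211-8i IT-mode firmware (and optionally the
#    boot BIOS, only needed if you want to boot from attached disks).
sas2flash -o -f 2118it.bin -b mptsas2.rom

# 4. Restore the original SAS address noted in step 1 (placeholder shown).
sas2flash -o -sasadd 500605b0xxxxxxxx
```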
 
Last edited:

mphelpsmd

New Member
Dec 22, 2020
Baltimore, MD USA
I know you mention the desire to have a hardware RAID card, but I'd like to propose an alternative: Software RAID.

Just use your motherboard's six SATA3 ports for the SanDisk drives; you can also add the WD Blue. (The 500GB-and-smaller drives will be less useful in a RAID array, considering your other drives are 1TB.)

All modern operating systems (such as Windows and Linux) have very good software RAID. It's fast and enormously flexible. Unlike a hardware solution, you can easily move the drives to another system and re-assemble the array. In the future, you can add an external enclosure to expand your storage and still use the same software RAID over a SAS controller. (This is actually what I do: I have a Linux server connected to an external array using software RAID6.)
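On Linux this means mdadm. Creating a RAID5 from three 1TB drives and later re-assembling it on another machine looks roughly like this (a sketch; the device names /dev/sdb through /dev/sdd are placeholders for your actual disks):

```shell
# Sketch: Linux software RAID5 with mdadm (device names are placeholders).

# Create a 3-disk RAID5 array from the 1 TB SSDs.
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd

# Put a filesystem on it and mount it.
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/storage

# Persist the array definition so it assembles at boot.
mdadm --detail --scan >> /etc/mdadm/mdadm.conf

# After moving the disks to another system, re-assemble from metadata:
mdadm --assemble --scan
```

Expanding later is also built in: add a fourth disk with `mdadm --add /dev/md0 /dev/sde`, then `mdadm --grow /dev/md0 --raid-devices=4`.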

This will give you plenty of speed, along with an inexpensive, flexible, and scalable solution.

Michael
 

botlike

New Member
Oct 3, 2021
Thanks for your reply. I know about software RAID and actually use it on some of my other servers. But on ESXi there is no software RAID, so I have to use a supported RAID controller.
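Once I have a card, I assume I can verify that ESXi actually sees it (and the array it exposes) from the host shell with something like:

```shell
# Sketch: checking controller/volume visibility on an ESXi host.

# List the storage adapters ESXi has recognized; the RAID card should
# show up here along with its driver (e.g. lsi_mr3 for MegaRAID cards).
esxcli storage core adapter list

# List the devices ESXi sees; a hardware RAID5 appears as one device.
esxcli storage core device list
```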
 

gea

Well-Known Member
Dec 31, 2010
DE
An option would be a ZFS storage VM with HBA or raw-disk passthrough. This gives you near-barebone storage performance plus the superior security and features of ZFS, such as an unlimited number of long-lived snapshots, and backup/restore/move via ZFS replication or via SMB (Windows "Previous Versions"). ESXi then accesses the storage via NFS.

You should use a minimalistic enterprise/storage OS with a small resource footprint, like OmniOS (a Solaris fork, OmniOS Community Edition), with a native ZFS environment. I first published this solution in 2008 and have offered a ready-to-use free server template ever since: https://napp-it.org/doc/downloads/napp-in-one.pdf
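The ZFS side of this is only a few commands. Inside the storage VM it looks roughly like this (a sketch: pool/dataset names, illumos-style disk names, and the ESXi addresses are examples; napp-it does all of this via its web GUI):

```shell
# Sketch: inside the ZFS storage VM (OmniOS or another illumos system).

# Create a raidz1 (RAID5-like) pool from the passed-through disks.
zpool create tank raidz c2t0d0 c2t1d0 c2t2d0

# Create a dataset and share it over NFS.
zfs create tank/vmstore
zfs set sharenfs=on tank/vmstore

# On the ESXi host: mount the NFS export as a datastore.
esxcli storage nfs add -H 192.168.0.10 -s /tank/vmstore -v zfs-datastore
```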
 

botlike

New Member
Oct 3, 2021
@gea Thanks for your reply. Unfortunately, that's not a solution for me for two reasons:

1. I don't like passing hardware through for storage and then exporting it back to ESXi; it always causes problems when ESXi boots up. (Especially with single-drive passthrough, where I lose the ability to monitor the drives via S.M.A.R.T., which is really important to me.) Either way, storage-wise I would be bound to the maximum of what ESXi supports as internal network bandwidth, which is (afaik) 10 Gbit/s. A single NVMe drive is already faster than that. I'm not looking to maximize speed, but that would already be a downgrade from my current NVMe. NFS also isn't made for that, so I'd rather use iSCSI - but again, 10 Gbit/s is not what I want.

2. ZFS. I'm not really sure how mature ZFS on Linux is right now. I used it about 6 years ago and was not happy (at least not on Linux). People tell me it's better now, but compared to other software solutions it still doesn't really support expanding an array. (I know there's word that it's supported now, but not all distributions have it yet, and even then it's not mature enough for me. Stability is still key!)

I really appreciate you guys giving me alternative solutions but I've already set my mind to that approach. I know your ideas are not bad but it's just not what I am looking for.

If ESXi natively supported software RAID, that's what I would choose, but that's not possible.