Choosing a RAID card for an H11DSi build


compgeek89

Member
Mar 2, 2019
I have an H11DSi-NT in a CSE-826 chassis with a BPN-SAS3-826EL1 backplane. It also has the two rear 2.5" bays/backplane. I'd like to run NVMe for the boot/OS side and then use the front 12 bays for storage, probably split into more than one array if possible. I have used Dell PERC gear at work but am not all that familiar with this space otherwise. I know LSI seems to be the big player for add-in cards. Any suggestions? Also, what will I need to support multiple NVMe drives in RAID? I know the board has a single NVMe slot; what would be the best way to add connectivity for more?

Appreciate the help!
 

DavidWJohnston

Active Member
Sep 30, 2020
In order to provide a better answer, could you share the following info if possible:

  1. How will you be using the server? (Ex: NAS, VM Host, ?)
  2. What OSes do you plan to run on the bare metal, and in VMs, if any? (Ex: ESXi, Proxmox, Windows)
  3. What types of things will you be storing? (Ex: Movies, music, VM disks, Linux ISOs?)
  4. What will the I/O workload look like (Ex: Write-intensive, mostly sequential, random I/O from VM boot drives?)
  5. What kind of 3.5" drives do you want to run? (Ex: SAS, SATA)
  6. What is your budget like?
Some of those EPYC systems support NVMe RAID natively and will allow you to create a bootable mirror. That board has 2x OCuLink connectors, which I believe are PCIe and should support NVMe SSDs.

The hardware you've got has a lot of potential (12Gbps SAS, etc.), but to truly unlock the performance of that system, careful planning and part selection will be needed to match your workload.
 

compgeek89

Member
Mar 2, 2019
DavidWJohnston said: (questions quoted above)
Thank you so much for taking the time to ask!

Ultimately the server will be for home lab experimentation, and the final configuration will probably depend on how that goes. But the current thought:

1. VM Host
2. Probably Ubuntu Server with Proxmox? I haven't used either yet, but I don't want to use Windows, which is what I'm most familiar with. And I'm on a budget.
3. Storage space is probably going to be primarily for my pictures and videos and other personal data, basically a NAS-like usage.
4. Probably a mix, might have VM boot drives and basic file storage on separate arrays.
5. Hoping to be able to run either SATA or SAS depending on what I have on hand. I will probably have both an SSD array and a spinning-disk array, in addition to the NVMe array.
6. Budget is somewhat tight, so any extra money spent would need to have good technical merit.

Let me know if you have any other questions or need clarification.
 

DavidWJohnston

Active Member
Sep 30, 2020
OK, is your backplane the version that has NVMe support (the -N4 version)? https://www.supermicro.com/manuals/other/BPN-SAS3-826EL1-N4.pdf

Or is it a variant without NVMe support?

This is important, because if your backplane does not support NVMe, even if you were to buy an expensive NVMe RAID card, there'd be nowhere to physically plug/mount the drives. I don't believe your motherboard supports native NVMe RAID (whatever the AMD equivalent of VROC is).

So, how bad do you want NVMe RAID, if your backplane and motherboard don't support it? About the only option would be a card like this: https://www.amazon.ca/Highpoint-SSD7101A-1-Dedicated-32Gbps-Controller/dp/B073W71K4Z?th=1

Once I know your feelings about this I'll be able to suggest a solution for you.
 

oneplane

Well-Known Member
Jul 23, 2021
What are you actually intending to do with the storage? Because without Windows you can basically do storage redundancy with ZFS, and you just need an HBA for that, not a RAID controller. And with NVMe an HBA is essentially not even a thing; that's just the PCIe bus.

So essentially, if you are going to run Linux or BSD or Solaris, you just need to have enough connectivity for the drives and enough cores on the CPU.
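To make that concrete, here's roughly what "just an HBA plus ZFS" ends up looking like. This is a minimal sketch only, wrapped in Python purely for illustration; the pool name and device paths are hypothetical placeholders, and it assumes the ZFS userland tools (zpool/zfs) are installed and it runs as root.

```python
#!/usr/bin/env python3
"""Minimal sketch: redundancy with ZFS on plain HBA-attached disks.
Pool name and device paths are hypothetical placeholders; assumes
zpool/zfs are installed and this runs with root privileges."""
import subprocess

DISKS = ["/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd"]  # placeholder disks behind the HBA

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# raidz2 survives any two disk failures; a two-disk "mirror" vdev would be the simplest alternative.
run(["zpool", "create", "-o", "ashift=12", "tank", "raidz2", *DISKS])

# Separate datasets so bulk media and VM disks can be tuned independently.
run(["zfs", "create", "-o", "recordsize=1M", "tank/media"])
run(["zfs", "create", "tank/vmdisks"])

# Periodic scrubs give you the integrity checking a RAID card won't.
run(["zpool", "scrub", "tank"])
```

No RAID controller is involved anywhere in that; the HBA just hands the raw disks to the OS.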
 

DavidWJohnston

Active Member
Sep 30, 2020
@oneplane - This was posted earlier in the thread. He's going to have a combination of NAS (music, etc) and VM boot drives, in an experimental homelab environment.

What I am trying to decide is whether to recommend an expensive tri-mode (SAS/SATA/NVMe) RAID card like the LSI 9400 series, or the much cheaper SAS-SATA RAID cards like the LSI 9362. If his backplane supports NVMe, it may be worth the extra expense, as he specifically requested RAID for all 3 disk types.

This is why I asked for clarification on the backplane model, because a Google search reveals there are variants of the BPN-SAS3-826EL1 with and without NVMe support.

Thanks for the suggestion of ZFS. This is a good option. It is more complex though, and will not allow experimenting with ESXi on the bare metal while maintaining storage redundancy for VM boot drives.

I was also going to consider recommending an Optane M.2 SSD for the hypervisor OS, instead of storing it on the array. Their endurance and reliability are superb, which helps make up for the lack of RAID. Keeping your hypervisor boot drive outside of your RAID card's arrays has some advantages.

I run ESXi; it's the gold standard for on-prem in the enterprise, and learning how to use it is one of the most valuable skills you can get from a homelab. But I know Proxmox, Unraid, XCP-ng and such are popular around here too.
 

oneplane

Well-Known Member
Jul 23, 2021
@DavidWJohnston I was asking about the redundancy/integrity/availability requirements for the storage; if it's hot storage vs. cold storage, there would be something to be said for either solution.

As for what would make sense for the entire setup, I'd say the classic single-box-ZFS-hypervisor would be the most useful. While virtualisation at the office might certainly be an interesting factor, I'm not smelling any of the classic signs of 'need it to level up' ;-) I'd say that knowing the concepts of virtualisation, block storage and networking gets you 75% of the way there, and the last 25% will be specific to any on-prem, cloud or managed colocation setup. Around here, on-prem is not very popular; generally we're seeing the dismissal of one VMware/Hyper-V engineer per quarter, with the target of having 2 remaining to keep an archive running for 2 to 5 years (cold VM storage for stuff that was migrated away). They are essentially going the same way as the DBA (but not all DBAs are the same; I'm talking about the RDBMS herders, not the application managers ;-) ).

Spending money on disks, CPU and memory would be my priority here, depending on how the data is handled (i.e. whether there is a backup, since block-level redundancy, regardless of the implementation, is not backup). I'd go for either a (soft)RAID1 for the hypervisor while passing all disks to a ZFS VM, or Proxmox-on-ZFS with no (soft)RAID at all, only raidz2/raidz3 depending on the vdev size.
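To put rough numbers on the raidz2/raidz3 trade-off at different vdev widths (back-of-the-envelope only: this ignores ZFS metadata, padding and allocation overhead, so real usable space is lower, and the 8 TB disk size is just an example):

```python
# Rough usable-capacity estimate for a single raidz vdev:
# parity disks subtracted, all ZFS overhead ignored.
def raidz_usable_tb(width: int, disk_tb: float, parity: int) -> float:
    return (width - parity) * disk_tb

for parity, name in [(2, "raidz2"), (3, "raidz3")]:
    for width in (6, 12):
        usable = raidz_usable_tb(width, 8.0, parity)
        print(f"{width} x 8 TB in one {name} vdev: ~{usable:.0f} TB usable, "
              f"survives {parity} disk failures")
```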

As for hardware RAID in general, I've found it to be practically pointless, save for two scenarios:

1. You are using Windows and no hypervisor, on a single node, with no external storage (DAS, NAS, SAN) for some reason
2. You are using some sort of certified/packaged/externally required stack that doesn't allow you to nuke the RAID controllers

In pretty much all other scenarios you'll either have a SAN, or you'll use software storage management. The SAN itself might of course internally do RAID-y things, be it controller redundancy with LUN virtualisation (instead of full multipath), FCoE emulation, or some other shady stuff where some work is being done to abstract the disks away. I'm generally treating it as a black box (and since most of them are delivered and contracted that way... it's not really a choice either) that uses internal magic to do the block replication/redundancy/availability for me. Technically, we could probably argue that a COTS SAN has at least some classic RAID component to it.
 

compgeek89

Member
Mar 2, 2019
DavidWJohnston said: (quoted above)
So, I am not next to the chassis at the moment to check, but I don't believe the backplane is the -N4 model. However, I thought I had heard that the two rear 2.5" bays supported NVMe? Second-guessing myself on that now, though. Perhaps you would have a better idea.

And the answer regarding "how bad do I want it" would be... not hundreds of dollars bad!

oneplane said: (quoted above)
I am not 100% sure what I will end up running at the base; Proxmox, ESXi and plain Linux are all in play at this point.


DavidWJohnston said: (quoted above)
If the NVMe RAID is too painful, a single M.2 would probably be sufficient for my use case at this point, as long as I can get good backups. The Optane is a good option to keep on the table depending on how the above works out.



oneplane said: (quoted above)
I've got two EPYC 7551s and 128GB of RAM at the moment. Running something like Proxmox on ZFS definitely has potential, though I am not sure if that will be what I want in the end or not. Basically I want to be able to spin up VMs for a variety of use cases at will (NAS, firewall, DC, etc.) depending on what I want to do, so I want a good platform for that. At the office we have Windows Server 2022 Datacenter, so we just spin up Hyper-V VMs any time we need something. I want that flexibility, but on Linux, with minimal cost (e.g. if I do ESXi, I'm doing the free version).

Greatly appreciate all the input. I am thankful I asked here, because there's clearly a ton I don't know and haven't considered!
 

oneplane

Well-Known Member
Jul 23, 2021
I think the free version of ESXi is dead in the water, but Proxmox will do everything you want, including ZFS. As long as you get the connectivity for the backplane sorted (HBA, PCIe or otherwise) you'll have that covered.

Depending on what disk layout you are interested in, it might be possible to put Proxmox on some low-cost SATA SSDs in RAID1 (software RAID, easy to set up in the installer) and not use any storage disks for that. It won't really impact your performance anyway, and with the benefit of being able to boot even if your hypervisor storage gets sad, it's an easy $50 to spend. Then putting all the other disks in a big pool will give you maximum flexibility; you can make as many virtual machines, disks, raw disks, filesystem disks, etc. as you want with maximum performance. If you want some guaranteed performance, you could pin a few CPU cores to the hypervisor so PCIe and ZFS are never starved for processing power.

I do remember that with some of the EPYC configurations, the PCIe lane-to-core mapping through the I/O die can become important when you use a lot of NVMe, especially under high load. So check the topology if you can (I think this comes up frequently enough on the STH homepage) so you know how everything is connected to each CPU and can avoid loading up the wrong cores.
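A quick way to check that from a running Linux install, instead of digging through the block diagram, is a read-only walk of sysfs. Minimal sketch, assuming standard Linux sysfs paths and nothing vendor-specific:

```python
#!/usr/bin/env python3
"""Read-only topology check: which NUMA node each NVMe controller sits on,
and which CPUs belong to each node. Linux sysfs only; changes nothing."""
from pathlib import Path

for dev in sorted(Path("/sys/class/nvme").glob("nvme[0-9]*")):
    node = (dev / "device" / "numa_node").read_text().strip()
    model = (dev / "model").read_text().strip()
    print(f"{dev.name}: {model} -> NUMA node {node}")

for node in sorted(Path("/sys/devices/system/node").glob("node[0-9]*")):
    cpus = (node / "cpulist").read_text().strip()
    print(f"{node.name}: CPUs {cpus}")
```

Whichever NUMA node the OCuLink-attached drives land on is where you'd want to keep the storage-heavy VMs and the ZFS threads.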
 

compgeek89

Member
Mar 2, 2019
NVMe HBAs are a thing, but you usually don't need them, especially given that the H11DSi has built-in NVMe ports.

The rear 2x NVMe part is MCP-220-82619-0N.
You are right. I am thinking the H11DSi NVMe ports will cover me on that end, even if I run NVMe in those rear bays (which I hope to). So we can probably do just SATA/SAS on the card.
 

compgeek89

Member
Mar 2, 2019
oneplane said: (quoted above)
All good points. Here's the breakdown. Looks like my M.2 slot is on CPU1, along with most of my PCIe lanes (RAID card), but the NVMe ports are on CPU2.

[Attachment: H11DSi-NT block diagram showing which PCIe lanes and ports hang off each CPU]
 

oneplane

Well-Known Member
Jul 23, 2021
Looks like they are also all routed to die 1, so in a way that is beneficial:

- Leave the cores on that die for NVMe I/O
- Put Proxmox/ZFS core affinity on different cores on the same die if enough room is available (depending on the number of NVMe drives)
- Put I/O heavy machines on the same CPU but on a different die

Benefits are mostly:

- Base NVMe I/O doesn't have to travel to other dies or the other CPU
- DRAM is going to be used (at least as a block cache) so having all of that on the same NUMA node helps a lot
- Any storage interrupt-based routing can be scoped to one CPU, so you'll get maximum performance on the other one, and on the 'storage' CPU you can probably save a ton by scoping it to a single die (see the sketch below for a quick way to check where NVMe interrupts currently land)
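Here's the quick check mentioned above: a read-only parse of /proc/interrupts showing which CPUs are currently servicing NVMe interrupts. Linux only, and just a sketch; adjust the "nvme" match if your controllers show up under a different name.

```python
#!/usr/bin/env python3
"""Read-only sketch: count NVMe interrupts per CPU from /proc/interrupts,
to confirm storage IRQs really land on the CPU/die you expect."""
from collections import Counter

per_cpu = Counter()
with open("/proc/interrupts") as f:
    cpus = f.readline().split()  # header row: CPU0 CPU1 ...
    for line in f:
        if "nvme" not in line:
            continue
        fields = line.split()    # fields[0] is the IRQ label, then one count per CPU
        for cpu, count in zip(cpus, fields[1:1 + len(cpus)]):
            per_cpu[cpu] += int(count)

for cpu, count in per_cpu.most_common(10):
    print(f"{cpu}: {count} nvme interrupts so far")
```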

While I don't have the details at hand, there was some issue with NVMe storage and lane or interrupt clashing; I'm not sure if that was software routing in UEFI, in the I/O die, or elsewhere, but even on Supermicro hardware that used to be something that didn't always work out of the box, regardless of the hardware combination. I think even L1Techs and LTT had videos on that. On the other hand, SM has pushed plenty of updates since then, and after some configuration fixes performance was really amazing. The cases that come to mind were also all Linux cases where it was fixed, but on BSD and Windows it was still iffy (Windows never performed, and BSD was inconsistent). So in a way, Linux and ZFS (or even ext4+LVM+md) is still the best way to go.

Edit: your PCIe x8 slot on CPU2 can do NVMe too; that just requires a (mostly) passive adapter you plug into the slot to get more NVMe connectors. If you want SAS (or SATA), that slot could still be the right choice, and then you could use an LSI HBA (or IT-mode flashed card) for more ports.

By the way, if you don't need more than x16 performance, you can always go for one of these: PCIe 4.0 Card Hosts 21 M.2 SSDs: Up To 168TB, 31 GB/s :p
 

DavidWJohnston

Active Member
Sep 30, 2020
I like the idea of using an IT-mode flashed RAID card. That way it can be used as a traditional RAID card or with ZFS, with just a firmware flash.

Maybe someone can recommend a specific model which can be successfully cross-flashed IR/IT. I've only done this with the Oracle F40/F80 cards, never with an HBA/RAID. Possibly the LSI 9361-8i can do this?

For the 2-bay rear cage, is this the same thing (look at the pics in post #38)? - https://forums.servethehome.com/ind...1ctr12l-2u-storage-server-review.16546/page-2

If so, it looks to have OCuLink connectors like your motherboard, so you'd need 2 of these cables: https://www.amazon.ca/Dilinker-OCulink-SFF-8611-OCuLink-Cable/dp/B07H5GB231

So far that takes care of your SAS/SATA drives, and your NVMe rear cage. For your server OS M.2 drive, if you want to go Optane, something like this may be a good choice: Intel Optane SSD P1600X SSDPEK1A118GA01 M.2 2280 118GB PCIe 3.0 x4 735858481557 | eBay - It says it's new in the original package.

If you go with Proxmox+ZFS, this would let you do software RAID with the front & rear drives. If you want to try ESXi later on, you could flash your HBA to IR firmware, do hardware RAID with your front drives, and use your NVMe drives as individual disks.

All of this talk makes me want to build a new server.
 

oneplane

Well-Known Member
Jul 23, 2021
I kinda agree (I also want to build a server now :D ): excellent flexibility, and also performance, with NVMe directly to the CPU and the other drives behind a good LSI card with IR and IT modes. I have used a bunch, and as long as there is some 'live' topic about a card on the forum, it's probably going to be fine. I've also flashed Dell and HP cards, but those are a bit of a pain due to the mismatching of IDs between stock cards and vendor-specific ones. The classic IBM- and Lenovo-branded LSI cards are good options as well.
 

compgeek89

Member
Mar 2, 2019
I feel like I'm trying to drink out of a fire hose! So much great info here, thank you guys. I will take a look at my rear backplane tomorrow, report back, and try to figure out a final direction.
 

oneplane

Well-Known Member
Jul 23, 2021
I can say with high confidence that if you use some structure where bulk data goes on a 'normal' pool but OS and application stuff goes on an 'extra fast' pool, the NVMe is worth it; you trade away some flexibility, but that is almost always worth it for the improvement in IOPS and bandwidth.
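For illustration, the "normal pool plus extra-fast pool" split could look something like the sketch below; Python is only a wrapper around the zpool/zfs commands here, and every device path is a hypothetical placeholder to adapt to the actual disks.

```python
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Bulk pool: the 12 front-bay spinners in raidz2 (placeholder device names).
spinners = [f"/dev/disk/by-id/placeholder-hdd-{i}" for i in range(12)]
run(["zpool", "create", "-o", "ashift=12", "bulk", "raidz2", *spinners])

# Fast pool: a mirrored pair of NVMe drives for OS images and VM disks.
run(["zpool", "create", "-o", "ashift=12", "fast", "mirror",
     "/dev/nvme0n1", "/dev/nvme1n1"])

# Bulk media on the spinners, VM disks on the NVMe mirror.
run(["zfs", "create", "-o", "recordsize=1M", "bulk/media"])
run(["zfs", "create", "fast/vmdisks"])
```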

If you don't want to buy more chassis-specific Supermicro gear, you can get one of those Supermicro 2x NVMe M.2 add-in cards instead. It requires bifurcation, but you have that, so that's fine. On the other hand, the NVMe ports you already have are mostly meant for NVMe storage... so that leans more towards the chassis-specific cage. Choices, choices, choices...
 

DavidWJohnston

Active Member
Sep 30, 2020
I like the option of replacing the chassis cage, since it's $86, which is in the same ballpark as the add-in cards anyway, and as oneplane pointed out it fits right in your box.

New 2.5" NVMe drives are outrageous, but used you could consider a pair of something like this: DELL 800GB 2.5" SSD U.2 NVMe PCIe Gen3 KWH83 0KWH83 MZ-WLL800A PM1725a - 5 DWPD | eBay

Enterprise U.2 drives come in different thicknesses; as long as the cage bays are deep enough, you're good. For many workloads NVMe has a very noticeable advantage. When I download Linux ISOs from Usenet, the speed really helps with the repair and unpack processes.
 

compgeek89

Member
Mar 2, 2019
I will probably get the cage. Is there any way (e.g. an adapter) to use a consumer M.2 drive in those bays? Or is an enterprise SFF NVMe drive required?