NVMe RAID on Supermicro 2024US-TRT (829U2TS-R1K62P3-T + H12DSU-iN)


Foksy
New Member, joined Sep 23, 2021, SPb, Russia
Hello everybody!
We are planning a server with two AMD EPYC Milan 7543 (32C/64T) CPUs.
Platform: A+ Server 2024US-TRT.
The 12 hot-swap 3.5" drive bays are planned as 8x SATA3 + 4x NVMe via the optional tray.
Next: a Broadcom 9361-8i for SATA RAID.
The harder part comes after that. We need RAID for the NVMe drives, not to be faster, but to be more reliable.
The solution we have in mind:
Broadcom/LSI 9560-16i SGL (05-50077-00): PCIe 4.0 x8, LP, SAS/SATA/NVMe, RAID 0/1/5/6/10/50/60, 16 ports (2x internal SFF-8654), 8 GB cache, SAS3916 ROC, RTL
plus
2x 05-60007-00 Broadcom cable, Slimline SAS x8 (SFF-8654) to Slimline SAS x8 (SFF-8654) + SFF-9402, 1 m

Has anyone dealt with a similar setup?
How do you organize NVMe RAID on EPYC in the absence of Intel VROC?

The solution in the platform's optional parts list:
AOC-S3916L-H16iR-32DD: 16 internal 12 Gb/s ports, x8 Gen4 ROC, LP, 32 HDDs with expander, RoHS
1x CBL-SAST-1264F-100: Slimline x8 (STR) to 2x Slimline x4 (STR), FFC, 64/64 cm
1x CBL-SAST-1265F-100: Slimline x8 (STR) to Slimline x4 (STR), 64 cm, 100 ohm, RoHS
does not provide an NVMe connection*

* I must correct myself: connection, yes; RAID, no.

Sorry for my English.
I'm new to the forum, but not to servers!
 

jpmomo
Active Member, joined Aug 12, 2018
PM me for some tips on NVMe RAID with Gen4 SSDs. I had a Dell server with an H755N PERC card (the same controller as the Broadcom card you listed) and 2x AMD 7763 CPUs, and did some analysis of that configuration. It should work with your proposed config, with a couple of caveats.
jp
 

Foksy
New Member, joined Sep 23, 2021, SPb, Russia
jpmomo said: "PM me for some tips on NVMe RAID with Gen4 SSDs. [...] It should work with your proposed config with a couple of caveats."
Thank you very much!
Can you tell me more about the caveats, if possible?
 

jpmomo
Active Member, joined Aug 12, 2018
The specific issue I ran into (it may not be an issue for you if your main concern is fault tolerance) was the number of PCIe lanes allocated to the controller card. Specifically, only 8 PCIe Gen4 lanes came into the card. The card was connected to the backplane via two proprietary cables, and the backplane held 8x NVMe PCIe Gen4 U.2 SSDs. With all 8 drives populated, each would get only 1 PCIe Gen4 lane, but these drives are x4: each needs 4 Gen4 lanes for its full bandwidth.

The pros of these hardware NVMe RAID cards are the 8 GB of cache and the choice of hardware RAID levels. When I limited the array to just 2 SSDs in RAID 0, it worked as expected and performed very well. Even when intentionally bypassing the card's 8 GB cache, I got around 14 GB/s of throughput; each drive was capable of around 7 GB/s.

I am not 100% sure the Broadcom card has the same limitation, as I have not owned that specific card. I looked into it and tried to confirm whether the x8 lane bottleneck exists, but could not get a response from anyone at Broadcom. I even tried to contact a company that published a whitepaper testing this card, but never heard back from them either.
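
To put the lane arithmetic in concrete terms, here is a rough sketch in Python (assuming roughly 2 GB/s of usable bandwidth per Gen4 lane and the ~7 GB/s per-drive figure above; exact numbers vary with encoding and protocol overhead):

Code:
# Back-of-the-envelope PCIe lane budget for an x8 Gen4 RAID controller
# feeding x4 NVMe U.2 drives. All figures are approximations.
GEN4_GBPS_PER_LANE = 2.0   # usable GB/s per PCIe Gen4 lane (approx.)
HOST_LANES = 8             # lanes from the host into the controller
DRIVE_GBPS = 7.0           # sequential throughput of one x4 Gen4 SSD (approx.)

for drives in (2, 4, 8):
    uplink = HOST_LANES * GEN4_GBPS_PER_LANE  # controller uplink ceiling, GB/s
    demand = drives * DRIVE_GBPS              # aggregate drive capability, GB/s
    limit = "controller uplink" if demand > uplink else "drives"
    print(f"{drives} drives: uplink {uplink:.0f} GB/s, "
          f"drives {demand:.0f} GB/s, bottleneck: {limit}")

With 2 drives the drives themselves are the limit (~14 GB/s, matching the figure observed above); with 4 or more, the x8 uplink caps the whole array at roughly 16 GB/s.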
 

RTM
Well-Known Member, joined Jan 26, 2014
I suppose someone has to ask this: Have you considered using software RAID (if available in the OS)?
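
For reference, a minimal sketch of what that could look like on Linux with mdadm, wrapped in Python (the device names and RAID level are hypothetical placeholders, not a tested recipe; mdadm --create is destructive, so verify devices with lsblk first):

Code:
#!/usr/bin/env python3
"""Minimal sketch: software RAID10 across four NVMe drives via mdadm.
Device names are hypothetical; adjust for the actual system."""
import subprocess

nvme_devices = ["/dev/nvme0n1", "/dev/nvme1n1",
                "/dev/nvme2n1", "/dev/nvme3n1"]  # hypothetical names

# With md RAID each drive keeps its full x4 Gen4 link to the CPU,
# so no x8 controller uplink sits in the data path.
subprocess.run(
    ["mdadm", "--create", "/dev/md0",
     "--level=10",
     f"--raid-devices={len(nvme_devices)}",
     *nvme_devices],
    check=True,
)

On EPYC the U.2 bays typically attach directly to CPU PCIe lanes, so md RAID avoids the controller-uplink bottleneck discussed above; the trade-offs are the loss of a battery-backed write cache and some CPU overhead for parity levels like RAID 5/6.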