Help me pick a RAID card


kemic

New Member
Aug 15, 2015
26
4
3
A Supermicro system I have has this backplane:

[backplane image]

I’m looking to set up an NVMe hardware RAID to use as virtual machine storage in XCP-ng (4 drives). The other ports on this backplane I’m hoping to use for some traditional SAS/SATA drives on a separate HBA.

Can anyone recommend some card/cable options for this? You could also convince me that an NVMe ZFS pool would offer comparable performance. I haven’t researched this much yet and would still need hardware recommendations :). The board in this system has PCIe 4.0 (H12SSL-i).

Thanks!
 

i386

Well-Known Member
Mar 18, 2016
4,245
1,546
113
34
Germany
How much storage do you need?
What performance (4k IOPS or throughput in MB/s) do you need?

The PCIe 4.0 hardware RAID controllers are pretty expensive (a 16-port SmartRAID 3200 controller is ~1900€), and it can make sense to skip that and instead buy a single 15TB or even 30TB U.2 SSD (2100€ for a Solidigm D5-P5316)...
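Putting rough numbers on that trade-off, using only the prices quoted above (a quick sketch, not a quote for current street prices):

```python
# Rough EUR/TB sanity check using the prices mentioned above.
controller_eur = 1900      # ~16-port PCIe 4.0 SmartRAID 3200 controller, no drives
big_ssd_eur    = 2100      # Solidigm D5-P5316, 30.72 TB
big_ssd_tb     = 30.72

print(f"30TB U.2 drive: ~{big_ssd_eur / big_ssd_tb:.0f} EUR/TB")
print(f"The controller alone costs ~{controller_eur / big_ssd_eur:.0%} "
      f"of that 30TB drive, before adding any drives behind it")
```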
 

kemic

New Member
Aug 15, 2015
26
4
3
Oh wow. Yeah, that’s a bit out of budget. I was thinking of something like the 9460-16i, unless I’m mistaking its capabilities. I was interested in running just a small RAID 10 for VMs, maybe 4x 2TB U.2 drives, or even a RAID 1 mirror.

Since posting I’ve also been looking at the Supermicro cards, the AOC-SLG3-4E4T for example, which is probably more ideal budget-wise as long as my available PCIe slots support bifurcation. With a non-RAID card like this, I suppose I could also do ZFS … if the performance hit isn’t too large.
 

mattventura

Active Member
Nov 9, 2022
447
217
43
I would worry less about the performance loss and look at how much performance you want/need. ZFS has a performance hit for NVMe, because a lot of the assumptions that made sense for HDDs and even for SATA/SAS SSDs are actually counterproductive with the raw speed of NVMe.

However, it may still be more than enough. For sequential workloads, your network will most likely be the bottleneck anyway. For things like VM boot volumes, the difference will be minimal.
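For what it's worth, if the software route wins out, the usual ZFS layout for VM storage on four drives is a striped mirror (the RAID 10 analogue). A minimal sketch of the pool creation, assuming hypothetical device names and the commonly suggested ashift/autotrim options rather than anything tuned for this exact box:

```python
# Sketch: assemble (and print, rather than run) the zpool command for a
# 4-drive striped mirror. Device names are placeholders -- use stable
# /dev/disk/by-id/ paths on a real system.
drives = ["/dev/disk/by-id/nvme-DRIVE0", "/dev/disk/by-id/nvme-DRIVE1",
          "/dev/disk/by-id/nvme-DRIVE2", "/dev/disk/by-id/nvme-DRIVE3"]

cmd = ["zpool", "create",
       "-o", "ashift=12",    # assume 4K physical sectors
       "-o", "autotrim=on",  # keep the SSDs trimmed
       "vmpool"]
# Pair the drives into mirror vdevs; ZFS stripes writes across the vdevs.
for a, b in zip(drives[0::2], drives[1::2]):
    cmd += ["mirror", a, b]

print(" ".join(cmd))
```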
 

kemic

New Member
Aug 15, 2015
26
4
3
If you’re on Intel, what about VROC?
H12SSL-i and EPYC 7713.

I’ve been looking into just using the onboard NVMe; it looks like I can do 2 onboard and 2 on the SlimSAS breakout. But the 2 onboard will be buried by GPUs, so they might get hot. And with the original goal of using that hybrid backplane, I’m now looking at M.2-to-OCuLink adapters in combination with the SlimSAS-to-OCuLink breakout cable… This should give me the 4 OCuLink cables I need for the backplane. It also saves one of the two remaining PCIe slots I have, bonus. If it all works out, that is.

I’m for sure still tumbling down this rabbit hole, def open to any and all suggestions.
 

nabsltd

Well-Known Member
Jan 26, 2022
423
287
63
Oh wow. Yeah, that’s a bit out of budget. I was thinking of something like the 9460-16i, unless I’m mistaking its capabilities. I was interested in running just a small RAID 10 for VMs, maybe 4x 2TB U.2 drives, or even a RAID 1 mirror.
You're better off buying some PCIe U.2 cards (dual or single) and using software RAID of some sort. The "tri-mode" cards are picky about cables, and they can't handle the full bandwidth of 4x NVMe drives, since they only use PCIe x8. The only good use case for the 9460 line is connecting to a backplane with a lot of drives for more total storage.
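To put rough numbers on that x8 point (theoretical per-lane figures after encoding overhead; check the card's actual host link generation and width against its datasheet):

```python
# Approximate usable PCIe bandwidth, GB/s per lane, after encoding overhead.
GBPS_PER_LANE = {"gen3": 0.985, "gen4": 1.969}

def link_gbps(gen: str, lanes: int) -> float:
    return GBPS_PER_LANE[gen] * lanes

per_drive = link_gbps("gen4", 4)        # each U.2 NVMe drive gets an x4 link
print(f"4x Gen4 x4 drives: ~{4 * per_drive:.0f} GB/s on the drive side")
print(f"Gen3 x8 host link: ~{link_gbps('gen3', 8):.0f} GB/s")
print(f"Gen4 x8 host link: ~{link_gbps('gen4', 8):.0f} GB/s")
```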
 

numanumani

New Member
Mar 26, 2022
4
1
3
Oh wow. Yeah, that’s a bit out of budget. I was thinking of something like the 9460-16i, unless I’m mistaking its capabilities. I was interested in running just a small RAID 10 for VMs, maybe 4x 2TB U.2 drives, or even a RAID 1 mirror.

Since posting I’ve also been looking at the Supermicro cards, the AOC-SLG3-4E4T for example, which is probably more ideal budget-wise as long as my available PCIe slots support bifurcation. With a non-RAID card like this, I suppose I could also do ZFS … if the performance hit isn’t too large.
The 9460-16i is a RAID card that cannot be flashed to act as an HBA. Running ZFS on a hardware RAID card is highly discouraged.
 

Tech Junky

Active Member
Oct 26, 2023
351
120
43
@kemic

I see the wheels turning!!!

Backplanes, adapters, and bears, oh my!

OK... so, OCuLink will give you the highest throughput, at bus speed per drive, up to ~6.5GB/s per drive on Gen4.

I don't know about the backplane or this/that, but you can get a dual-OCuLink x16 card which would get you up to 4 drives at full speed using a single x16 slot and two dual-output cables. The card is cheap at ~$70 and the dual-ended cables are ~$50/ea (~$170 total + drives).

Not messing with all of the other stuff will get you top speeds without the issues that come with mixing and matching. With a RAID 10 setup you would be hitting ~13GB/s.
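For anyone following the math, here's a quick sketch of where the ~13GB/s comes from (sequential best case, ignoring controller/filesystem overhead):

```python
# RAID 10 throughput, back of the envelope.
per_drive_gbs = 6.5          # ~Gen4 x4 NVMe, per the figure above
drives = 4
mirror_pairs = drives // 2   # RAID 10 = stripe across mirror pairs

write_gbs = mirror_pairs * per_drive_gbs   # each pair absorbs one copy of the stripe
read_gbs  = drives * per_drive_gbs         # reads can be serviced by every drive
print(f"4-drive RAID 10: ~{write_gbs:.0f} GB/s write, up to ~{read_gbs:.0f} GB/s read")
```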
 

kemic

New Member
Aug 15, 2015
26
4
3
I was looking at those for sure! Unfortunately this system has 3x MI25 GPUs in it, so there are only two free PCIe slots: one dedicated to 40GbE Ethernet, and the other I was planning to use for an HBA for a TrueNAS VM (PCIe passthrough). I suppose I could have the NAS be SATA instead of SAS; I haven’t bought the disks yet… Then I could use that slot for an NVMe card and pass the onboard SATA controller to TrueNAS.

I’m still leaning towards the onboard NVMe options; we’ll see how it goes!
 

Tech Junky

Active Member
Oct 26, 2023
351
120
43
two free PCIe slots: one dedicated to 40GbE Ethernet, and the other I was planning to use for an HBA
Well, not really, if you have a NIC in one. That leaves you with 1 slot open. You could still do the OCuLink card.

NAS be SATA instead of SAS
Depends on needs and budget. If you need the GB/s then you're looking at, minimum, SSDs in RAID 0 or quite a few of them in RAID 1/10.

M.2 options would be a waste of time if you need speed + capacity, as M.2 drives max out at 8TB while the U.2 drives can do 15.36TB for the same price.

If you went with the OCuLink adapters you wouldn't need to worry about the passthrough option, as the disks show up as native drives to the system. You can also get dumb M.2 > OCuLink adapters (what I'm using) for ~$20 + a ~$30 cable per drive. The x16 card is a better deal if you're shooting for density (4 drives). Not a huge savings, but a little tidier in the end with the cables.
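Using the ballpark prices quoted in this thread (drives excluded), the two routes for four U.2 drives work out roughly like this:

```python
# Rough cost comparison for hooking up 4 U.2 drives, prices from this thread.
drives = 4

x16_card_route   = 70 + 2 * 50        # dual-OCuLink x16 card + two dual-ended cables
m2_adapter_route = drives * (20 + 30) # dumb M.2 > OCuLink adapter + cable, per drive

print(f"x16 card route:    ~${x16_card_route}")
print(f"M.2 adapter route: ~${m2_adapter_route}")
```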
 

tinfoil3d

QSFP28
May 11, 2020
880
404
63
Japan
Any hardware RAID is discouraged in 2024; Level1Techs recently covered it again on his channel.
I broke my teeth on it a decade ago, and since then hardware RAID has been a no-no for me.
We have ZFS these days, and as for NVMe, you're definitely better off using the drives as they are; any hardware RAID would very likely be a bottleneck now.
 

kemic

New Member
Aug 15, 2015
26
4
3
Thanks for all the input, everyone! I ended up with M.2 > OCuLink > backplane > U.2 (two drives) and SlimSAS > OCuLink breakout cable > backplane > U.2 (two drives). There should be no bottlenecks and it’s all PCIe 4.0, though I don’t have a PCIe 4.0 drive to test with yet…

Everything seems to be working as expected with my PCIe 3.0 test drive. Now to figure out which U.2 drives to buy…
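One way to confirm what each drive actually negotiated once the PCIe 4.0 drives arrive is to read the PCIe link attributes from sysfs on the host (standard Linux sysfs paths; nothing XCP-ng specific assumed). A Gen4 x4 link should report 16.0 GT/s at a width of 4. A minimal sketch:

```python
# Print the negotiated PCIe link speed/width for each NVMe controller.
import glob, pathlib

for dev in sorted(glob.glob("/sys/class/nvme/nvme*")):
    pci = pathlib.Path(dev, "device")    # symlink to the drive's PCI function
    try:
        speed = (pci / "current_link_speed").read_text().strip()
        width = (pci / "current_link_width").read_text().strip()
    except OSError:
        continue
    print(f"{pathlib.Path(dev).name}: {speed}, x{width}")
```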
 

Tech Junky

Active Member
Oct 26, 2023
351
120
43
This might be a bottleneck. SAS doesn't meet the same speeds needed for Gen4 drives from my understanding. Would be interesting to see though.

When I was going through the standards for all of the options for connecting U.2 drives, SAS wasn't appealing at 6/12/24Gbps, as I was initially thinking of 4 drives eventually and those would hit ~28GB/s. OCuLink, though, hits those speeds per drive over x4, which was the driving factor at the time.

which U.2 drives to buy
Personally I would stay away from the Micron drives. Look at the Kioxia instead, as it runs cooler. KCD8XRUG15T3 is the model I'm using; it will give you a table of the different sizes / performance info.

Keep an eye on the details, as performance specs vary widely between different brands and even different models.
 

kemic

New Member
Aug 15, 2015
26
4
3
This might be a bottleneck. SAS doesn't meet the same speeds needed for Gen4 drives from my understanding. Would be interesting to see though.

Check number 6 on this link. I think I’m OK assuming this is accurate. :) x8 PCIe 4.0 lanes to the CPU. The breakout cable has two OCuLink ports on it, so I assumed that was x4 and x4.

Edit: I am seeing some info about a 24Gbps limit “per lane”. It probably won’t be any concern for my workloads…
 

Tech Junky

Active Member
Oct 26, 2023
351
120
43
Well, 24Gbps is 3GB/s, which means you need at least a Gen3 x4 slot per drive.

I still don't think you'd be getting max performance off SAS. I could be wrong, but from my interpretation of the standards, that's why I went with OCuLink instead. Adapters/cables were cheap enough in comparison to just DIRTFT (do it right the first time).

However, logic says that if it's a dumb adapter doing just electrical passthrough of the data, it should be able to hit whatever the speed limit is on either end.
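For the record, the raw numbers being compared here, as a rough sketch that ignores encoding and protocol overhead:

```python
# 24G SAS lane vs. a PCIe x4 link, very roughly.
sas_24g_lane_gbs = 24 / 8                      # ~3 GB/s per 24G SAS lane
pcie_lane_gbs = {"gen3": 0.985, "gen4": 1.969} # approx GB/s per PCIe lane

print(f"24G SAS lane: ~{sas_24g_lane_gbs:.0f} GB/s")
print(f"PCIe Gen3 x4: ~{pcie_lane_gbs['gen3'] * 4:.1f} GB/s")
print(f"PCIe Gen4 x4: ~{pcie_lane_gbs['gen4'] * 4:.1f} GB/s")
```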
 

mattventura

Active Member
Nov 9, 2022
447
217
43
It's still a PCIe Gen4 connection; they're just using the physical SlimSAS connector. The SlimSAS port on the H12SSL-i can't even do SAS at all; it can only be used as 8 lanes of PCIe or SATA. If you try to plug a SAS drive into it, it won't work (just like if you tried to plug NVMe drives into an actual SAS HBA, assuming it isn't tri-mode).

I'm currently using the SlimSAS ports on my ASRock TRX50 to run gen4 NVMe drives.
 

i386

Well-Known Member
Mar 18, 2016
4,245
1,546
113
34
Germany
This might be a bottleneck. SAS doesn't meet the same speeds needed for Gen4 drives from my understanding. Would be interesting to see though.
SlimSAS is just a connector; it doesn't know/care whether it's carrying PCIe, SAS or other signals. Same for the cables :D
 

Tech Junky

Active Member
Oct 26, 2023
351
120
43
SlimSAS is just a connector; it doesn't know/care whether it's carrying PCIe, SAS or other signals. Same for the cables :D
Good info to know and file away for future use. I figured that, with the connector's reduced/intended bandwidth, there might be some signal integrity issues to deal with when using higher-speed drives.