NVMe in a RAID config


Pestilence

New Member
Mar 25, 2024
Picked up a couple of Supermicro AS-1113S-WN10RT 1U servers. They currently have 5 of the 10 bays populated with U.2 NVMe drives. I need to put these drives into a RAID, but the board doesn't support it, so I picked up a Broadcom 9460-16i and 4x "Cable OcuLink PCIe PCI-Express SFF-8611 4i to Mini HD SFF-8643 SSD Data Active Cable 50cm" so that I could at least get 4 drives into a RAID configuration (would love to do more, but from what I can tell I'm limited to 4).

With this setup, however, the RAID controller can't even see the backplane, much less the drives. I keep reading that I need U.2 enabler cables, but I haven't been able to find any that work with this backplane. I don't care about the speed loss from using a RAID controller or AOC, and I'd even be open to swapping the NVMe drives for SATA3, except only the last 4 slots on that backplane support it. I'm open to any suggestions on whether it's possible to set up a RAID on these machines.

Motherboard: H11SSW-NT
Backplane: BPN-SAS3-116A-N10
NVMe drives: Micron 9300 Pro 3.84TB
 

jdnz

Member
Apr 29, 2021
What OS are you running? We've got 4x U.2 drives in RAID 0 on one CPU/GPU server using the Linux kernel software RAID (md) driver - we're getting the performance we were after (this is local high-speed scratch storage for image analysis) and the CPU impact is minimal.
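For reference, a minimal sketch of that kind of md RAID 0 setup; the device names, mount point, and filesystem choice below are examples, not taken from this thread (check lsblk for your actual devices):

```bash
# Create a 4-device RAID 0 array from the U.2 NVMe drives
# (device names are examples - check lsblk on your system)
mdadm --create /dev/md0 --level=0 --raid-devices=4 \
    /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1

# Put a filesystem on it and mount it as scratch space
mkfs.xfs /dev/md0
mkdir -p /scratch
mount /dev/md0 /scratch

# Record the array so it assembles on boot
# (the config file is /etc/mdadm/mdadm.conf on Debian/Ubuntu)
mdadm --detail --scan >> /etc/mdadm.conf
```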
 

Tech Junky

Active Member
Oct 26, 2023
The cable is fine. There's nothing special needed there.

The issue is either the backplane or the card. Looking at the card specs, it supports NVMe, so...

It could be the cables, but if you bought several of them, at least one should work.

Bypass the backplane and go direct to the drive and see if it comes online.
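If Broadcom's storcli utility is installed for the 9460-16i, it can also show what the controller itself detects, which helps narrow down backplane vs. cable vs. card. The controller index 0 below is an assumption; the first command lists whatever controllers are actually present:

```bash
# List every controller storcli can see
storcli64 show

# Controller 0 in detail, including enclosures and any attached drives
storcli64 /c0 show all

# Physical drives behind controller 0, if any are detected
storcli64 /c0/eall/sall show
```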
 

Pestilence

New Member
Mar 25, 2024
jdnz, I was going to load ESXi 8 onto it.

Tech, I thought the cables would work, but from what I'm seeing, maybe I should be using cable 05-50062-00 for the 94xx series. All the ones I saw were pretty pricey, though, so I wanted to check on them more before ordering. The drives and everything ran great off just the motherboard; I only lost them when I switched to the RAID controller. But yes, with 4 cables it should be seeing something.
 

jdnz

Member
Apr 29, 2021
We're running ESXi 7 on our system - the NVMe drives are PCIe passed through to the VM running RHEL 8 (this is on a Supermicro 740GP-TNRT, which has 4 U.2 bays direct off the motherboard).
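For what it's worth, once the U.2 devices are passed through, you'd expect to see them inside the Linux guest with something like the following (nvme-cli assumed to be installed; output will vary with your hardware):

```bash
# NVMe controllers visible on the guest's (virtual) PCIe bus
lspci -nn | grep -i "non-volatile"

# Block devices as the kernel sees them
lsblk -d -o NAME,SIZE,MODEL

# Controller/namespace details (from the nvme-cli package)
nvme list
```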
 

Tech Junky

Active Member
Oct 26, 2023
I went OCuLink direct from the board's M.2 to the drive and haven't had any issues. Using a card with these drives is kind of moot since they're already hitting top speeds. The only thing that comes to mind is that if the card gets knocked down to x8, it will most likely knock the drives offline.
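On the lane question: if you can get to a Linux shell on that hardware, lspci will show what link width the slot actually negotiated. The PCI address below is a placeholder; find the card's real address with `lspci | grep -i raid` or similar:

```bash
# Show the negotiated PCIe link speed/width for the card
# (replace 01:00.0 with the card's actual address; may need root for full output)
lspci -vv -s 01:00.0 | grep -E "LnkCap:|LnkSta:"
```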

There are basic passive cards with 2 OCuLink ports that split out to 2 drives per port for around $75; that would make it a bit easier to figure out where the issue lies, if there is one.

When I was planning on using multiple drives with such a card, the dual cables were ~$45 each, whereas the single cable + M.2 came out to ~$45 for the pair. It turns out I would have had to go this route anyway, as I ended up needing the slot I had planned on using for a GPU.
 

Pestilence

New Member
Mar 25, 2024
Tech, I do agree that it brings the speed down and defeats the purpose of using NVMe drives. Personally, I would rather not use NVMe drives, but this is the hardware I have on hand at the moment, and RAID is a requirement for my setup, so I will take the hit on speed in exchange for redundancy.

jdnz, I haven't looked into software RAID for this build, as I have always gone the hardware route, but that path is currently failing me. Do you know if md will still work on ESXi 8? (I would assume/hope it does.) Also, are the 4 drives you are using just what you needed, or is that a hard limit of the software? If it can do all 10 drives in a RAID 5, I would switch gears to that.
 

jdnz

Member
Apr 29, 2021
The version of ESXi doesn't matter, as it's all handled in the VM itself - ESXi just passes the raw PCIe device through to the VM.

The limit was the chassis - it only HAS 4 U.2 bays. The md driver is extremely flexible and feature-rich (QNAP and Synology NAS units all use md under the hood, as their boxes don't have RAID-on-chip (ROC) hardware).

The only thing you may miss coming from hardware RAID cards is a UI like MSM/LSA for management - md is all command-line (I mainly manage hardware RAID using the CLI tools anyway, so it didn't really bother me).
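As a concrete example of that command line: 4 drives is not an md limit, and a 10-drive RAID 5 is just a different --level and --raid-devices. The device names below are placeholders:

```bash
# 10-device RAID 5 (one drive's worth of capacity goes to parity);
# --level=6 would tolerate two simultaneous failures instead
mdadm --create /dev/md0 --level=5 --raid-devices=10 \
    /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1 \
    /dev/nvme5n1 /dev/nvme6n1 /dev/nvme7n1 /dev/nvme8n1 /dev/nvme9n1

# Watch the initial sync and check array health
cat /proc/mdstat
mdadm --detail /dev/md0
```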
 

Pestilence

New Member
Mar 25, 2024
Going to try setting up a software RAID then and see how that works for me.

Thank you both for the help!!!
 

Tech Junky

Active Member
Oct 26, 2023
@Pestilence

I used mdadm for several years with no performance hit in a RAID 10 setup, direct SATA to the drives. It's really flexible when it comes to making changes when needed. You can build a set with missing drives, and when you get the new drives you add them to the pool and let them sync. If you need to change disk sizes, you can pull a couple of drives from the pool to make a set to copy the data onto, in order to swap in bigger drives.

RAID 10 only grows capacity in pairs, but I added a 5th drive to my pool, and while I didn't gain capacity, it was there as a hot spare in case of a failure.
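A rough sketch of the kind of flexibility being described here, assuming a 4-drive RAID 10 and example device names:

```bash
# Create a degraded 4-device RAID 10 with only two drives on hand;
# this ordering keeps one present member in each mirror pair, so it still runs
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/sda missing /dev/sdb missing

# When the other drives arrive, add them and let md rebuild onto them
mdadm /dev/md0 --add /dev/sdc /dev/sdd

# A drive added beyond the member count just sits there as a hot spare
mdadm /dev/md0 --add /dev/sde
```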

If you need several U.2 drives for price/capacity reasons, there are cards you can get for additional ports, and I think the same goes for other ways of muxing in more drives - it's just not very cheap to do. As for the VM, it's just a guest, and you can pass anything from the host to it like you normally would.