Broadcom 9500-8i, NVME U.2/U.3, Tri-Mode


john389

Member
May 21, 2022
I just wanted to share my experience with this HBA when it comes to U.2/U.3 drives, because it was quite frustrating and cost me a lot of money:

  1. U.2/U.3 drives normally register as /dev/nvme[0-9]+n[0-9]+, where n[0-9]+ is the namespace number. The Broadcom Tri-Mode controller, however, only supports the drives as SCSI devices, meaning they will show up as /dev/sd[a-z]. As a consequence, you have NO access to any NVMe functions, like creating namespaces, changing the namespace LBA format size, etc.

  2. The Broadcom cable 05-60006-00 (x8 8654 to 8xU.3 Direct, 1 meter) suggests that you can attach 8 U.3 drives (not U.2, as U.3 drives can be plugged into U.2 ports, but not vice versa!) to the single SlimSAS SFF-8654 8i output of the 9500-8i. However, the cable does not multiplex the card's PCIe 4.0 x8 host link through a switch chip; it simply splits the lanes, so each of the 8 U.3 drives gets only a single x1 PCIe lane, which is NOT SUFFICIENT for some U.3 drives, like my Micron 7400 Pro. The drive is not recognized, it just doesn't show up.

    There are a few additional requirements for it to function. From the manual:

    Enables direct connect from the adapter to a U.3 NVMe or SAS/SATA drive. This cable does not send
    a PCIe REFCLK or PERST# to each drive connector; that is, the U.3 drive must support SRIS and not
    require PERST#. Use for proof-of-concept type applications.
    When it comes to connecting SATA/SAS drives to this cable, everything works as expected.

  3. Only the Broadcom cable 05-60005-00 (x8 8654 to 2xU.2 Direct, 1 meter) allowed me to connect two Micron 7400 Pro drives to the adapter. This effectively limits the 9500-8i to two U.2/U.3 drives, no more. Considering it uses x8 lanes to the motherboard, you'd be better off with a PCIe x8 to 2x U.2/U.3 bifurcation card. It costs far less than the Broadcom HBA and gives you full access to the drives' NVMe controls.

I have no idea how a backplane would fit into this and whether or not it would have similar problems.
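To make point 1 concrete: with a native NVMe device node you can use nvme-cli for namespace and LBA-format management; behind the tri-mode HBA there is only /dev/sdX, so none of it applies. A small sketch of the naming scheme and the commands involved (device names and the namespace size are generic examples, not from my setup):

```shell
# The NVMe naming scheme: /dev/nvme<ctrl>n<ns> is a namespace block device,
# /dev/nvme<ctrl> is the controller device that namespace-management commands
# (nvme create-ns, delete-ns) are issued against. Pure string handling below,
# runnable anywhere; no drive needed.
ctrl_of() { printf '%s\n' "${1%n*}"; }     # /dev/nvme0n1 -> /dev/nvme0
ns_of()   { printf '%s\n' "${1##*n}"; }    # /dev/nvme0n1 -> 1

ctrl_of /dev/nvme0n1    # -> /dev/nvme0
ns_of   /dev/nvme0n1    # -> 1

# With a native device you could then run (destructive, examples only):
#   nvme id-ns /dev/nvme0n1          # inspect namespace, incl. supported LBA formats
#   nvme format /dev/nvme0n1 --lbaf=1
#   nvme create-ns /dev/nvme0 --nsze=97656250 --ncap=97656250 --flbas=0
# Behind the 9500-8i the drive only exists as /dev/sdX, so none of these apply.
```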
 

Ortang

New Member
May 23, 2023
Thank you for that information.
I have exactly the same setup (BC 9500-8i HBA, BC cable 05-60006-00, 6 x Micron 7400 Pro) and could not get it working.
 

john389

Member
May 21, 2022
That is why I posted it :)

You are welcome.

If you have enough free PCIe slots I'd recommend one x16 to 4x U.2 adapter and one x8 to 2x U.2 adapter. Or something like it.

Just ensure that you have enough cooling.

Alternatively look into https://forums.servethehome.com/index.php?threads/supermicro-aoc-slg3-8e2p-octaport-nvme-hba.25551/

I bought a used one three months ago and haven't had the time to even install and test it ... In theory it should work for our use case, but you'll have to test/risk/research it yourself. It's also not cheap, but then neither is the Broadcom controller.
 

ano

Active Member
Nov 7, 2022
we have it working with CD6, CM6 and most u.3 drives really, can't remember them all.

however... PERFORMANCE IS HORRIBLE, so it's pointless really.
 

ano

Active Member
Nov 7, 2022
anything with retimer/direct and u.2 = yay

anything needing u.3 seems to be quite sad as of today
 

john389

Member
May 21, 2022
Thanks for the info. As the Supermicro controller is a switch chip, I'll have to see if it is "yay", but I hope so.
 

john389

Member
May 21, 2022
@ano: One last question, when you say you've got U.3 drives running with the HBA 9500, you did mean with the cable 05-60006-00 and not with a backplane, correct?
 

ano

Active Member
Nov 7, 2022
via the HPE Gen10 Plus U.3 backplane (8 drives), as that was the easiest thing to test with; had the U.2 version in the next slot, so I could move drives between them and performance test

switch chip nvme should be good.

the issue seems to arise as it translates SCSI commands in to NVMe out: delay on delay, and in general weird results.

we tested different u.3 controllers as well, like mr416-p and -a
 

john389

Member
May 21, 2022
Ah okay, that makes more sense, as I already thought that a backplane might help with the issues we had encountered with the cable.

What really troubled me was the following info in the cable specifications (bold formatting is mine):

Use for proof-of-concept type applications.
So, as that isn't mentioned for the other cables, what exactly is this, an experimental cable that is sold for $100?
 

ano

Active Member
Nov 7, 2022
I recently paid $250... for NVMe gen5 to U.2 direct cables... so yeah, sometimes things are a ripoff (they do work though). Gave me 6x gen5 U.2 drive connections in the lab on an H13SSL-i.
 

john389

Member
May 21, 2022
I'm well aware of what high-performance hardware costs, and I don't have a problem with a $100 cable, at least not if it works. But if it is practically designed for some edge cases and most drives don't work with it, then that should be part of the normal product description. At least that is my opinion.

And if you have gen5 drives and an epyc 9004 processor, then you can also pay $250 for a few cables to run it together ;)
 

mattventura

Active Member
Nov 9, 2022
Yeah, the Tri-mode HBAs are niche products.

U.2 tri-mode is incredibly pointless. U.2 uses disjoint pins for PCIe and SATA/SAS, so any given cable to the HBA would only ever carry one protocol or the other. That is, a 9400-16i really offers no obvious advantage over a 9300-8i and a x4x4 redriver/retimer/switch.

U.3 tri-mode actually has a purpose which is that you can have a direct-attach backplane with hybrid slots, and not have to have unused HBA or cabling capacity. You pay quite a bit for that privilege, so whether or not it's worth it depends on your use case.

However, these do have a few niche uses. If you only have one slot, but need to use both SAS/SATA and NVMe drives, then a 16i would fit that bill. Also, they're not bad value as just a plain SAS controller - Broadcom/LSI doesn't seem interested in making pure SAS HBAs anymore, they're going all-in on tri-mode. But one noticeable advantage is that these will give you near-perfect NVMe hotplug on any system, no matter how poor its true PCIe hotplug support is. As OP mentioned, these expose NVMe drives as SAS drives, rather than attaching them into the system's real PCIe topology.
 

john389

Member
May 21, 2022
Because I mentioned it and had time to test it today:

The Supermicro AOC SLG3-8E2P switch with the CBL-SAST-0956 cable works with my Micron 7450 Max U.3.

It (the controller) shows up as: PMC-Sierra Inc. PM8533 PFX 48xG3 PCIe Fanout Switch.
 

Ortang

New Member
May 23, 2023
john389 said:
The Supermicro AOC SLG3-8E2P switch with the CBL-SAST-0956 cable works with my Micron 7450 Max U.3.
Thank you again! :)
Just ordered the controller plus the cables you mentioned.
 

john389

Member
May 21, 2022
I hope it works for you as well!

Just a few more notes for anyone else interested in the controller:

The U.3 SSD showed up as /dev/nvme0, including its namespace, etc. No extra driver was necessary. A simple read test with dd, which is not the best way to measure NVMe performance, showed roughly 3.2 GB/s. lspci showed that the SSD's link was downgraded from a possible 16 GT/s (roughly 2 GB/s per lane, PCIe 4.0) to 8 GT/s (roughly 1 GB/s per lane, PCIe 3.0). According to Supermicro it is a PCIe 3.0 card, which matches the PM8533 documentation, and I had it plugged into a PCIe 3.0 x16 slot. So, excluding overhead, you'd get a maximum of 4 GB/s per SSD, and with all 8 ports in use a maximum total of 16 GB/s due to the x16 slot.
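For reference, the per-lane numbers work out as follows; a quick sketch of the raw arithmetic (128b/130b line coding for PCIe 3.0/4.0; protocol overhead beyond the line coding is ignored):

```shell
# Effective PCIe bandwidth per direction, same formula for Gen3 and Gen4
# (both use 128b/130b line coding): GT/s * lanes * 128/130 / 8 bits-per-byte.
bw() { awk -v gt="$1" -v lanes="$2" 'BEGIN { printf "%.2f GB/s\n", gt * lanes * 128 / 130 / 8 }'; }

bw 8 4    # PCIe 3.0 x4: roughly what the switch delivers per drive
bw 16 4   # PCIe 4.0 x4: what a Gen4 U.3 drive could do natively
```

The Gen3 x4 result lands just under 4 GB/s per drive, consistent with the ~3.2 GB/s dd reading once filesystem and drive overhead are factored in.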

For my tasks, where the SSDs are always in some kind of RAID configuration, this is more than enough, especially because the SSD can't sustain those speeds for random I/O and I mainly need fast SSDs for VMs.

If possible please let me know if all 6 of your SSDs work with the card, as I don't have enough U.2 or U.3 drives laying around to test it. Thanks!
 

ericloewe

Active Member
Apr 24, 2017
It (the controller) shows up as: PMC-Sierra Inc. PM8533 PFX 48xG3 PCIe Fanout Switch.
It's a PCIe switch, not a controller per se. That's relevant because PCIe switches should "just work" out of the box on any OS that understands PCIe; the major exception might be hot-plugging, which probably varies from switch to switch.
 

Ortang

New Member
May 23, 2023
john389 said:
If possible please let me know if all 6 of your SSDs work with the card, as I don't have enough U.2 or U.3 drives laying around to test it.

Sorry for the late reply, but I had no time to test it until now.
All 6 SSDs work, just as expected!

Thanks again for your help!