Pin config issues on U.2 ports to NVMe M.2 IcyDock cage


Medicineman25

New Member
Nov 29, 2021
I've recently purchased a Gigabyte MZ72-HB0 mobo, and there are two U.2 NVMe ports (#38 & #39 on the board diagram in the user manual) attached to CPU0.

Rather naively, I assumed this meant I could connect a SlimSAS-to-MiniSAS cable and run the Icy Dock MB720M2K-B 4x M.2 NVMe cage out of these ports (2 slots out of 4, of course, with the remaining slots connecting to the remaining mobo SlimSAS ports).

As it turns out, U.2 and M.2 are not just different physical connectors but also different pin configurations! Which of course they are (duuuhhhh!), and I had no idea they differed so drastically.

So, IcyDock have advised using some kind of AOC with MiniSAS (SFF-8643) connectors. Cool, seems fair enough.

My confusion has arisen from the number of lanes, the throughput, compatibility, and the vast array of HBAs available.

So far I've worked out that I need the following (see the quick arithmetic sketch after the list):

- 16 PCIe lanes (4 lanes per M.2 drive)
- 16 ports (ports and lanes are different things, but they happen to correspond here)
- PCIe 4.0
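For concreteness, here's a quick back-of-the-envelope sketch of that math (my own numbers, not from any datasheet; it assumes each drive runs a full Gen4 x4 link):

```python
# Lane math for 4x M.2 NVMe drives, each on a PCIe 4.0 x4 link.
DRIVES = 4
LANES_PER_DRIVE = 4        # standard M.2 NVMe link width
GBPS_PER_GEN4_LANE = 16    # raw line rate per lane (16 GT/s)

total_lanes = DRIVES * LANES_PER_DRIVE
aggregate_gbps = total_lanes * GBPS_PER_GEN4_LANE

print(f"lanes needed: {total_lanes}")                     # 16
print(f"aggregate raw bandwidth: {aggregate_gbps} Gbps")  # 256 Gbps

# An x8 host slot caps the uplink at 8 * 16 = 128 Gbps, so an x8 HBA
# oversubscribes four Gen4 x4 drives 2:1 -- fine for many workloads,
# a bottleneck only when all four drives are saturated at once.
```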

The closest I can find is the following card from Broadcom:


However, it's PCIe x8 and has SFF-8654 connectors. Can I run two SFF-8654 ports, each splitting into two SFF-8643 cables, to the cage? Would I need two of a similar card, perhaps with 8 ports instead of 16, due to the PCIe lane bottleneck? (Seems a waste, as I'll essentially lose 8 lanes on one of the slots.) Will I be limited in terms of throughput, and will the pins even be compatible?

It's all very confusing. As it should be.

Finally, the only reason I want this cage is the obvious one: live hot-swap of drives in the event of failure. Do these HBA cards facilitate that kind of functionality?

EDIT:
Having spent some time thinking about this solution, I've realised it's rather silly.

Instead of trying to fit a square peg in a round hole, I will simply return the M.2 hotswap cage and use the M.2 drives as-is via an existing pair of PCIe cards.

Upon making further upgrades, I will simply purchase a series of U.2 drives with the appropriate hot-swap cage and chalk this up to a learning experience.
 

UhClem

just another Bozo on the bus
Jun 26, 2012
NH, USA
THE solution to your problem, now and far into the future, is the Broadcom P411W-32P [Product Brief] [User Guide]. (PCIe Gen4 x16, 8× NVMe, SFF-9402 compliant, full+ Surprise/HotPlug/Swap support)
Note that the mechanical/analog part of hot-swap is easy. It's the electrical/digital part that is hairy.
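One hedged way to see what the digital side of your platform actually exposes (a Linux-only sketch assuming the kernel's pciehp hot-plug driver; purely illustrative, not from this thread):

```python
# Enumerate the PCI slots the kernel knows about; a "power" attribute
# appears on a slot when a hot-plug driver (e.g. pciehp) controls it.
import os

SLOTS = "/sys/bus/pci/slots"
if not os.path.isdir(SLOTS):
    raise SystemExit("kernel exposes no PCI slot information here")

for slot in sorted(os.listdir(SLOTS)):
    path = os.path.join(SLOTS, slot)
    with open(os.path.join(path, "address")) as f:
        addr = f.read().strip()
    has_hotplug = os.path.exists(os.path.join(path, "power"))
    print(f"slot {slot} @ {addr}: hot-plug controls "
          f"{'present' if has_hotplug else 'absent'}")
```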

Speaking of the (near-)future, check out E1 and E3 EDSFF to Take Over from M.2 and 2.5 in SSDs
 
  • Like
Reactions: Richard Sanchez

Medicineman25

New Member
Nov 29, 2021
Hey @UhClem, oh cool, thanks for that... so I'm getting into this with slightly outdated information. I've run Linux for years and been a developer for nearly a decade; prior to that I worked in entertainment, so I know how to tinker haha. The server world is very new to me, though, so again, thanks for the tip :)

I assume that by you offering up this card as an idea, this means that the native U.2 ports on the Gigabyte mobo do not support hot swap for NVMe?
 

UhClem

just another Bozo on the bus
Jun 26, 2012
NH, USA
...
I assume that by you offering up this card as an idea, this means that the native U.2 ports on the Gigabyte mobo do not support hot swap for NVMe?
[Hey, this stuff is new to me too; retired 25 yrs ago. Strictly a (retirement) hobby now; like, gardening for geeks. But, I'm trying to approach it as a (albeit, unpressured) professional.]

I'm NOT certain that your mobo lacks hot-swap. But if it had it, they'd "brag" about it (document it). If you google for >>nvme hot swap site:xxx.com<<, replacing xxx with intel, hpe, hp, dell, or supermicro, you'll see that the (meaningful) hits are for purpose-built integrated configurations (Intel also has one or two subsystems). Note that the product brief for the LSI 9500-16 card doesn't mention hot-swap, but for the P411W-32P it is prominent and detailed.

Yes, this is all conjecture and deduction; I could be wrong.

[Note that Intel has implemented NVMe hot swap (aka hot plug) in the Xeon Scalable CPUs [Link]]

If you don't NEED hot-swap, you can put 4x M.2s on a single simple PCIe x16 card, using PCIe slot bifurcation (nee quadfurcation); x16 ==> x4x4x4x4 (BIOS setting). No cages, cables, pinout snafus, etc.
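If you go that route, a minimal post-bifurcation sanity check under Linux (assuming the standard /sys/class/nvme sysfs layout; my sketch, not a vendor tool) is to confirm each M.2 enumerates as its own controller negotiating x4:

```python
# Report the negotiated PCIe link speed/width for every NVMe controller.
# After x4x4x4x4 bifurcation, each M.2 drive should show up as its own
# controller at (e.g.) "16.0 GT/s PCIe" and width 4.
import glob
import os

def read_attr(path):
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return "n/a"

for ctrl in sorted(glob.glob("/sys/class/nvme/nvme*")):
    pci_dev = os.path.realpath(os.path.join(ctrl, "device"))
    model = read_attr(os.path.join(ctrl, "model"))
    speed = read_attr(os.path.join(pci_dev, "current_link_speed"))
    width = read_attr(os.path.join(pci_dev, "current_link_width"))
    print(f"{os.path.basename(ctrl)}: {model} @ {speed}, x{width}")
```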
 
  • Like
Reactions: itronin

Medicineman25

New Member
Nov 29, 2021
Oh very cool, hope you're enjoying retirement!

That's the difficulty I'm having, and it's to be expected when stepping into a newish space: a lack of historical context. I thought live hot-swap was a challenge of yesteryear and simply assumed it by now, kind of like how IOMMU is just standard at the levels where it's expected. That may well be the case, but again, I don't have the historical awareness to know whether it's only a recent development.

I'll have to dig around more.

EDIT: According to this article, U.2 "is hot swappable": What you can do with that U.2 port on your motherboard next to the SATA inputs | Poc Network // Tech

... and it appears to be that way by default for any system that supports NVMe over PCIe on U.2. On this board, it appears as though some or all of this support comes from the CPU. Looking at any EPYC build supplier, they mention hot-swap bays, which could mean just the mechanical bay itself, or hot-swap could be an assumed feature of NVMe over PCIe on U.2.

This could all be completely wrong. Shooting in the dark at this point, but it doesn't really matter: I'll just suck it and see.
 

UhClem

just another Bozo on the bus
Jun 26, 2012
NH, USA
I hope I'm wrong. (I want to be wrong!)
... Shooting in the dark at this point, but it doesn't really matter: I'll just suck it and see.
Good for you. If you mean you are going to start by connecting your mobo U.2 port(s) #38 (& #39) to 1 (or 2) of the IcyDock cage's ports, you might try this cable. If Gigabyte and IcyDock adhered to the SFF-9402 spec, it should work. You could contact "Micro SATA Cables" and verify that their RSL38-0501 cable does meet SFF-9402 pinout spec, for both SAS & PCIe use. Below are the pinouts for the cable, and the pinout spec for the two connectors (Root=SFF8654 & EndPoint=SFF8643). It looks like a match, but my eyes glaze over looking at that stuff (I hated EE 55 years ago, and still do; I'm SO thankful to have discovered software at my first job.)
[Images: RSL38-0501 cable pinout; SFF-9402 pg. 25 connector pinout spec]

Or, do you intend to go for a card? Which?

Or, fall back to
... I will simply return the M.2 hotswap cage and use the M.2 drives as-is via an existing pair of PCIe cards.
Which cards are those? And what M.2 drives do you have?

[Edit/Add:
(2 slots out of 4 of course, with the remaining slots connecting to the remaining mobo slimSAS ports).
Do you have both CPU0 & CPU1? (Those "remaining" SlimSAS ports look to be on CPU1.)
]
 

UhClem

just another Bozo on the bus
Jun 26, 2012
435
249
43
NH, USA
Update: SFF-9402
Q: When is a (citation of a) specification NOT a specification?
A: When an oldtimer, who should know better, does not read it carefully.

[Ref: SFF-9402 Specification]
(in my above post) The wiring description (from pg. 25) was prefaced (on pg. 14) with:
[Image: SFF-9402 pg. 14 compatibility caveat]

So, it looks like it's still iffy. Gigabyte makes no mention (in the mobo manual) of any standard/spec for the SlimSAS connectors. Given the mobo vintage (2020-2021), they should be SFF-9402, but why should the customer have to roll the dice? Also, IcyDock's spec for the MB720M2K-B makes no mention of the pinout for the SFF-8643, but it does state a transfer rate of "Up to 64 Gbps" (for 4 lanes, 16 Gbps per lane). [See Observation below]

@Medicineman25 , note that there is a V2 of that IcyDock cage. Its specification explicitly states PCIe 4.0, and uses SFF-8612 connectors, documenting them as "pinout defined by SFF-9402 v1.1".

Observation: The SFF-8643 was commonly used for SAS-3 & PCIe gen3 connections, using (per port/lane) data rate specs of 12 Gbps & 8 Gbps, respectively. In fact, on pg 11 of the 9402 spec, the Spec Document for SFF-8643 is listed as
SFF-8643 Mini Multilane 4/8X 12 Gb/s Unshielded Connector (HD12un)
Ergo, if one truly wants to minimize risk, they might want to avoid using SFF-8643 for PCIe4 (16 Gbps) [& SAS-4 (24 Gbps)].
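To put rough numbers on that risk calculus (standard PCIe line rates with 128b/130b encoding; my arithmetic, not from the spec pages above):

```python
# Raw vs. effective throughput of a 4-lane link, PCIe 3.0 vs 4.0.
ENCODING = 128 / 130                      # 128b/130b encoding (Gen3 & Gen4)
LANES = 4
RATES = {"PCIe 3.0": 8, "PCIe 4.0": 16}   # GT/s per lane

for gen, gt in RATES.items():
    raw = gt * LANES                       # the figure marketing quotes
    effective = raw * ENCODING             # what the wire actually carries
    print(f"{gen} x{LANES}: raw {raw} Gbps, "
          f"effective ~{effective:.1f} Gbps (~{effective/8:.2f} GB/s)")

# PCIe 3.0 x4: raw 32 Gbps, effective ~31.5 Gbps (~3.94 GB/s)
# PCIe 4.0 x4: raw 64 Gbps, effective ~63.0 Gbps (~7.88 GB/s)
# IcyDock's "Up to 64 Gbps" for the cage is the Gen4 x4 raw figure.
```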
 

Medicineman25

New Member
Nov 29, 2021
... Gigabyte makes no mention (in the mobo manual) of any standard/spec for the SlimSAS connectors. ... why should the customer have to roll the dice?
... Ergo, if one truly wants to minimize risk, they might want to avoid using SFF-8643 for PCIe4 (16 Gbps) [& SAS-4 (24 Gbps)].
I agree, it shouldn't be this difficult, and the manufacturer should do better. I'm ditching the M.2 drives in favour of a U.2 drive solution, which appears to be the common approach. In every video I've seen of rack servers being opened, I can see the OCuLink/SlimSAS cable running straight to the hot-swap bay at the front. Whether that runs to some kind of daughter card that supports live hot-swap via an external function, similar to what a PCIe card would provide, I am still unsure.
 
  • Like
Reactions: UhClem

Medicineman25

New Member
Nov 29, 2021
... If you mean you are going to start by connecting your mobo U.2 port(s) #38 (& #39) to 1 (or 2) of the IcyDock cage's ports, you might try this cable. ...
... Do you have both CPU0 & CPU1? (Those "remaining" SlimSAS ports look to be on CPU1.)
Slight correction: I'm going to get their other cage, which is purpose-built for U.2 drives, and connect *that* directly to the U.2 ports. If that doesn't support live hot-swap, then I'll have to decide if I want to sacrifice a slot to the storage gods (which, let's be honest, I probably will haha).
 

UhClem

just another Bozo on the bus
Jun 26, 2012
NH, USA
... I'm ditching the m.2 drives in favour of a u.2 drive solution, which appears to be the common approach.
Sounds like a good plan!
(Pls follow up with the hot-swap outcome.)

Note that IcyDock also has a V2 of their U.2 cage, with the same changes (noted above): connector: SFF-8643 ==> SFF-8612 && pinout: ?? ==> SFF-9402

Enjoy your new system!
Really neat that such capable toys are so accessible/affordable.
[especially, from the perspective of someone who hacked PDP-10 (in '68) & PDP-11 ('72)] (fractional-MIPS CPUs)
 
  • Like
Reactions: Medicineman25