Dell R730xd with PCIe Extender for U.2 NVMe

microserf

New Member
May 20, 2019
I'm looking for a sanity check and maybe some direction from anyone with a Dell PCIe Extender in an R730xd 24+2 bay server.

I've purchased two PCIe Extenders from two different sources. One, new from China, came with LP and FH brackets and has a GY1TD part number. The other, purchased used from a US seller with an FH bracket, has a P31H2 part number. GY1TD is the Dell part number for a card with an LP bracket for an R630 10 bay, and P31H2 is the Dell part number for a card with an FH bracket for an R730xd SFF.

They're being installed in slot 4, per the manual, and connected to the backplane using the custom length Mini SAS HD cable with Dell part number 1PDFM. I've got three of these cables, purchased from two different vendors.

The cards are recognized and appear in iDRAC->Storage->Controllers when installed and connected to the backplane. The used U.2 Intel P3600 NVMe I picked up to test all of this with is absent when an HBA330 mini is installed. If an H730P mini is installed instead of the HBA330, the U.2 SSD is found by CentOS 8.2 and the newly discovered NVMe controller happily reports device info.
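
For anyone reproducing this, the OS-side checks are straightforward (assuming the `pciutils` and `nvme-cli` packages are installed; this is a generic diagnostic sketch, not output from my machines):

```shell
# Confirm the U.2 drive enumerated as a PCIe device
lspci -nn | grep -i 'non-volatile'

# List the NVMe controllers/namespaces the kernel found
nvme list

# If the drive is missing, the kernel log usually says why
dmesg | grep -i nvme
```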

Does the PCIe Extender work with an HBA330 mini?
 

microserf

New Member
May 20, 2019
After landing back on the forums for an unrelated subject following a long absence, I remembered this thread and figured I should update it.

TL;DR it was a PEBKAC issue.

Thanks for the link to the firmware thread. I'd looked through it when I was considering a card swap in an R330 and found it informative. It's become more so since.

I actually have three R730xd servers. One of them is in the homelab rack and the other two were picked up for a project. When the PCIe Extender started giving me problems, I tried it in each machine. The original machine in the homelab rack picked it up and worked fine. The other two continued to frustrate me. There's a long story of the lengths I went to trying to get it working in the two project servers but, in the end, it came down to an incorrect BIOS setting.

In iDRAC, System Setup -> Advanced Hardware Configuration -> System BIOS -> Integrated Devices -> Slot Disablement:
Global Slot Boot Driver Disable: set to Disabled
Slot 4: set to Enabled

[Attachment: R730xd PCIe Extender slot settings.png]
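
For the remote-management inclined, the same setting can be flipped with racadm. The attribute group and value names below are assumptions based on Dell's BIOS attribute registry, so list what your system actually exposes first:

```shell
# Inspect the slot-disablement attributes this BIOS exposes
racadm get BIOS.SlotDisablement

# Re-enable slot 4 (attribute/value names assumed; verify with the get above)
racadm set BIOS.SlotDisablement.Slot4 Enabled

# Stage the change; it applies on the next power cycle
racadm jobqueue create BIOS.Setup.1-1 -r pwrcycle -s TIME_NOW
```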
 

ericloewe

Member
Apr 24, 2017
Does the PCIe Extender work with an HBA330 mini?
To elaborate on what is going on, U.2 backplanes have separate connections for SATA/SAS and NVMe all the way through to the disk. This means that, conceptually, they're completely independent.
U.3 does share differential pairs between SATA/SAS and NVMe. Since U.3 disks must support U.2 backplanes, I imagine it was done to cut down on the number of differential pairs that need to be routed on backplanes. It may seem like a secondary concern, but with the air holes in the way, PCB layer counts can quickly blow up. It's most relevant with the mythical tri-mode expanders, which Broadcom seems to have priced up into irrelevance.

In any case, the Dell card has me wondering why it has the large heatsink. The R630 (and R730, I presume) supports x4/x4/x4/x4 bifurcation of the PCIe x16 slot, so a PCIe switch isn't needed or advantageous. At most you'd need some redrivers for signal conditioning, but those don't justify a heatsink. So what is Dell doing with that card?
 

microserf

New Member
May 20, 2019
I probably should have stated: a Dell PCIe Extender (P/N: P31H2 or GY1TD) works with the HBA330 mini, H730 mini, and H730P mini in an R730xd. The H330 mini was not tested.
In any case, the Dell card has me wondering why it has the large heatsink. The R630 (and R730, I presume) supports x4/x4/x4/x4 bifurcation of the PCIe x16 slot, so a PCIe switch isn't needed or advantageous. At most you'd need some redrivers for signal conditioning, but those don't justify a heatsink. so what is Dell doing with that card?
Layout-wise, slots 4 and 6 each have sixteen lanes. The cable (P/N: 1PDFM) is sized to mate with the card when seated in slot 4; it won't reach slot 6. That's a problem if you want to stick, say, a double-slot Tesla GPU in the 2U chassis.

Slots 4 and 6 have three bifurcation settings: Default, x4 x4 x4 x4, and x8 x8. I ended up using "Default."

[Attachments: R730xd PCIe Extender slot 4 bifurcation options.png, R730xd PCIe Extender slot bifurcation settings.png]
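
Whichever bifurcation setting you land on, the result is easy to verify from Linux (the device address below is an example, not from my machines):

```shell
# With a working x4/x4/x4/x4 split, each U.2 drive shows up as its own device
lspci -tv | grep -i nvme

# Check the negotiated link width/speed on one controller (example address)
lspci -s 04:00.0 -vv | grep 'LnkSta:'
```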
 

R730XD-Plex

New Member
Mar 5, 2021
I probably should have stated: a Dell PCIe Extender (P/N: P31H2 or GY1TD) works with the HBA330 mini, H730 mini, and H730P mini in an R730xd. The H330 mini was not tested.

Layout wise, slots 4 and 6 have sixteen lanes. The cable (P/N: 1PDFM) is sized to mate with the card when seated in slot 4. It won't reach slot 6. That's a problem if you want to stick, say, a Tesla double-slot GPU in the 2U case.

Slots 4 and 6 have three bifurcation settings: Default, x4 x4 x4 x4, x8 x8. I ended up using "Default."

Thanks a lot for posting this feedback. I've been driving myself crazy for a month trying to get PCIe boot enabled on my R730xd. I don't have it all tied up yet, but this is definitely a step closer.
 

R730XD-Plex

New Member
Mar 5, 2021
As a follow-up for anyone else, I found several more important pieces of info:

1. The R730XD with the 2.5" SSD backplane can accept the Dell PCIe NVMe enablement kit, which patches into the available miniSAS connectors. After that, it'll take U.2 PCIe NVMe SSDs in the four rightmost 2.5" front bays. It's not the M.2 form factor, but it's the same capability, so Bob's your uncle. This configuration is PCIe NVMe boot enabled.

If you don't have that 2.5" SSD backplane and have the 3.5" HDD one instead, you're out of luck with the PCIe NVMe enablement kit/wire harness. There are other, unused miniSAS connectors, but they don't get you anywhere.

2. If you have the 3.5" HDD backplane, you can get the Dell OEM# M7W47 BOSS (Boot Optimized Storage Solution) PCIe card. This card is intended for the R740XD, but I have confirmed fully stable PCIe NVMe boot with it in my R730XD.

The BOSS card takes two 22x80 mm (2280) M.2 PCIe 3.0 x4 NVMe SSDs. It is recognized by iDRAC and RAID capable.

In my case, I cut the mounting screws off with a fine hacksaw and installed two 22x110 mm (22110) Intel 905P 380 GB PCIe NVMe SSDs. I secured them with high-heat zip ties, and now I've got exactly what I wanted in the beginning.
 

frogtech

Well-Known Member
Jan 4, 2016
There's an increase in the default fan PWM duty cycle when you install this enablement kit, isn't there? I've been told that the minimum fan speed goes up to about 38%. Have you observed this?
 

anemoiac

New Member
Jan 7, 2021
There's an increase in the default PWM fan cycle when you install this enablement kit, isn't there? I've been told that the minimum fan % goes up to about 38%. Have you observed this?
I would also be interested in knowing the answer to this ^
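
I can't speak to the kit itself, but the usual community answer to a raised fan floor on these machines is the raw IPMI override. These raw commands are widely reported for 12th/13th-gen PowerEdge but unofficial and unsupported, so treat them as an at-your-own-risk sketch:

```shell
# Take manual control of the fan profile
ipmitool raw 0x30 0x30 0x01 0x00

# Set all fans to a fixed duty cycle; the last byte is the percentage in hex
printf 'Hex for 38%%: 0x%02x\n' 38
ipmitool raw 0x30 0x30 0x02 0xff 0x26

# Hand control back to the automatic profile
ipmitool raw 0x30 0x30 0x01 0x01
```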
 

TrumanHW

Member
Sep 16, 2018
If you're using ZFS ... any chance of getting NVMe benchmarks for the PowerEdge servers (R630 / R730 / R720xd)? :)

Any chance of tiering in ZFS with U.2 NVMe + spinning drives? :-D
(I'd love to know exactly how.)

I have a T320 and am contemplating adding NVMe drives ... but I don't know if the CPU is good enough.

THANKS!!