Dell YPNRC questions


josh

Active Member
Oct 21, 2013
615
190
43
The cage has nothing to do with powering up the backplane. It's just a piece of steel to hold the drives in place.
The backplane needs 12V through a Molex MicroFitJr3.0 8-pin connector - I just made an adapter from the Dell original cable.

As for the speed, I have said a few times already (and even posted a link to the PEX8632 spec) that it is PCIe 2.0. Expecting PCIe 3.0 support from a $25 card is a bit much.
Do you mind posting pics of the adapter you made? I'm usually afraid to make my own adapter for fear of sending the wrong amount of current.
The last post in this thread suggests it's a CPU limitation? Unless he's mistaken.
https://forums.servethehome.com/index.php?threads/dell-r720-nvme-sadness.8543/
 

BeTeP

Well-Known Member
Mar 23, 2019
653
429
63
He is talking about a separate limitation.

The power adapter is very simple. I used the Dell original power cable P/N 123W8, which uses the standard color code. So connect all the black wires to ground and all the yellow ones to 12V (ignore the grays - the Dell motherboard uses them to sense the presence of the backplane) and that's all there is to it.
 

BeTeP

Well-Known Member
Mar 23, 2019
653
429
63
Is there any specific benchmark you want me to run (keep it Linux CLI, this is a headless Proxmox node)? Like a specific fio command?
 

BeTeP

Well-Known Member
Mar 23, 2019
653
429
63
I was surprised to find out that the disk cage (just a piece of steel) is the most expensive part of the whole kit.

YPNRC PCIe extender card - $20
123W8 8-pin power cable - $10 (1 cable cut in half is good for 2 DIY adapters)
4V75P 4x PCIe data cables - $15
693W6 4x NVMe backplane - $25

but the 4x2.5" drive cage for R720/R820 is still selling for over $100!
so I just got the drive cage for T620 instead:
5TX8J 4x2.5" cage for T620 - $30
Dell Gen12 2.5" disk tray - $4ea

I ran some benchmarks and each of the drives tops out at just about 1400MB/s - close enough to the obvious PCIe 2.0 x4 limit (theoretical 2000MB/s minus overhead). That is lower than the same drives can reach through an M.2 adapter, but still higher than single-port SAS3. So I'd say it was worth the ~$100 total (shipped) I paid for it.
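For anyone who wants to reproduce the numbers, a sequential-read fio run along these lines should show the same ceiling (the /dev/nvme0n1 path and job settings below are just placeholders - point it at your own drive):
Code:
# read-only sequential test against the raw device, 1M blocks, queue depth 32
fio --name=seqread --filename=/dev/nvme0n1 --rw=read --bs=1M --iodepth=32 \
    --ioengine=libaio --direct=1 --runtime=60 --time_based --group_reporting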

If anyone wants to use this config in a desktop system - make sure to provide adequate cooling. The drives will get baked without airflow.
 

BeTeP

Well-Known Member
Mar 23, 2019
653
429
63
GY1TD uses PEX8734 (32-Lane, 8-Port PCI Express Gen 3 (8 GT/s) Switch).
The pinout is not compatible with the cheap SFF-8643 to SFF-8639 cables used with M.2 adapters, so you would have to use the whole Dell kit again, which increases the cost even further and makes it not that interesting to me at the moment - unless someone can figure out the switch config to remap the lanes and make it compatible with the cheap cables.
 

redeamon

Active Member
Jun 10, 2018
291
207
43
BeTeP - thank you for all this info.

Quick question:
On an R630/R730, the bus is x16 and the BIOS supports x4x4x4x4 bifurcation, so why is a PLX chip necessary? Can't they just run the four x4 ports directly without the chip?
 

Markess

Well-Known Member
May 19, 2018
1,146
761
113
Northern California
Quick question:
On an R630/R730, if the bus is x16 and the BIOS supports x4x4x4x4 bifurcation, then why is a PLX chip even necessary? Can't they just run the four x4 ports directly without the chip?
If your Rx30 supports bifurcation, then you shouldn't need a PLX chip. These cards were originally designed for prior generation servers (Rx20 for example) that didn't have bifurcation.
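A quick generic Linux check makes the difference easy to see (nothing Dell-specific here, just assuming you have lspci available): with a PLX card the NVMe drives sit behind a Broadcom/PLX (vendor 10b5) switch, while with plain bifurcation they hang directly off the CPU root ports.
Code:
# any PLX/Broadcom switch ports present?
lspci -d 10b5:
# full topology - drives behind a switch card show up one level deeper than bifurcated ones
lspci -tv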
 

redeamon

Active Member
Jun 10, 2018
291
207
43
If your Rx30 supports bifurcation, then you shouldn't need a PLX chip. These cards were originally designed for prior generation servers (Rx20 for example) that didn't have bifurcation.
I agree, but unless I'm mistaken I've never seen a breakout card without the PLX chip... which is why I'm confused.

i.e.,
GY1TD DELL POWEREDGE SERVER SSD NVMe EXTENDER EXPANSION CONTROLLER CARD | eBay

for
DELL POWEREDGE R730xd 24SFF +2SFF BAREBONE CHASSIS 2x HS 2x PSU 2x PSU 331FLR | eBay
 

BeTeP

Well-Known Member
Mar 23, 2019
653
429
63
I do not have any Dell servers myself, but if I were to make an educated guess I'd say it is required to implement proper hot-swap support.
 

Al.Poida

New Member
Nov 1, 2021
1
0
1
I can confirm that it is using a PEX 8632. And it does work in any standard (i.e. non-Dell) PCIe 2.0 x16 slot.

There is one thing though. It seems that not all YPNRC cards are the same - not all of them are configured as x16 to x4x4x4x4. Some of the boards came out of different Dell PCIe kits and can be configured differently, e.g. x16 to x8x4x4. And as usual the resellers wouldn't know/care.

lspci output for x16 to x8x4x4 variant:
Code:
lspci -d 10b5:8632:0604  -vv | grep LnkCap:
                LnkCap: Port #0, Speed 5GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <1us, L1 <2us
                LnkCap: Port #4, Speed 5GT/s, Width x8, ASPM L0s L1, Exit Latency L0s <2us, L1 <4us
                LnkCap: Port #5, Speed 5GT/s, Width x4, ASPM L0s L1, Exit Latency L0s <2us, L1 <4us
                LnkCap: Port #6, Speed 5GT/s, Width x4, ASPM L0s L1, Exit Latency L0s <2us, L1 <4us
lspci output for x16 to x4x4x4x4 variant:
Code:
lspci -d 10b5:8632:0604  -vv | grep LnkCap:
                LnkCap: Port #0, Speed 5GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <1us, L1 <2us
                LnkCap: Port #4, Speed 5GT/s, Width x4, ASPM L0s L1, Exit Latency L0s <2us, L1 <4us
                LnkCap: Port #5, Speed 5GT/s, Width x4, ASPM L0s L1, Exit Latency L0s <2us, L1 <4us
                LnkCap: Port #6, Speed 5GT/s, Width x4, ASPM L0s L1, Exit Latency L0s <2us, L1 <4us
                LnkCap: Port #7, Speed 5GT/s, Width x4, ASPM L0s L1, Exit Latency L0s <2us, L1 <4us
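So if you just want a quick way to tell which variant you have (assuming the card is installed in any standard PCIe slot on a Linux box), count the ports and their widths - LnkCap reports the configured width even with nothing plugged into the data connectors:
Code:
# the x16 line is the upstream port; three downstream ports (x8+x4+x4) = x8x4x4 variant,
# four x4 downstream ports = x4x4x4x4 variant
lspci -d 10b5:8632 -vv | grep LnkCap: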
Can I ask you two questions about the Dell YPNRC expander card?
1. Will an SFF-8087 to 4x SATA cable from AliExpress work with it (HBA mode, SATA, 16x HDD)?
2. How do you tell whether a board is the x16 to x8x4x4 or the x16 to x4x4x4x4 variant if it is bought separately, without the Dell kit?
There is no such information anywhere at all!
Thanks in advance!
 

ceorl yip

New Member
May 4, 2017
9
0
1
With this expansion setup, can Windows boot from the NVMe SSD (UEFI)? Can you create a virtual disk under the controller BIOS, do RAID config...?
Asking for an R820, looking to populate the second cage.

thanks
 

Mithril

Active Member
Sep 13, 2019
354
106
43
Are there any cables for this card, or for the Gen3 ones (GY1TD or P31H2), that convert from Dell's pinout to a more standard one, such as standard SFF-8643 or U.2?
 

anoother

Member
Dec 2, 2016
133
22
18
34
Are there any cables for this card, or for the Gen3 ones (GY1TD or P31H2), that convert from Dell's pinout to a more standard one, such as standard SFF-8643 or U.2?
Hmm, is the pinout for GY1TD/P31H2 not standard? I was looking to use the WYNC0 backplane that I think matches with these, but with a different HBA...
 

Mithril

Active Member
Sep 13, 2019
354
106
43
Hmm, is the pinout for GY1TD/P31H2 not standard? I was looking to use the WYNC0 backplane that I think matches with these, but with a different HBA...
According to people on this forum and elsewhere, no - it's a Dell-shenanigans pinout, at least for the older card that uses the SAS2 (rectangular) connectors: a SAS2-to-SAS3 (SFF-8643) cable run to, say, one of the M.2-to-SFF-8643 adapters that DO work with standard SFF-8643-to-U.2 cables does not work here.
I THINK (but could be wrong) that the newer PCIe Gen3 card and backplane also use a non-standard pinout and cables, to the point where someone said that even with the Dell card and backplane you still needed Dell's cables.

Since you mentioned an HBA, note that as far as I know both the older and newer cards are ONLY for PCIe (U.2) drives and will not support SAS/SATA drives. Even setting aside the pinout/cable issues, I don't know whether they would support hybrid HBAs that do both PCIe and SAS, and putting a SAS/SATA drive in here would be a waste even if it were supported, since you'd be tying up an x4 port's worth of lanes per drive.
 

anoother

Member
Dec 2, 2016
133
22
18
34
Yeah, by HBA I meant a bifurcating PCIe adapter... I have a couple with PLX switches (including a P31H2) but was pleasantly surprised to see the R730 supports bifurcation on all slots.

Re. cables, I'm wondering if the difference (with gen3) is just the standard one of "normal"/SAS SFF-8643 cables missing a refclock signal and/or greater signal integrity required for PCIe. I suppose I could test this by plugging a U.2 drive directly into the P31H2; Just need to find one I don't care about too much :D
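If I get around to it, the check itself is simple (the 41:00.0 address below is just an example - substitute whatever lspci assigns to the drive): see whether it enumerates at all and what speed/width the link actually trained at.
Code:
# does the drive show up?
lspci | grep -i "non-volatile"
# supported vs negotiated link speed/width for the drive
sudo lspci -s 41:00.0 -vv | grep -E "LnkCap|LnkSta"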
 

Mithril

Active Member
Sep 13, 2019
354
106
43
Yeah, by HBA I meant a bifurcating PCIe adapter... I have a couple with PLX switches (including a P31H2) but was pleasantly surprised to see the R730 supports bifurcation on all slots.

Re. cables, I'm wondering if the difference (with gen3) is just the standard one of "normal"/SAS SFF-8643 cables missing a refclock signal and/or greater signal integrity required for PCIe. I suppose I could test this by plugging a U.2 drive directly into the P31H2; Just need to find one I don't care about too much :D
I feel like I've run a "normal" U.2-to-SFF-8643 cable into an SFF-8643-to-PCIe-slot adapter before, but maybe I'm dreaming? :D
 

Mithril

Active Member
Sep 13, 2019
354
106
43
Yeah, by HBA I meant a bifurcating PCIe adapter... I have a couple with PLX switches (including a P31H2) but was pleasantly surprised to see the R730 supports bifurcation on all slots.

Re. cables, I'm wondering if the difference (with gen3) is just the standard one of "normal"/SAS SFF-8643 cables missing a refclock signal and/or greater signal integrity required for PCIe. I suppose I could test this by plugging a U.2 drive directly into the P31H2; Just need to find one I don't care about too much :D
Did you ever try this? If not, can you get some high-res pictures of which pins are and are not connected on the card? We could perhaps compare against "standard" cables.