Dell YPNRC questions


josh

Active Member
Oct 21, 2013
615
190
43
Seems like an interesting card, supporting 4x NVMe drives on the R720.

Would this work in a non-Dell server?
Does it have a PLX switch, or does it require bifurcation?

Thanks!
 

Spartacus

Well-Known Member
May 27, 2019
788
328
63
Austin, TX
It's just a SAS bridge, so it should work in any system. That being said, it's also an x8 card, so I'm not sure you get a full x4 of PCIe lanes per drive (even though it requires an x16 slot from the looks of it).
 

Spartacus

Well-Known Member
May 27, 2019
788
328
63
Austin, TX
This is a PCIe extender card. Using SFF-8087 connectors does not make it SAS.
Most listings of that card I could find reference it as a SAS expander bridge, so :shrug:. When it came out, PCIe SSDs were fairly new and didn't have what has since turned into the U.2 designation.
 

josh

Active Member
Oct 21, 2013
615
190
43
This is a PCIe extender card. Using SFF-8087 connectors does not make it SAS.
Yup, it's for PCIe-based drives. I had the same misunderstanding until I did more research. Very curious about it being used in systems other than the R720 and R820 it was made for.
 

josh

Active Member
Oct 21, 2013
615
190
43
It's just a SAS bridge, so it should work in any system. That being said, it's also an x8 card, so I'm not sure you get a full x4 of PCIe lanes per drive (even though it requires an x16 slot from the looks of it).
Where did you see that it only does x8? Everything I've found suggests x4x4x4x4.
 

Spartacus

Well-Known Member
May 27, 2019
788
328
63
Austin, TX
What's "everything"?
The only thing I can find is on sellers' sites, and they say it's x8 (albeit not super trustworthy on specs all the time).
I can't find anything on Dell's site directly referencing the model either, or anything in the other direction that indicates it's x4x4x4x4.

Dell PowerEdge YPNRC 0YPNRC PCI-E SSD 4-Port SAS Bridge Expander Card R720 | eBay
NEW Genuine Dell PowerEdge R720 R820 PCI-E SSD Drives 4-Port SAS Bridge Expander Card YPNRC 0YPNRC CN-0YPNRC - Newegg.com
Genuine Dell PowerEdge R720 R820 PCI-E SSD 4-Port SAS Bridge Expander Card YPNRC | eBay
https://www.rakuten.com/shop/partscomusa/product/1e7-303036000192278/
 

BeTeP

Well-Known Member
Mar 23, 2019
653
429
63
Don't ever go by sellers' descriptions - most of them know shit about the products they sell. The reason you did not find anything about "YPNRC" on dell.com is that it was never sold as a separate product - only as part of Dell's PowerEdge PCIe Express Flash SSD kit.
 

josh

Active Member
Oct 21, 2013
615
190
43
This is basically it. Most sellers are recyclers, not official resellers. I came across a post on this forum that seemed to suggest it does x4 on all 4 drives, but I lost it.

You seem pretty clear about the product, do you have experience working with this card?
 

BeTeP

Well-Known Member
Mar 23, 2019
653
429
63
The company I used to work for as a software developer was using these in their build servers. But I was not personally involved in any server administration - so just user experience.

I have to admit that the idea of picking up the card for about $20 is slightly tempting. But then I would need to buy the rest of the kit - at the very least a set of the original data cables (I vaguely remember reading about failed attempts to use regular SAS cables instead) and the backplane. Then I would need to make a custom power cable for the backplane. All together it is going to set me back probably around $120. At that point I might get the cage and the trays as well. It would still be relatively inexpensive for a proper PLX solution, but slightly more than I am prepared to throw away in case it does not work.

Also I do not know what PCIe switch the board uses, but my understanding is that it is PCIe 2.0. So the x4 throughput per drive is going to be limited to about 1500 MB/s (PCIe 2.0 uses 8b/10b encoding, so roughly 20% overhead on top of the raw 5 GT/s per lane).
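
Rough math behind that number, assuming the usual PCIe 2.0 figures (just a back-of-envelope sketch, not a benchmark):
Code:
# PCIe 2.0 x4: 5 GT/s per lane * 4 lanes, with 8b/10b encoding (80% efficient)
echo $(( 5 * 4 * 8 / 10 / 8 ))   # = 2 GB/s of payload bandwidth before protocol overhead
# TLP/DLLP packet overhead typically costs another 15-25%, hence ~1500-1600 MB/s in practice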
 
Last edited:

josh

Active Member
Oct 21, 2013
615
190
43
The company I used to work for as a software developer was using these in their build servers. But I was not personally involved in any server administration - so just user experience.

I have to admit that the idea of picking up the card for about $20 is slightly tempting. But then I would need to buy the rest of the kit - at the very least a set of the original data cables (I vaguely remember reading about failed attempts to use regular SAS cables instead) and the backplane. Then I would need to make a custom power cable for the backplane. All together it is going to set me back probably around $120. At that point I might get the cage and the trays as well. It would still be relatively inexpensive for a proper PLX solution, but slightly more than I am prepared to throw away in case it does not work.

Also I do not know what PCIe switch the board uses, but my understanding is that it is PCIe 2.0. So the x4 throughput per drive is going to be limited to about 1500 MB/s (PCIe 2.0 uses 8b/10b encoding, so roughly 20% overhead on top of the raw 5 GT/s per lane).
By "using these" do you mean the R720s, or just the card itself with the whole setup of cables and backplane?
There was a post on this forum that mentioned boards using the V2 chips would solve that PCIe 2.0 problem - the V2 version would automatically switch to PCIe 3.0. But you do confirm that these cards have PLX on them and I can just stick them into regular PCIe slots?
 

BeTeP

Well-Known Member
Mar 23, 2019
653
429
63
They were using the kits with R820 servers. The servers were purchased with the kits preinstalled. I do not know with 100% certainty whether the board has a PLX on it. That's what I was told at the time, but I have not seen it with my own eyes.
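
If anyone already has one of these in a Linux box, something like this should settle the PLX question (10b5 is the PLX/Broadcom vendor ID - just a suggestion, I have not run it against this exact card):
Code:
# list any PLX devices the kernel can see; a switch on the YPNRC should show up here
lspci -nn -d 10b5: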
 

josh

Active Member
Oct 21, 2013
615
190
43
They were using the kits with R820 servers. The servers were purchased with the kits preinstalled. I do not know with 100% certainty whether the board has a PLX on it. That's what I was told at the time, but I have not seen it with my own eyes.
These are really tempting in that case. Would have to figure out how to fill the 4 empty bays in the Dell cage though.
 

Markess

Well-Known Member
May 19, 2018
1,146
761
113
Northern California
But you do confirm that these cards have PLX on them and I can just stick them into regular PCIe slots?
I can't confirm it either, but everything I've been able to find says that none of Dell's Gen 12 servers support bifurcation. I got one recently, so I'd been looking. But this looks promising - I hope that if someone tries it, they post the results here.

It's murky, though, exactly when bifurcation support came in. Gen 14 definitely seems to have it, but I've read a few posts from people claiming Gen 13 (Rx30) did too.
 

BeTeP

Well-Known Member
Mar 23, 2019
653
429
63
I can confirm that it is using a PEX 8632. And it does work in any standard (i.e. non-Dell) PCIe 2.0 x16 slot.

There is one thing though. It seems that not all YPNRC boards are the same - not all of them are configured as x16 to x4x4x4x4. Some of the boards came out of different Dell PCIe kits and can be configured differently, e.g. x16 to x8x4x4. And as usual the resellers would not know/care.

lspci output for x16 to x8x4x4 variant:
Code:
lspci -d 10b5:8632:0604  -vv | grep LnkCap:
                LnkCap: Port #0, Speed 5GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <1us, L1 <2us
                LnkCap: Port #4, Speed 5GT/s, Width x8, ASPM L0s L1, Exit Latency L0s <2us, L1 <4us
                LnkCap: Port #5, Speed 5GT/s, Width x4, ASPM L0s L1, Exit Latency L0s <2us, L1 <4us
                LnkCap: Port #6, Speed 5GT/s, Width x4, ASPM L0s L1, Exit Latency L0s <2us, L1 <4us
lspci output for x16 to x4x4x4x4 variant:
Code:
lspci -d 10b5:8632:0604  -vv | grep LnkCap:
                LnkCap: Port #0, Speed 5GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <1us, L1 <2us
                LnkCap: Port #4, Speed 5GT/s, Width x4, ASPM L0s L1, Exit Latency L0s <2us, L1 <4us
                LnkCap: Port #5, Speed 5GT/s, Width x4, ASPM L0s L1, Exit Latency L0s <2us, L1 <4us
                LnkCap: Port #6, Speed 5GT/s, Width x4, ASPM L0s L1, Exit Latency L0s <2us, L1 <4us
                LnkCap: Port #7, Speed 5GT/s, Width x4, ASPM L0s L1, Exit Latency L0s <2us, L1 <4us
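One caveat when reading that output: LnkCap only shows each port's maximum capability. With drives attached, the negotiated link is what matters, so (assuming the card enumerates the same way on your system) check LnkSta as well:
Code:
lspci -d 10b5:8632:0604 -vv | grep LnkSta: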
 
Last edited:

BeTeP

Well-Known Member
Mar 23, 2019
653
429
63
I have 4x Intel DC P3520 connected to the YPNRC in a test system (aka a bunch of parts piled up on top of my desk). But I need to reclaim that desk space by Monday. So if anyone has more questions - now is a great time to ask.



Code:
lspci -t
-[0000:00]-+-00.0
           +-01.0-[01-06]----00.0-[02-06]--+-04.0-[03]----00.0
           |                               +-05.0-[04]----00.0
           |                               +-06.0-[05]----00.0
           |                               \-07.0-[06]----00.0
           +-19.0
           +-1a.0
           +-1c.0-[07]--
           +-1c.4-[08]----00.0
           +-1c.5-[09]----00.0
           +-1d.0
           +-1e.0-[0a]--
           +-1f.0
           +-1f.2
           \-1f.3
Code:
lspci -Dnnd 10b5:8632:604
0000:01:00.0 PCI bridge [0604]: PLX Technology, Inc. PEX 8632 32-lane, 12-Port PCI Express Gen 2 (5.0 GT/s) Switch [10b5:8632] (rev bb)
0000:02:04.0 PCI bridge [0604]: PLX Technology, Inc. PEX 8632 32-lane, 12-Port PCI Express Gen 2 (5.0 GT/s) Switch [10b5:8632] (rev bb)
0000:02:05.0 PCI bridge [0604]: PLX Technology, Inc. PEX 8632 32-lane, 12-Port PCI Express Gen 2 (5.0 GT/s) Switch [10b5:8632] (rev bb)
0000:02:06.0 PCI bridge [0604]: PLX Technology, Inc. PEX 8632 32-lane, 12-Port PCI Express Gen 2 (5.0 GT/s) Switch [10b5:8632] (rev bb)
0000:02:07.0 PCI bridge [0604]: PLX Technology, Inc. PEX 8632 32-lane, 12-Port PCI Express Gen 2 (5.0 GT/s) Switch [10b5:8632] (rev bb)
Code:
lspci -Dnnd ::108
0000:03:00.0 Non-Volatile memory controller [0108]: Intel Corporation DC P3520 SSD [8086:0a53] (rev 02)
0000:04:00.0 Non-Volatile memory controller [0108]: Intel Corporation DC P3520 SSD [8086:0a53] (rev 02)
0000:05:00.0 Non-Volatile memory controller [0108]: Intel Corporation DC P3520 SSD [8086:0a53] (rev 02)
0000:06:00.0 Non-Volatile memory controller [0108]: Intel Corporation DC P3520 SSD [8086:0a53] (rev 02)
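For a quick sanity check against that ~1500 MB/s PCIe 2.0 ceiling, a sequential read from one of the drives is enough (assuming the P3520s enumerate as /dev/nvme0n1 and so on - read-only, so nothing gets touched):
Code:
# read 4 GiB straight from the first drive, bypassing the page cache
dd if=/dev/nvme0n1 of=/dev/null bs=1M count=4096 iflag=direct status=progress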
 
Last edited:

josh

Active Member
Oct 21, 2013
615
190
43
I have 4x Intel DC P3520 connected to the YPNRC in a test system (aka a bunch of parts piled up on top of my desk). But I need to reclaim that desk space by Monday. So if anyone has more questions - now is a great time to ask.



Code:
lspci -t
-[0000:00]-+-00.0
           +-01.0-[01-06]----00.0-[02-06]--+-04.0-[03]----00.0
           |                               +-05.0-[04]----00.0
           |                               +-06.0-[05]----00.0
           |                               \-07.0-[06]----00.0
           +-19.0
           +-1a.0
           +-1c.0-[07]--
           +-1c.4-[08]----00.0
           +-1c.5-[09]----00.0
           +-1d.0
           +-1e.0-[0a]--
           +-1f.0
           +-1f.2
           \-1f.3
Code:
lspci -Dnnd 10b5:8632:604
0000:01:00.0 PCI bridge [0604]: PLX Technology, Inc. PEX 8632 32-lane, 12-Port PCI Express Gen 2 (5.0 GT/s) Switch [10b5:8632] (rev bb)
0000:02:04.0 PCI bridge [0604]: PLX Technology, Inc. PEX 8632 32-lane, 12-Port PCI Express Gen 2 (5.0 GT/s) Switch [10b5:8632] (rev bb)
0000:02:05.0 PCI bridge [0604]: PLX Technology, Inc. PEX 8632 32-lane, 12-Port PCI Express Gen 2 (5.0 GT/s) Switch [10b5:8632] (rev bb)
0000:02:06.0 PCI bridge [0604]: PLX Technology, Inc. PEX 8632 32-lane, 12-Port PCI Express Gen 2 (5.0 GT/s) Switch [10b5:8632] (rev bb)
0000:02:07.0 PCI bridge [0604]: PLX Technology, Inc. PEX 8632 32-lane, 12-Port PCI Express Gen 2 (5.0 GT/s) Switch [10b5:8632] (rev bb)
Code:
lspci -Dnnd ::108
0000:03:00.0 Non-Volatile memory controller [0108]: Intel Corporation DC P3520 SSD [8086:0a53] (rev 02)
0000:04:00.0 Non-Volatile memory controller [0108]: Intel Corporation DC P3520 SSD [8086:0a53] (rev 02)
0000:05:00.0 Non-Volatile memory controller [0108]: Intel Corporation DC P3520 SSD [8086:0a53] (rev 02)
0000:06:00.0 Non-Volatile memory controller [0108]: Intel Corporation DC P3520 SSD [8086:0a53] (rev 02)
What is the full hardware setup? Did you need the cage? Do you need bifurcation?
 

BeTeP

Well-Known Member
Mar 23, 2019
653
429
63
I am using the switch card + set of cables + backplane. The cage is not important for PoC testing, so I did not get it. But I would definitely recommend using the cage in any production setting. As for bifurcation - the whole point of using the YPNRC is not to rely on the motherboard's bifurcation support. It is not needed.
 

josh

Active Member
Oct 21, 2013
615
190
43
How do you power the backplane without the cage? Do you have a list of parts? Does it handle full PCIe 3 speeds?
 

BeTeP

Well-Known Member
Mar 23, 2019
653
429
63
The cage has nothing to do with powering up the backplane. It's just a piece of steel to hold the drives in place.
The backplane needs 12V through a Molex MicroFit Jr 3.0 8-pin connector - I just made an adapter from the original Dell cable.

As for the speed, I have said it a few times already (and even posted a link to the PEX 8632 spec): it is PCIe 2.0. Expecting PCIe 3.0 support from a $25 card is a little bit too much.