I'm looking at the x16 cards that support 8 x U.2 drives. The OP has given two options - here and here - one is $277, the other is $135. Does anybody know what the difference between them is? They seem pretty comparable, so I'm not sure why the price difference. The first one says it uses a PEX8749 controller, the second says a PLX8748 controller (although I assume this is Broadcom PEX8748?).

PLX was the company (acquired by Broadcom) that produced these switching chips; the parts are named e.g. PEX8648 or PEX8748. So the full name would be PLX PEX8648 - but for the PEX8748 I am not sure it was ever marketed by PLX, so it is probably just called Broadcom PEX8748. Either way it is the same chip, and the proper name is PEX8748 irrespective of when it was released. The PEX8749 has more features than the PEX8748: the notable differences are DMA engines (4 vs. none), non-transparent ports (2 vs. 1) and port count (16 vs. 12). So the PEX8749 would definitely cost more, but a roughly 2x price difference may also be down to brand/make.

Got it - thank you!
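If you have one of these cards on hand and want to confirm which switch chip it actually carries, the PCI IDs are usually enough. Here is a minimal sketch, assuming a Linux host with pciutils installed; 10b5 is the PLX/Broadcom vendor ID, and on these PEX parts the device ID generally mirrors the part number (e.g. 8747/8748/8749) - treat that mapping as an assumption and check it against your own lspci output.

# Sketch: list PLX/Broadcom PCIe switch functions by PCI ID (assumes Linux + pciutils).
# Vendor 10b5 is PLX/Broadcom; on PEX parts the device ID usually mirrors the part number.
import subprocess

def find_plx_switches():
    out = subprocess.run(["lspci", "-nn"], capture_output=True, text=True, check=True).stdout
    return [line.strip() for line in out.splitlines() if "[10b5:" in line.lower()]

if __name__ == "__main__":
    for line in find_plx_switches():
        print(line)   # e.g. "... PEX 8748 ... [10b5:8748]" would indicate a PEX8748-based card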
Hi all
I have recently ordered an LRNV9547-4I card directly from Linkrel in China.
This card utilizes a PLX 8747 chip with 48 lanes.
I paid US$258 plus shipping; delivery to Switzerland via FedEx took just a few days.
The card is working fine, though at the moment I only have two M.2 drives to test with.
I installed it in a Lenovo/IBM x3550 M4 (1U) server. I had to swap the x8 riser card for an x16 riser and modify the server a little bit:
The CMOS battery was originally installed upright on the board and prevented the card from seating correctly. I removed it and soldered a new battery with attached wires (and a plug) to the board. I also had to remove a small pillar, which was useless anyway, to make room for the card.
The Lenovo BIOS does not have an NVMe driver, so it cannot boot from the M.2 drives. I tried injecting one, but afterwards online flashing was not possible due to a verification failure. Flashing via SPI worked, but the system would not boot, so I reverted.
Anyway, I have some SATA SSDs installed on an LSI RAID card to boot from, which is sufficient.
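Once drives are attached, it's worth confirming that each NVMe device behind the PEX8747 actually negotiates the expected link width and speed. A minimal sketch below, assuming a Linux sysfs layout; it only reads the standard link attributes of each NVMe controller's PCI function.

# Sketch: report the negotiated PCIe link width/speed of each NVMe controller
# (assumes Linux sysfs; drives behind the switch should typically show x4 at 8.0 GT/s).
import glob
import os

def read_attr(pci_dir, attr):
    try:
        with open(os.path.join(pci_dir, attr)) as f:
            return f.read().strip()
    except OSError:
        return "n/a"

for ctrl in sorted(glob.glob("/sys/class/nvme/nvme*")):
    pci_dir = os.path.realpath(os.path.join(ctrl, "device"))   # the controller's PCI function
    width = read_attr(pci_dir, "current_link_width")           # expect "4" for an x4 M.2/U.2 drive
    speed = read_attr(pci_dir, "current_link_speed")           # e.g. "8.0 GT/s PCIe" for Gen3
    print(f"{os.path.basename(ctrl)}: x{width} @ {speed}")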
> I'd be super curious to see benchmarks of how the drives perform. If they are within 5% of "direct" (not through the PLX), that would be great to know.

I would be super disappointed in any switch-based card which imposed more than a 1-2% performance penalty on a drive's raw (unswitched) performance.

Are you considering performance in terms of bandwidth, latency, or both? My expectation would be that there would be nearly zero bandwidth penalty, but there would definitely be a latency impact. That latency penalty should be small, and will likely be fixed, so expressed as a percentage it would look better when measured on high-latency drives, and worse with something like an Optane drive.
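To make the "a fixed latency adder looks worse on a fast drive" point concrete, here is a back-of-the-envelope calculation. The switch-added latency and the per-drive QD1 read latencies below are illustrative assumptions, not measurements of any particular card or SSD.

# Illustrative only: how a fixed switch-added latency shows up as a percentage.
switch_added_us = 0.15   # assumed per-traversal switch latency, microseconds (illustrative)
drives_us = {"Optane-class": 10.0, "fast TLC NAND": 50.0, "slower NAND": 80.0}  # assumed QD1 read latencies

for name, base_us in drives_us.items():
    penalty_pct = 100.0 * switch_added_us / base_us
    print(f"{name}: {base_us:.0f} us base + {switch_added_us} us switch = {penalty_pct:.2f}% slower")
# The same fixed delay is a visibly larger fraction of an Optane-class read than of a NAND read.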
I've tested the ANU28PE16 (PEX8748 chip, in an x16 slot) with[**] an SK Hynix P31 1TB, vs. that same P31 via a direct adapter in the same slot. The only difference was in the random-4K-q1t1 test, where the switch might have introduced a ~1% slowdown (the deltas were hard to separate from test-sample variance).
[**] The P31 was connected to the switch card via an SFF-8643-to-SFF-8639 50 cm cable and a U.2-to-M.2 adapter case.
> Are you considering performance in terms of bandwidth, latency, or both? My expectation would be that there would be nearly zero bandwidth penalty, but there would definitely be a latency impact. [...]

Right - "how much latency", both in cases where lanes in equal lanes out and in cases where we are driving, say, 4 NVMe drives on x8 lanes from the host. That's the kind of metric I'd love to see, either directly or in terms of how it impacts the "worst case" (small random IO).
A random-4k-q1t1 test would likely be the most sensitive to latency in terms of overall performance, so if that's what you're measuring, that makes sense.
> Are you considering performance in terms of bandwidth, latency, or both?

Actually, as many aspects of "performance" as I can think of [and have the hardware/ability/insight to explore].
> My expectation would be that there would be nearly zero bandwidth penalty, but there would definitely be a latency impact. That latency penalty should be small, and will likely be fixed, so expressed as a percentage it would look better when measured on high latency drives, and worse with something like an Optane drive.

Completely agree. (In jest, it's a little "stinging" to hear my fave, the P31, referred to as high-latency, but...)
> A random-4k-q1t1 test would likely be the most sensitive to latency in terms of overall performance, so if that's what you're measuring, that makes sense.

Might we paraphrase, and say that the r4kq1t1 test is the best ("conventional") test for exposing/highlighting an SSD's latency?
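For anyone who wants to run that comparison themselves, a random-4K, queue-depth-1, single-job read test is easy to script around fio. A minimal sketch below, assuming fio is installed on a Linux host with libaio and that you run it as root; /dev/nvme0n1 is a placeholder for the drive under test, and the job only reads from it.

# Sketch: run a 4K random-read, QD1, single-job fio test and print IOPS and mean latency.
# Assumes fio (3.x) is installed; DEVICE is a placeholder and the test is read-only.
import json
import subprocess

DEVICE = "/dev/nvme0n1"   # placeholder - point this at the drive under test

cmd = [
    "fio", "--name=r4k-q1t1", f"--filename={DEVICE}",
    "--rw=randread", "--bs=4k", "--iodepth=1", "--numjobs=1",
    "--direct=1", "--time_based", "--runtime=30",
    "--ioengine=libaio", "--output-format=json",
]
result = json.loads(subprocess.run(cmd, capture_output=True, text=True, check=True).stdout)
read = result["jobs"][0]["read"]
print("IOPS:", round(read["iops"]))
print("mean completion latency (us):", read["clat_ns"]["mean"] / 1000)  # fio 3.x reports latency in ns

Running the same job once through the switch card and once via a direct adapter in the same slot gives the switched-vs-unswitched delta being discussed above.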
> Will have to get PLX switches for a couple of my X9s if I keep them; they all have x8 PCIe slots.

Most of the X9 line has gotten BIOS updates that support bifurcation. The UI isn't the best (on my X9SRL you basically have to read the value backwards from what you would expect), but it does work.
> I have recently ordered an LRNV9547-4I card directly from Linkrel in China. This card utilizes a PLX 8747 chip with 48 lanes.

What is the point of having a switch that supports so many lanes when there are only x16 possible through that model's PCIe electrical connector?
> Most of the X9 line has gotten BIOS updates that support bifurcation. [...]

I appreciate the mention. I believe you on the E5 models, but I don't own any. My boards are all E3s, and none of them have received BIOS updates for several years, except the X9SPU-F, which stopped working after 2020 unless I set the clock back to a prior year - I sent them a message and they made an "a" version to fix it (obviously the bare minimum).
> What is the point of having a switch that supports so many lanes when there are only x16 possible through that model's PCIe electrical connector?

Most (dare I say all) NVMe and SATA flash-based SSDs don't hit anywhere *close* to the fun "max transfer" speeds during the majority of their real-world operation. The difference between the 99th-percentile "worst case" and the big marketing number can easily be two "0"s, but you'll be seeing something between the two most of the time. The "why" gets deep into the ins and outs of flash itself, file systems, latency, the OS and the various software you are running (and more).
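As a rough illustration of that oversubscription argument, here is some back-of-the-envelope arithmetic. The per-lane throughput figure and the average-utilization number are assumptions chosen only to show the shape of the trade-off, not measurements of any particular card.

# Illustrative arithmetic: why an x16 uplink can reasonably feed 8 x4 drives behind a 48-lane switch.
GEN3_LANE_GBPS = 0.985          # approx. usable throughput per PCIe 3.0 lane, GB/s, after encoding overhead

uplink_lanes = 16               # host-facing link of the card
drives = 8                      # e.g. an 8 x U.2 switch card
lanes_per_drive = 4

uplink_gbps = uplink_lanes * GEN3_LANE_GBPS                  # ~15.8 GB/s available to the host
downstream_gbps = drives * lanes_per_drive * GEN3_LANE_GBPS  # ~31.5 GB/s if every drive ran flat out
print(f"uplink ~{uplink_gbps:.1f} GB/s, downstream peak ~{downstream_gbps:.1f} GB/s "
      f"-> {downstream_gbps / uplink_gbps:.1f}x oversubscribed")

# But, as noted above, real workloads rarely keep all drives at peak sequential speed at once.
assumed_avg_utilization = 0.25  # assumption: drives average ~25% of their peak in mixed real-world use
print(f"at ~{assumed_avg_utilization:.0%} average utilization the drives need "
      f"~{downstream_gbps * assumed_avg_utilization:.1f} GB/s, well within the x16 uplink")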