You're a lucky one to get such speeds out of the ANM24PE16. I just tested the Axagon PCEM2-ND ( https://forums.servethehome.com/ind...furcation.31172/page-3#lg=post-353379&slide=0 ), a low-cost card with an ASMedia chip (which apparently runs at PCIe Gen2 x4 in reality), and my results on my old Supermicro boards (X9SRH-7F and the ancient C2SBA+ II) were a bit of a disappointment. Speeds with a Samsung 970 Evo Plus NVMe SSD were only about 2.2-2.5x better than with my Samsung 860 Pro SSD connected directly to onboard SATA (SATA 3 and SATA 2, respectively). I don't know whether that could be due to the lack of a dynamic buffer pool. I also tested it while copying roughly 70+ GB of data back and forth, still without any significant speed change (it stayed just as bad), which I would have expected to show some effect from the (missing) buffer.

Thanks for the wealth of interesting and useful information posted here! After reading through the thread, I purchased the least expensive PCIe x16 -> 4x M.2 NVMe card with a PLX switch I could find on AliExpress:
Ceacent ANM24PE16
I am using it to upgrade a 2010 Mac Pro 5,1. The NVMe SSD was previously attached by way of a single passive PCIe-to-M.2 adapter, which peaked at around 1.7 GB/sec (sequential).
When I removed the drive from the old adapter and put it into the ANM24PE16, I was quite surprised to see that the transfer rate of the same SSD increased to 2 GB/sec, which is about the maximum you would expect to get out of four PCIe 2.0 lanes. I am wondering how this can be. Most people here are concerned that the switch chip would introduce additional latency, making the drive slower, but I am actually observing the opposite. Why does the same drive become faster when the data goes through the PLX-equipped adapter card rather than the directly connected passive adapter?
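For reference, here is my napkin math on the ceilings involved (a minimal sketch; one guess, which I cannot confirm, is that the switch trains the drive's own link at PCIe 3.0 x4 while the host side stays at the slot's PCIe 2.0, so the drive is no longer pinned to Gen2 signaling on its four lanes):

```python
# Rough PCIe throughput ceilings per generation; real-world numbers are a
# bit lower still due to packet/protocol overhead.
GT_PER_LANE = {2: 5.0, 3: 8.0}            # GT/s per lane
ENCODING    = {2: 8 / 10, 3: 128 / 130}   # 8b/10b vs. 128b/130b line coding

def lane_gb_s(gen: int) -> float:
    """Usable GB/s per lane after line-coding overhead (1 GT/s ~= 1 Gbit/s)."""
    return GT_PER_LANE[gen] * ENCODING[gen] / 8  # bits -> bytes

for gen, lanes in [(2, 4), (2, 16), (3, 4)]:
    print(f"PCIe {gen}.0 x{lanes}: ~{lane_gb_s(gen) * lanes:.1f} GB/s")
```

Under that guess, the switch path is capped at ~3.9 GB/s by the drive's Gen3 x4 link rather than ~2.0 GB/s by Gen2 x4 signaling, and the cheap drive itself becomes the limit; the passive adapter instead pins the link at Gen2 x4.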
I measured it multiple times, and I know for sure that it cannot be a drive cache issue since the drive is an ultra-cheap cacheless model based on the SM2263XT chip ("Walram W2000").
At any rate, I think it is a good card. The tiny fan is extremely noisy, but since the heatsink is massive, I will simply disconnect it as there is enough airflow from the Mac Pro's slot fan. I will do more testing, especially simultaneous transfers with more drives attached to the card.
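For the simultaneous-transfer testing, my plan is something like this minimal Python sketch (the file paths are placeholders for wherever the drives mount; the files need to be much larger than RAM, or the page cache will inflate the numbers):

```python
import threading, time

# Placeholder paths: one large test file per NVMe drive on the card.
PATHS = ["/Volumes/nvme1/testfile", "/Volumes/nvme2/testfile"]
CHUNK = 8 * 1024 * 1024  # read in 8 MiB chunks

def drain(path: str, totals: dict) -> None:
    """Sequentially read the whole file, recording how many bytes came back."""
    n = 0
    with open(path, "rb", buffering=0) as f:
        while chunk := f.read(CHUNK):
            n += len(chunk)
    totals[path] = n

totals = {}
threads = [threading.Thread(target=drain, args=(p, totals)) for p in PATHS]
start = time.perf_counter()
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start
print(f"aggregate: {sum(totals.values()) / elapsed / 1e9:.2f} GB/s")
```

If the aggregate scales with drive count toward the slot's Gen2 x16 ceiling, the switch is doing its job; if it flatlines early, the uplink (or the card) is the bottleneck.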
Thanks for the swift reply. I'm still new to this niche technology and I'm trying to learn, so I appreciate you humoring my question.

It's a switch with lane aggregation... so each active device gets 1/<active devices> of the bandwidth; it is not an electrical separation of lanes. And the total bandwidth is however many lanes are available on the host side of the switch.
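To put numbers on that 1/<active devices> rule (a quick sketch, assuming the setup above: a PCIe 2.0 x16 host link behind the switch and a PCIe 3.0 x4 link to each of four drives):

```python
# Per-drive share of the switch uplink vs. each drive's own link ceiling.
UPLINK = 16 * 0.5        # PCIe 2.0 x16: ~0.5 GB/s per lane -> ~8 GB/s
DRIVE_LINK = 4 * 0.985   # PCIe 3.0 x4:  ~0.985 GB/s per lane -> ~3.9 GB/s

for active in range(1, 5):
    each = min(UPLINK / active, DRIVE_LINK)
    print(f"{active} active drive(s): ~{each:.2f} GB/s each")
```

So a single busy drive gets its full downstream link, and the uplink only starts to pinch once three or more drives are hammering it at the same time.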
Well, most of those cards are PCIe 3.0, while PCIe 5.0 is starting to hit the mass market. There is simply not enough demand (and switch chips for PCIe 4.0 and 5.0 are much more demanding due to the higher bandwidth, and thus more expensive). Today you can get capacities up to 30TB in a single U.3 drive, so not many people want to stripe/RAID their NVMe drives for bandwidth and/or capacity.
For a sequential write a few minutes long, probably. Otherwise, spinners give similar performance, except for IO.

Makes sense, but for a home user like me, I suspect 'low quality' NVMe drives like the 4TB P3 will still be the best performance-per-dollar option to cram into a NAS haha.
For a "$ per TB" it can still make sense, especially if you are doing some level of parity, to do more drives around 4TB than fewer around 15/16TB. Not to mention getting used enterprise SSDs where the price delta per TB is quite largeWell, most of those cards are PCIE 3.0. Meanwhile 5.0 is starting to hit mass market. There is simply not enough demand (and PCIE switch chips for PCIE 4.0 and 5.0 are much more demanding due to the higher bandwidth , thus more expensive). Today you can go with capacities up to 30TB with a single U.3, so not enough people want to stripe / „raid“ their NVMEs for bandwidth and/or capacity.
Well, if you do 2 mirror pools and you don't use them at the same time, then you're still closer to the full 2x 4x.

Does this mean each disk will work at full speed? There are also cards for x8 PCIe slots that have 4 disk slots. To my understanding, those cards will not run the disks at full speed, because with 4 disk slots on an x8 PCIe slot each disk needs four PCIe lanes.
Is my reasoning correct?
Thanks
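Roughly, yes and no: behind a switch each disk still gets its own full x4 link, and only the combined throughput is capped by the card's x8 uplink. A quick sketch of the oversubscription (assuming PCIe 3.0 on both sides of a hypothetical x8 card with four x4 M.2 slots):

```python
# 2:1 oversubscription: four x4 slots (16 lanes) behind an x8 uplink.
LANE = 0.985                    # ~GB/s per PCIe 3.0 lane
uplink, per_drive = 8 * LANE, 4 * LANE

for active in range(1, 5):
    each = min(per_drive, uplink / active)
    print(f"{active} drive(s) busy: ~{each:.2f} GB/s each, ~{each * active:.2f} GB/s total")
```

So one or two busy disks run at full speed; only with three or four active at once does each one's share drop below its own x4 ceiling.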
For the x16 card: it is also not clear how they connect 32 drives with only 2 ports.