How 'switchy' can PCIe switches be?


abufrejoval

Member
Sep 1, 2022
I believe all AQC10x chips have a PCIe 3.0 IP block and, just like their AQC113 brethren (PCIe 4.0 IP block), they offer flexible PCIe version and lane combinations as required to support the Ethernet data rate. The 5GBase-T variant is offered with a PCIe 3.0 x1 interface; I haven't seen a two-lane SKU.
Thanks for quoting me, but did you actually want to add something?
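
As a quick sanity check of those version/lane combinations, here is a rough sketch; the per-lane figures are approximate usable link bandwidth after line encoding and the Ethernet figures are raw wire speed, so treat the output as ballpark only.

```python
# Ballpark check: smallest PCIe gen/lane combo that covers each NBase-T rate.
# Per-lane figures are approximate GB/s after line encoding (128b/130b for
# Gen 3+), ignoring further protocol overhead.
PCIE_GB_PER_LANE = {3: 0.985, 4: 1.969}

# Raw Ethernet wire speeds in GB/s (1 Gbit/s = 0.125 GB/s).
ETHERNET_GB = {"2.5GBase-T": 0.3125, "5GBase-T": 0.625, "10GBase-T": 1.25}

for nic, need in ETHERNET_GB.items():
    for gen, per_lane in PCIE_GB_PER_LANE.items():
        for lanes in (1, 2, 4):
            if per_lane * lanes >= need:
                print(f"{nic}: PCIe {gen}.0 x{lanes} suffices "
                      f"({per_lane * lanes:.2f} GB/s vs {need:.2f} GB/s needed)")
                break  # smallest sufficient width for this generation
```

Which is why a 5GBase-T NIC gets by fine on a single Gen 3 lane, while 10GBase-T needs either x2 at Gen 3 or x1 at Gen 4.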
 

kpfleming

Active Member
Dec 28, 2021
Pelham NY USA
So here's a board that fits with the OP's concept: 8x M.2 NVMe sockets behind a single PCIe x16 slot. When running at PCIe Gen 3 it can't provide full bandwidth to all of the drives, but at PCIe Gen 4 it can.
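
A quick back-of-the-envelope check of that claim, assuming eight drives that each run at roughly PCIe 3.0 x4 and ignoring protocol overhead:

```python
# Eight M.2 drives (assumed ~Gen 3 x4 each) behind a single x16 uplink.
GB_PER_LANE = {3: 0.985, 4: 1.969}   # approximate GB/s per lane

drive_demand = 8 * 4 * GB_PER_LANE[3]    # ~31.5 GB/s if all drives run flat out
for gen in (3, 4):
    uplink = 16 * GB_PER_LANE[gen]
    ratio = drive_demand / uplink
    print(f"Gen {gen} x16 uplink: {uplink:.1f} GB/s for {drive_demand:.1f} GB/s "
          f"of drives -> {ratio:.1f}:1 oversubscription")
# Gen 3 uplink: ~15.8 GB/s -> 2:1 oversubscribed; Gen 4 uplink: ~31.5 GB/s -> ~1:1
```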

 

abufrejoval

Member
Sep 1, 2022
So here's a board that fits with the OP's concept: 8x M.2 NVMe sockets behind a single PCIe x16 slot. When running at PCIe Gen 3 it can't provide full bandwidth to all of the drives, but at PCIe Gen 4 it can.

Yes, perhaps.

But then I'll never know, because while OWC wastes my brain cycles on lofty business tautologies like "Government agencies, companies, and institutions need the ability to reliably access and quickly analyze enormous data files to make decisions", it takes great care to hide what you're actually getting.

This Apple-speak makes me want to lash out and share the hurt it's causing me, having to filter through tons of crap only to find that the information I need is missing.

Sorry for that...

As far as I can tell (after looking at the pictures in the manual), it seems to be a design somewhat similar to the Highpoint Tech PCIe switch board.

And, somewhat unusually for a Mac product, it currently goes for about half of Highpoint Tech's list price.

For €600 it delivers 2:1 oversubscription that a €50 bifurcation board simply can't, which can be worthwhile in certain use cases.

But it would be more interesting if it were to deliver 8x NVMe M.2 to 8 lanes of PCIe 5.0 or even to a single PCIe 5.0 M.2 x4 connector at 4:1 oversubscription.

For x8 that gives you about 30 GB/s of bandwidth, and beyond that DRAM becomes the bottleneck if you want the CPU to do even minimal processing on the data the SSDs transfer. It would also let you use relatively economical PCIe 3.0 NVMe drives and fully aggregate their bandwidth: there is no need for speed on individual M.2 drives if the switch allows you to aggregate it!
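
Rough numbers behind that, using the same approximate per-lane figures as above:

```python
# Aggregating cheap Gen 3 x4 drives behind a hypothetical PCIe 5.0 x8 uplink.
GB_PER_LANE = {3: 0.985, 5: 3.938}   # approximate GB/s per lane

uplink = 8 * GB_PER_LANE[5]          # PCIe 5.0 x8: ~31.5 GB/s
drives = 8 * 4 * GB_PER_LANE[3]      # eight Gen 3 x4 M.2 drives combined: ~31.5 GB/s
print(f"PCIe 5.0 x8 uplink: {uplink:.1f} GB/s")
print(f"8x Gen 3 x4 drives: {drives:.1f} GB/s combined")
# The two sides match almost exactly, so budget Gen 3 SSDs can saturate the
# uplink in aggregate; no need for fast individual drives.
```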

16 lanes of PCIe 5.0 is nearly 64 GByte/s of bandwidth, currently too much to waste on any single device in a workstation. Even an RTX 4090 (PCIe 4.0 x16) currently tops out at half that bandwidth.

I'd consider a balanced solution giving 50% of that to a GPU and sharing/switching the other 50% among storage, network, or any other I/O.
And Ryzen 7000 is, at its base, perfectly equipped to handle that, with basically seven PCIe 5.0 x4 bundles to go around.
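
For illustration, one way those 28 CPU lanes could be carved up under that 50/50 idea; this split is hypothetical, not how any current board is wired:

```python
# Hypothetical carve-up of the ~28 CPU lanes (7 x4 bundles) on AM5/Raphael
# following the 50/50 idea above; not an actual board layout.
GB_PER_LANE = {4: 1.969, 5: 3.938}   # approximate GB/s per lane

allocation = [
    # (use, lanes, PCIe gen)
    ("GPU", 8, 5),                     # x8 Gen 5 ~ a Gen 4 x16 card's bandwidth
    ("switched storage/NIC", 8, 5),    # the other half of the x16 graphics lanes
    ("direct M.2 / USB4", 8, 5),       # the two remaining x4 bundles
    ("chipset downlink", 4, 4),        # runs at Gen 4 in practice
]

assert sum(lanes for _, lanes, _ in allocation) == 28   # 7 bundles of 4 lanes
for use, lanes, gen in allocation:
    print(f"{use}: x{lanes} Gen {gen}, ~{lanes * GB_PER_LANE[gen]:.1f} GB/s")
```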

The problem is that GPUs, 100Gbit NICs and NVMe JBODs all want a fixed 16 lanes each and won't trade lane count for speed via higher PCIe revisions.
And if you want to support all of that without any speed limitations, you'd need at least a 48-lane PCIe 5.0 switch chip that costs more than the entire CPU (if you can buy one at all) just to talk to the IOD...
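
The lane count behind that estimate, just to spell it out:

```python
# Three devices that each insist on x16, plus an uplink to the CPU's IOD.
devices = {"GPU": 16, "100Gbit NIC": 16, "NVMe JBOD": 16}

downstream = sum(devices.values())   # 48 lanes facing the devices
uplink = 16                          # assume an x16 link back to the IOD
print(f"{downstream} downstream + {uplink} upstream = {downstream + uplink} lanes total")
```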

What a mess!
 