PCIe switches, "Multicast" & drive mirroring.


rumpelstilz

New Member
May 27, 2022
I haven't dealt with dedicated PCIe switches before, and I'm trying to make sense of a feature that is advertised with almost all of them: "Multicast"


[Image: Multicast feature overview, from the PEX 8749 datasheet]


As an example, let's say I have two PCIe x4 NVMe drives in a software Raid1/mirror setup, so the drives would normally take up x8 PCIe lanes to the CPU.

If these mirrored drives were instead connected through a PCIe switch with an x4 upstream port to the host, with both drives attached as x4 endpoints, would they suffer any performance penalty compared to a native PCIe x8 connection? (Ignoring the added latency of the switch itself.)
Since drive mirroring is a redundant operation, does Multicast effectively cut the "real" PCIe lane & bandwidth requirement in half?
 

UhClem

just another Bozo on the bus
Jun 26, 2012
NH, USA
rumpelstilz said:
I haven't dealt with dedicated PCIe switches before, and I'm trying to make sense of a feature that is advertised with almost all of them: "Multicast"
...
Since (two-)drive mirroring is a redundant operation, does Multicast effectively cut the "real" PCIe lane & bandwidth requirement (in half)?
Yes, it would seem so. A better example might be the PEX8712, a 3-port, 12-lane switch. Configured for two NVMe drives, it would have only x4 lanes upstream and two x4 links downstream; with multicast implemented, writing to a RAID1 mirror would be roughly twice as fast (matching what a RAID0 write would get through the same switch), and it would cut memory-subsystem (DMA) overhead in half versus the same setup without multicast.

The real "devil" is in the details required for actually realizing/implementing the multicast functionality, in practice (either in an OS kernel, or HBA firmware).
 