I haven't dealt with dedicated PCIe switches before, and I'm trying to make sense of a feature that is advertised with almost all of them: "Multicast".
(from PEX 8749 datasheet)
As an example, let's say I have two PCIe x4 NVMe drives in a software Raid1/mirror setup, so the drives would normally take up x8 PCIe lanes to the CPU.
If these mirrored drives were instead connected through a PCIe switch with an x4 upstream port to the host, with both drives attached as x4 endpoints, would they suffer any performance penalty compared to a native x8 connection? (Ignoring the latency penalty of going through a PCIe switch.)
Since mirroring writes the same data to both drives, does Multicast effectively cut the "real" PCIe lane and bandwidth requirement for writes in half?