I have some older servers that I've added NVMe PCIe cards to. The systems are X9DR3(i)-LN4F+ boards updated to the latest 3.4 BIOS/firmware. Right now I have one PCIe 3.0 x8 card with two 1TB NVMe drives running at full speed (benched both at the same time and got about 3.5 GB/s from each simultaneously, roughly 7 GB/s total).

I want to use these servers in an S2D cluster, so I need to add another identical card. The catch is that the other available PCIe x8 slot is wired to CPU2, whereas the first card sits in a slot wired to CPU1. Is this going to create a massive amount of traffic on the QPI bus during heavy reads/writes? Should I move other things around so both cards end up on CPU1's PCIe lanes?
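
As a rough sanity check on the numbers, here's the back-of-the-envelope comparison I keep coming back to (Python just for the arithmetic; the QPI figures are assumptions based on the 8.0 GT/s spec for these E5-2600 parts rather than anything I've measured, and the actual link rate depends on the CPU SKU):

```python
# Back-of-the-envelope check: does one NVMe card's worth of traffic
# saturate a QPI link if the card hangs off the "remote" CPU?
# Assumptions (not measured): QPI at 8.0 GT/s with a 2-byte data path
# per direction, i.e. ~16 GB/s per direction per link; lower-bin
# E5-2600s run 7.2 or 6.4 GT/s. Drive numbers are from my bench run.

per_drive_gbs = 3.5          # GB/s sustained per NVMe drive (measured)
drives_on_remote_card = 2    # both drives on the CPU2-attached card

qpi_gts = 8.0                # giga-transfers/s (assumed, SKU-dependent)
qpi_bytes_per_transfer = 2   # 16-bit data width per direction
qpi_gbs_per_direction = qpi_gts * qpi_bytes_per_transfer  # ~16 GB/s

remote_traffic_gbs = per_drive_gbs * drives_on_remote_card  # ~7 GB/s

print(f"Worst-case cross-socket traffic:  {remote_traffic_gbs:.1f} GB/s")
print(f"QPI per-direction bandwidth:      {qpi_gbs_per_direction:.1f} GB/s")
print(f"Link utilisation (one direction): {remote_traffic_gbs / qpi_gbs_per_direction:.0%}")
```

If those assumptions hold, one card's worth of sequential traffic would eat roughly half of a single QPI link's per-direction bandwidth, but I don't know how that plays out in practice once memory traffic, the NICs, and S2D itself are also crossing the link, hence the question.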