On the heels of watching Patrick's excellent video describing the architectural benefits of the newest AMD EPYC dual-CPU platforms, I started wondering whether, on dual-CPU Intel systems (mainly Supermicro X9 and X10 LGA 2011 boards, E5 v1/v2/v3/v4), any thought should be put into which cards go into which CPU's PCIe slots.
I couldn't find any existing questions or answers on this, so I wanted to ask here.
(There is, however, this excellent article for background that discusses using one or two CPUs on Intel-based motherboards:
https://www.servethehome.com/answered-cpu-dual-processor-motherboard/)
Assume an X9 or X10 board with two CPUs installed, two HBAs (LSI 2308 in IT mode, not RAID/IR), and one dual-port 10GbE Intel X540 NIC. Does it matter at all whether I put both HBAs on CPU1 or CPU2 slots, and the 10GbE card on CPU1 or CPU2 slots? (i.e., should I be aiming to put all the cards on CPU1 slots, or should I split them between CPU1 and CPU2 PCIe slots?)
Or should I put all three cards on CPU2 slots, since CPU1 is already using some of its lanes for the onboard SATA/SAS ports and the four GbE ports?
(Or does any of this matter?) Patrick's AMD EPYC video is what made me think about it: the gist seemed to be that the newer AMD EPYC dual-CPU platforms have better/more efficient communication paths for CPU1-to-CPU2 and CPU-to-RAM/PCIe traffic. That makes me suspect the Intel inter-CPU link (QPI, I believe it's called) isn't as robust, and therefore some thought should go into PCIe slot population.
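FWIW, on Linux you can at least verify which CPU (NUMA node) each card actually ends up attached to by reading sysfs. A minimal Python sketch, assuming a Linux system that exposes /sys/bus/pci (a value of -1 just means the kernel couldn't map the device to a node, e.g. NUMA disabled in the BIOS):

```python
#!/usr/bin/env python3
# List each PCIe device with the NUMA node (CPU socket) it hangs off.
# Reads /sys/bus/pci/devices/*/numa_node on Linux.
import glob
import os

for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
    addr = os.path.basename(dev)  # PCI address, e.g. 0000:81:00.0
    try:
        with open(os.path.join(dev, "numa_node")) as f:
            node = f.read().strip()
    except OSError:
        continue  # device without a numa_node entry
    print(f"{addr}  numa_node={node}")
```

(`lspci -vv` and `numactl --hardware` show the same mapping; this is just a quick way to dump it for every slot at once.)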
Any info/thoughts would be appreciated. Thanks!