Best practices for PCIe slot selection to optimize performance with dual/multi-proc motherboards?


pipe2null (New Member), May 25, 2021
I have been kicking this question around in my head for a while, but I don't know enough of the low-level details to answer it one way or the other, and my hardware on hand is limited, so my ability to experiment is limited too. So I thought I'd see what more knowledgeable people think:

The over-simplified question is:
What is the best strategy, in terms of overall system performance, for choosing which PCIe slots specific cards go in on dual/multi-proc motherboards, assuming all CPU sockets are populated?

The simple answer is "it depends on what you are using it for", but I don't know what info to keep in mind when making those judgement calls. I suppose there are a few larger considerations (that I am unfamiliar with), like which CPU is handling which ISRs, which cards the bulk of the data moves between, and whether transfers between two cards are faster when both cards hang off the same CPU or off different CPUs.
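For what it's worth, on Linux you don't have to guess which socket a slot is wired to: sysfs reports a NUMA node for each PCIe function. A minimal sketch of dumping that (assumes Linux sysfs, nothing board-specific):

```python
# Minimal sketch (Linux-only): list every PCIe function and the NUMA node
# (i.e. CPU socket) the kernel says it hangs off of. A value of -1 means
# the kernel has no locality info for that device.
import glob
import os

for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
    bdf = os.path.basename(dev)  # e.g. "0000:81:00.0"
    try:
        with open(os.path.join(dev, "numa_node")) as f:
            node = f.read().strip()
    except OSError:
        continue
    print(f"{bdf}  NUMA node {node}")
```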

For the sake of argument, I think "best practices" would apply to any multi-proc configuration, but for illustration purposes here is the grab bag of hardware I'm currently messing with. Assume each variation of the left-side riser is available, same with single/dual/quad M.2 NVMe adapter cards as needed (a quick way to verify the actual card-to-socket mapping is sketched after the list):
Motherboard: Supermicro X10DRU-i+, with SAS-compatible backplane, full support for slot bifurcation
CPU1:
- Hardwired peripherals: (4x) 10GbE, iSATA, sSATA, etc...
- PCIe x8 (internal)
- PCIe x16
CPU2:
- PCIe x8 (low profile)
- Depends on riser: either 2x PCIe x16, 1x PCIe x16 + 2x PCIe x8, or 4x PCIe x8

PCIe cards and misc:
- x16 - GPU(s)
- x8 - SAS HBA (can use the internal slot; only installed if necessary)
- x8 - ConnectX-3, dual-port QSFP (could potentially use the internal slot)
- multiple x4 - various M.2 NVMe drives; assume at least one with good performance and at least one cheapie, or add more if it makes sense (note: a $6 single M.2 adapter works very nicely in the internal x8 slot without a bracket)
- various SATA and/or SAS spinning HDDs and SSDs
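As promised above, here's a sketch for confirming where each card actually landed once installed. The device names (eth0, nvme0) are hypothetical placeholders; substitute whatever your system enumerates:

```python
# Sketch: map a named NIC or NVMe controller back to the NUMA node of its
# PCIe parent. "eth0" / "nvme0" are made-up names; substitute your own.
import os

def numa_node_of(class_dev):
    """Return the NUMA node of the PCIe function behind a /sys/class device."""
    with open(os.path.join(class_dev, "device", "numa_node")) as f:
        return int(f.read().strip())

print("eth0  ->", numa_node_of("/sys/class/net/eth0"))    # e.g. a ConnectX-3 port
print("nvme0 ->", numa_node_of("/sys/class/nvme/nvme0"))  # e.g. an M.2 drive
```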

So, for "best practices", is it better or worse to:
- Install connectx3 card on same cpu as M.2 NVMe's and SAS HBA?
- Install GPU on a different CPU than the bulk of storage cards/controllers?
- Install all network related periferals on one cpu and all storage related on the other?
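One concrete thing that falls out of those questions: however the cards get split up, the process doing the I/O generally wants to run on the card's local socket, or every buffer takes a trip across the QPI link. A sketch of pinning a process to a device's local CPUs (Linux-only; the BDF below is a hypothetical placeholder, take the real one from lspci):

```python
# Sketch: pin the current process to the CPUs local to one PCIe device,
# so the threads touching its DMA buffers stay on the same socket.
import os

def local_cpus(pci_dev):
    """Parse local_cpulist (e.g. "0-7,16-23") into a set of CPU ids."""
    with open(os.path.join(pci_dev, "local_cpulist")) as f:
        cpus = set()
        for part in f.read().strip().split(","):
            lo, _, hi = part.partition("-")
            cpus.update(range(int(lo), int(hi or lo) + 1))
        return cpus

# "0000:81:00.0" is a hypothetical BDF; use the real one from lspci.
os.sched_setaffinity(0, local_cpus("/sys/bus/pci/devices/0000:81:00.0"))
```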

I could go on and on, but I think that's enough illustration for the question. Thoughts?