I am looking at a mid-tier AM5 board (something like the Asus Prime X670-P) and am trying to understand whether 24 PCIe lanes will be enough. I am really tired of the lane-juggling struggle on X99 with my i7-5820K and would like to avoid it in the future, so I'm hoping someone can confirm my understanding or explain how this actually works.
To give some further context: I don't care about PCIe 5.0, 4K gaming, CrossFire, copper-based LAN, Wi-Fi, sound or USB 3.2+. My work revolves mostly around (software) engineering with casual gaming sprinkled here and there (stuff like Deus Ex MD, X-Com, Stellaris or Elite: Dangerous). I'd like to retain my existing 1080 Ti as it serves me well. Ideally, I'm looking for a board that can host a modern graphics card, a pair of NVMe SSDs and a 10/25 G fibre NIC, and still have some PCIe expansion capacity left without having to turn off anything that's already connected over PCIe.
From what I understand, no consumer GPU actually uses (or cares for) anything more than PCIe 3.0 x8 - that is, no consumer GPU genuinely needs more bandwidth than that. Electrically, these cards are wired as PCIe 3.0 x16, but halving the lane count impacts FPS only minimally.
State-of-the-art consumer SSDs are mostly PCIe 4.0 x4, and the rest of the expansion componentry should be more than fine with anything equivalent to PCIe 3.0 x4/x8.
Given that, with an AMD board like the Prime X670-P, I should be able to:
- populate one PCIe 4.0 x16 slot with the GPU
- populate a second PCIe 4.0 x16 slot with a 10G+ fibre-optic NIC
- still have one extra slot fully functional for, say, an expansion card with 4x M.2 NVMe SSD slots
- still have at least two on-board M.2 NVMe slots functional even with everything populated
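To see why I'm worried, here's a back-of-the-envelope lane budget. This is only a sketch under my assumptions: 24 usable CPU lanes on AM5, and illustrative lane widths per device (the actual widths and which slots hang off the chipset depend on the specific board manual):

```python
# Rough lane-budget sketch. Assumptions: 24 usable CPU PCIe lanes on AM5,
# and illustrative (not board-verified) lane widths per device.
CPU_LANES = 24

devices = {
    "GPU (x16 slot)": 16,
    "10G NIC (hypothetical x8 card)": 8,
    "M.2 NVMe SSD #1": 4,
    "M.2 NVMe SSD #2": 4,
}

# Sum every device's requested lane width and compare against the budget.
total = sum(devices.values())
print(f"Requested: {total} lanes vs {CPU_LANES} available")
print("Over budget" if total > CPU_LANES else "Fits")
```

Under these assumptions the naive sum (32 lanes) already exceeds 24, which is exactly why I'm asking how lane allocation really works.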
Questions:
- are my assumptions above true or false?
- do I need to free up lanes by downgrading, e.g., the GPU slot to PCIe 4.0 x8 in order to have everything functional at the same time?
- can you suggest a better motherboard than the one I had considered?
- if a given adapter requires less speed (e.g. a single-port 10G NIC that only needs PCIe 2.0 x8), I assume it will occupy that many lanes and no more? To extend the example: PCIe 2.0 x8 == PCIe 3.0 x4 == PCIe 4.0 x2 in bandwidth terms, so even though the NIC is plugged into a PCIe 4.0 x16 port, I would expect it to use only two lanes, leaving 14 of that port's lanes available. Is lane occupancy determined like that, or is there more to it that I don't know?
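The bandwidth arithmetic behind that equivalence can be sketched as follows. The per-lane figures are approximations (each PCIe generation roughly doubles the previous one; encoding and protocol overhead are ignored), and whether unused lanes really become "available" to other devices is precisely the open question above:

```python
# Approximate one-direction per-lane throughput in GB/s, per PCIe generation.
# Each generation roughly doubles the previous; overhead is ignored.
PER_LANE_GBPS = {1: 0.25, 2: 0.5, 3: 1.0, 4: 2.0}

def bandwidth(gen: int, lanes: int) -> float:
    """Approximate aggregate one-direction bandwidth of a link, in GB/s."""
    return PER_LANE_GBPS[gen] * lanes

# The equivalence from the question: 2.0 x8 == 3.0 x4 == 4.0 x2
print(bandwidth(2, 8), bandwidth(3, 4), bandwidth(4, 2))  # → 4.0 4.0 4.0
```

So in raw throughput terms the three configurations are indeed interchangeable; what I can't tell is whether the slot negotiates down to fewer lanes or just to a lower generation.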
NB: there is a motherboard that does all of this, namely the Asus WRX80 Sage, but if at all possible I'd like to avoid jumping to the expensive Threadripper PRO platform, even though I only change workstations roughly once every 7 or 8 years on average.
Any thoughts/suggestions are more than welcome. Appreciate your time, and TIA.