Recommend a motherboard with dual PCIe 5.0 x16 running at x16


safado

Member
Aug 21, 2020
I’m upgrading to 100Gb networking with a recent purchase, and after running SMF fiber to my office I want to add a 100Gb card (minimum requirement of 50Gb). The problem is that my Asus ROG Z690 Formula, while it has two PCIe 5.0 x16 slots, runs them at x8/x8, and its PCIe 4.0 x16 slot runs at x4.

Is there any platform out there that will meet the following requirements?

1) Dual PCIe 5.0 x16 slots running at x16/x16, or at least one PCIe 5.0 x16 and one PCIe 4.0 x16
2) PCIe 5.0 M.2
3) ATX or smaller form factor (I really don’t want a full ATX case); something around the size of a Fractal Design Define 7 Compact mid tower would be perfect.

Cost isn’t a concern, and ideally I wouldn’t have to move on from 13th-gen Intel, but I’m willing to in order to meet the need. Thanks for any advice.
 

MountainBofh

Active Member
Mar 9, 2024
My guess is that the only boards that will do that are the TRX50 series, and none of them are standard ATX; they're E-ATX instead.

You're going to have to compromise on either your case choice or the PCIe lanes you want.
 

mattventura

Active Member
Nov 9, 2022
You're looking at a HEDT or workstation platform. None of the consumer CPUs have enough lanes to directly run two x16 slots and an M.2 (yours has 20 CPU lanes). So you're most likely looking at a Threadripper (TRX50) or a Xeon-W (24xx or 34xx). Just be warned that your motherboard selection is a bit limited, as a lot of these boards are somewhere between ATX and E-ATX. You could also look at server CPUs, but then you're losing a lot of single-thread speed.

However, unless you're really concerned about stealing lanes from the GPU, PCIe 5.0 x8 is still more than enough for 100GbE. Even 4.0 x8 or 5.0 x4 is >100Gb/s.
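The rough math behind that, as a quick Python sketch: per-lane transfer rate for the PCIe generation, times the 128b/130b encoding efficiency, times the lane count. It ignores TLP/protocol overhead, so treat the numbers as upper bounds.

```python
# Approximate usable PCIe link bandwidth: per-lane rate (GT/s) x
# 128b/130b encoding efficiency x lane count. Protocol overhead is
# ignored, so real NIC throughput lands somewhat lower.

GT_PER_LANE = {3: 8.0, 4: 16.0, 5: 32.0}   # GT/s per lane by generation
ENCODING = 128 / 130                        # 128b/130b line encoding

def pcie_gbps(gen: int, lanes: int) -> float:
    return GT_PER_LANE[gen] * ENCODING * lanes

for gen, lanes in [(5, 8), (4, 8), (5, 4), (4, 4)]:
    print(f"PCIe {gen}.0 x{lanes}: ~{pcie_gbps(gen, lanes):.0f} Gbit/s")

# PCIe 5.0 x8: ~252 Gbit/s
# PCIe 4.0 x8: ~126 Gbit/s
# PCIe 5.0 x4: ~126 Gbit/s
# PCIe 4.0 x4: ~63 Gbit/s
```

So even a 4.0 x4 slot clears the 50Gb minimum requirement, and anything x8 covers 100GbE with room to spare.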
 

Tech Junky

Active Member
Oct 26, 2023
ASRock would be your best bet for not auto-splitting the bandwidth.

Getting 32 lanes out of Intel, though, is going to be the challenge.
 

nexox

Well-Known Member
May 3, 2023
I think I would just drop a PCIe 4.0 NIC into your x4 slot and deal with ~64Gbps while you wait for platforms and NICs to improve.
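If you go that route, it's worth confirming the card actually negotiated 4.0 x4 once it's installed. A minimal Linux-only sketch that reads the negotiated link speed and width from sysfs; the bus address below is a placeholder, so substitute your NIC's address (look it up with lspci):

```python
# Read a PCIe device's negotiated and maximum link speed/width from sysfs
# (Linux only). The device address is a placeholder for illustration.
from pathlib import Path

DEVICE = "0000:01:00.0"  # placeholder bus address -- use your NIC's

dev = Path("/sys/bus/pci/devices") / DEVICE
for attr in ("current_link_speed", "current_link_width",
             "max_link_speed", "max_link_width"):
    print(f"{attr}: {(dev / attr).read_text().strip()}")
```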
 

safado

Member
Aug 21, 2020
mattventura said:
You're looking at a HEDT or workstation platform. [...] However, unless you're really concerned about stealing lanes from the GPU, PCIe 5.0 x8 is still more than enough for 100GbE. Even 4.0 x8 or 5.0 x4 is >100Gb/s.
This is exactly the conclusion I was arriving at. I think I’ll just run my GPU at x8 and watch to see what comes on the near horizon. Appreciate the advice.
 

nabsltd

Well-Known Member
Jan 26, 2022
safado said:
I think I’ll just run my GPU at x8 and watch to see what comes on the near horizon.
Most uses of a GPU don't move enough data between the CPU and the GPU over the PCIe bus for that to be an issue.

A PCIe 4.0 x8 connection can transfer about 16 GB/s, which means you can fill or empty the GPU memory (of many cards) every 1.5 seconds. It wouldn't be easy to get that much data into main memory, or back out of it, in that amount of time.
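Spelled out as a quick sketch (assuming a 24 GB card, since the post above says "many cards"; protocol overhead ignored):

```python
# PCIe 4.0 x8 best-case throughput and the time to move a full 24 GB of
# VRAM across it. The 24 GB figure is an assumption for illustration.

link_gbit = 16.0 * (128 / 130) * 8      # PCIe 4.0 x8, ~126 Gbit/s
link_gbyte = link_gbit / 8              # ~15.75 GB/s
vram_gb = 24                            # assumed VRAM size

print(f"Link: ~{link_gbyte:.1f} GB/s")
print(f"Time to fill {vram_gb} GB: ~{vram_gb / link_gbyte:.1f} s")
# Link: ~15.8 GB/s
# Time to fill 24 GB: ~1.5 s
```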