Can I escape ThreadRipper PRO with AM5?


Pakna

Member
May 7, 2019
50
3
8
Great discussion, excellent info here.

This board more or less maxes out what AM5 can do: the x16 from the CPU is bifurcated into x8 + x8, four of the remaining eight CPU lanes are routed to a third slot, and the other four go to a pretty useless PCIe 5.0 M.2 slot.
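For anyone keeping score, the lane budget works out roughly like this - a back-of-the-envelope tally (the 28-lane total and the 4-lane chipset link are the commonly published Raphael figures; the per-slot split is just this board's layout as described above, so treat it as an assumption):

```python
# Rough AM5 (Raphael) CPU PCIe 5.0 lane budget -- a sketch, not a spec sheet.
cpu_lanes_total = 28                         # commonly published total for Raphael
chipset_uplink = 4                           # reserved for the x4 link to the chipset
usable = cpu_lanes_total - chipset_uplink    # 24 lanes exposed to slots / M.2

# This board's layout as described above (assumed, check the block diagram):
allocation = {
    "slot 1 + slot 2 (x16, bifurcated to x8 + x8)": 16,
    "slot 3 (x4)": 4,
    "CPU-attached PCIe 5.0 M.2": 4,
}
assert sum(allocation.values()) == usable
for use, lanes in allocation.items():
    print(f"{lanes:>2} lanes -> {use}")
```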
This board would've been great, had it cost half as much as it does. The more I look at the existing AM5 offerings, the more I'm convinced how unsuitable and cost-ineffective the platform is for a lot of people who aren't gamers and don't really require ultra-workstations that would've been rendering farms perhaps a decade ago. We're driven into paying for a ton of stuff we don't really need - USB 3.2/4 and its ilk, Thunderbolt, copper Ethernet, WiFi, sound, useless M.2 drive slots (as you point out), not to mention the absolutely insane number of VRM phases (the MSI MEG has 22 VRM phases. Twenty-two!!). On the flip side, we don't get the stuff we do need - such as more PCIe lanes. I'd have looked at server boards, but given the relatively low airflow in desktop cases, I'm not sure I'd get a 5-7 year trouble-free lifespan out of them.

Speaking of VRMs, other boards don't seem to be a whole lot better - the Asus Strix X670E-E has 18 of them, the Asus Prime has 14. B650E offerings have 12, and even that should handle a decent overclock of a 7950X, should one be inclined to try. What fraction of a percent of users require 18 VRM phases? For what kind of overclocking? Insanity.

Truth be told, I was settling into the idea of swallowing all of the above and going with the Asus Strix X670E-E, but I just don't see the value in buying such a severely maimed and imbalanced platform. Intel is even worse, with how dated its offerings with >24 lanes are. Looks like I'll be scouring eBay for something like an E5-2699 v3, shoving it into my Sabertooth X99, and calling it another year - perhaps next year AMD's Storm Peak might finally nudge me over the edge, if yesterday's rumour is anything to go by.
 

odditory

Moderator
Dec 23, 2010
384
68
28
Great discussion, excellent info here.



This board would've been great, had it cost half as much as it does. The more I look at the existing AM5 offerings, the more I'm convinced how unsuitable and cost-ineffective the platform is for a lot of people who aren't gamers and don't really require ultra-workstations that would've been rendering farms perhaps a decade ago. We're driven into paying for a ton of stuff we don't really need - USB 3.2/4 and its ilk, Thunderbolt, copper Ethernet, WiFi, sound, useless M.2 drive slots (as you point out), not to mention the absolutely insane number of VRM phases (the MSI MEG has 22 VRM phases. Twenty-two!!). On the flip side, we don't get the stuff we do need - such as more PCIe lanes. I'd have looked at server boards, but given the relatively low airflow in desktop cases, I'm not sure I'd get a 5-7 year trouble-free lifespan out of them.

Speaking of VRMs, other boards don't seem to be a whole lot better - the Asus Strix X670E-E has 18 of them, the Asus Prime has 14. B650E offerings have 12, and even that should handle a decent overclock of a 7950X, should one be inclined to try. What fraction of a percent of users require 18 VRM phases? For what kind of overclocking? Insanity.

Truth be told, I was settling into the idea of swallowing all of the above and going with the Asus Strix X670E-E, but I just don't see the value in buying such a severely maimed and imbalanced platform. Intel is even worse, with how dated its offerings with >24 lanes are. Looks like I'll be scouring eBay for something like an E5-2699 v3, shoving it into my Sabertooth X99, and calling it another year - perhaps next year AMD's Storm Peak might finally nudge me over the edge, if yesterday's rumour is anything to go by.
Yep, join the club. The lack of modern offerings in the HEDT/WS segment has been a frustration for many of us for years now. PCIe lane counts on desktop-class AM5/Z790 are still absolutely anemic, and then there's a massive void before you're suddenly at 128 lanes with a TR Pro/WRX80. And so the goldilocks zone of roughly 64 lanes, combined with high-frequency, high-IPC cores so you can have your compute and storage all in one box, continues to be elusive. And the idea of paying retail launch prices for a TR Pro 5000 CPU + WRX80 motherboard now, at the end of its cycle and with TR Pro 7000 right around the corner, seems about as exciting as putting on a pair of dirty underwear.

And so, like many of us, you have a choice: either split your compute and storage across separate machines, or just keep waiting.
 

odditory

Moderator
Dec 23, 2010
384
68
28
Just to add some info that isn't written out explicitly: PCIe lanes don't work like memory allocation, where you have a bunch of lanes and can split them up however you like. They are hard-wired, and the only thing you can do is 'disable' lanes on a slot, not 'move' or 'add' them.

This means that the electrical connections are your upper limit, and any shared (i.e. doubly connected) lanes between physical slots go via a chip that can cut connections. Firmware then defines which lanes are cut from which slot, and it usually offers very few profiles: 16 lanes across 2 slots will maybe have (as written earlier) x0/x16, x8/x8, and maybe x16/x0 if the physical slot locations give a reason for it (e.g. cooler overhang 'above' or 'below' a slot). This sometimes also means that one slot is reversed (with respect to lane IDs), depending on the lane-configuration chip used (if it isn't done on the root complex itself), so that switching an entire bank of lanes always only 'removes' the last 8 lanes and you don't end up with 8 lanes on the wrong end of the slot.

Depending on where the switching is done (CPU, chipset, lane switcher, or an actual PCIe switch), the number of lanes, the firmware, the slots, the physical topology, and features like bifurcation and bus pausing, this all gets somewhat complicated.
On-point post, and it will unfortunately go over many people's heads, since PCIe lane mapping is a tedious and murky topic: it can vary wildly between motherboards, since Intel/AMD leave a lot of 'creative' room to AIBs for configuration, and some AIBs don't even make the exact configuration available.
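To make the 'few fixed profiles' point above concrete, here's a minimal sketch (hypothetical board, made-up profile table - not any real firmware) of what a lane mux amounts to: you pick one of a handful of canned options, you don't shuffle lanes around freely.

```python
# Hypothetical lane-mux profiles for one x16 root port feeding two physical
# slots. A sketch of the idea only -- not any particular board's firmware.
PROFILES = {
    "x16/x0": {"slot1": 16, "slot2": 0},
    "x8/x8":  {"slot1": 8,  "slot2": 8},
    "x0/x16": {"slot1": 0,  "slot2": 16},  # only if the physical layout has a reason for it
}

def configure(profile: str) -> dict:
    """Return the electrical width each slot gets; anything else simply isn't wired."""
    if profile not in PROFILES:
        raise ValueError(f"unsupported profile {profile!r}: lanes are hard-wired, "
                         "you can only pick from the mux's canned options")
    return PROFILES[profile]

print(configure("x8/x8"))  # {'slot1': 8, 'slot2': 8}
```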

I'm seeing the freakouts on other forums from people who bought a top-end Z790 believing they were getting the best of the best, only to wonder why their brand-new GPU in the first x16 slot runs at x8 as soon as they plug an SSD into M.2_1, losing 4 CPU-connected lanes in the process - and assuming it "must be a BIOS bug".
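If anyone wants to check whether it's really lane sharing rather than a BIOS bug: on Linux the negotiated link is sitting right in sysfs. A quick sketch - the device address below is a placeholder, substitute your GPU's from lspci:

```python
# Read a device's negotiated PCIe link from Linux sysfs.
# 0000:01:00.0 is a placeholder address -- substitute your GPU's from `lspci`.
from pathlib import Path

dev = Path("/sys/bus/pci/devices/0000:01:00.0")

def read(name: str) -> str:
    return (dev / name).read_text().strip()

print("current link:", read("current_link_speed"), "x" + read("current_link_width"))
print("max link:    ", read("max_link_speed"), "x" + read("max_link_width"))
# A current width of x8 against a max of x16 after populating M.2_1 is the
# lane sharing described above, not a BIOS bug.
```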

I would have bought an ASUS Z790 Extreme, but because its M.2_1 is wired for Gen5 and thus commingled with the first x16 slot, I went with the Z790 Hero instead, specifically for its M.2_1 wired with 4x Gen4 lanes that don't interfere. The same scenario existed with Z690.

I think AIBs overestimated how quickly Gen5 NVMe SSDs would become available and relevant, and also assumed that since "current GPUs can't saturate x8 Gen4 lanes anyway, users won't care" - which isn't accurate, since there are still differences between x8 and x16 with an RTX 4090, for example, even if small. AIBs also wanted the "Super duper Gen5 NVMe!" crap on box covers and marketing materials.

TL;DR: avoid high-end Z690/Z790 boards with a Gen5 M.2_1 slot. I haven't scrutinized X670/X670E enough to identify what compromises were made there, but I'd probably avoid boards with Gen5 M.2 slots there too.
 

CyklonDX

Well-Known Member
Nov 8, 2022
835
272
63
Keep in mind that the wiring often isn't done to support x8-wide signaling on those PCIe x4 slots (even when they are x16 in length), so even if it's a PCIe 5.0 x4 slot, if the wiring doesn't fully support x8/x16 widths you won't even get PCIe 3.0 x16 speeds out of a wider card (you'll be stuck at PCIe 3.0 x4 regardless).
This is a worry on any mobo.


I would recommend the MSI MEG X670E ACE, just for how the PCIe lanes are laid out.
You get proper lane sharing on two of the PCIe slots (so both can run at x8 when populated), and the third one is just x4, not shared with any M.2 or other device. Thus they are fully usable for whatever you want to put there. (Keep in mind that if the wiring isn't there, you may not get more than x4 out of it at any gen.)

Next up, it comes with four M.2 slots:
1x off the CPU with PCIe 5.0 x4 support
3x off the chipset at PCIe 4.0 x4 - these don't share any lanes with other devices either.

Thus I think you should be fine with this board. The question is whether the CPUs are enough for you in terms of cores...

(Big con: it's expensive, but this is the PCIe config you should be looking at. Potentially just look for an X670 board with PCIe Gen4 only.)
 

DaveLTX

Active Member
Dec 5, 2021
169
40
28
Keep in mind that the wiring often isn't done to support x8-wide signaling on those PCIe x4 slots (even when they are x16 in length), so even if it's a PCIe 5.0 x4 slot, if the wiring doesn't fully support x8/x16 widths you won't even get PCIe 3.0 x16 speeds out of a wider card (you'll be stuck at PCIe 3.0 x4 regardless).
This is a worry on any mobo.
Well, I am seeing a lot of errors here.
First, the chipset lanes are PCIe Gen4. If it's Gen5, it's coming from the CPU, which has 8 additional lanes aside from the usual 16 - and those 16 can be bifurcated into x4/x4/x4/x4, by the way (only AMD allows it, Intel doesn't; x8/x4/x4 is also a supported configuration on boards that implement it).
If you stick a Gen3 x8 device into a Gen5 x4 slot (assuming the motherboard routed those additional CPU lanes to it), it will run at Gen3 x4 because... the device only does Gen3, and only x4 lanes are available. And if the signal integrity isn't designed for Gen5, the slot will never support Gen5 regardless of the device.
Also, those slots tend to be x16 physically but x4 electrically (only the pins for four lanes are connected). You will never get x8 out of them, if that's what you meant; past that, it comes down to the device's own PCIe generation.
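Put differently, the link that trains is just the minimum of what the slot wires and what the card supports, in both width and generation. A toy sketch with approximate usable per-lane figures (post-encoding, ignoring protocol overhead):

```python
# Toy model of PCIe link negotiation: generation and width each settle at the
# lesser of slot and card. Per-lane figures are approximate usable GB/s.
GBPS_PER_LANE = {3: 0.985, 4: 1.969, 5: 3.938}

def negotiate(slot_gen, slot_width, card_gen, card_width):
    gen = min(slot_gen, card_gen)
    width = min(slot_width, card_width)
    return gen, width, round(GBPS_PER_LANE[gen] * width, 1)

# Gen3 x8 card in a Gen5 slot that is only wired x4 electrically:
print(negotiate(slot_gen=5, slot_width=4, card_gen=3, card_width=8))
# -> (3, 4, 3.9): links at Gen3 x4, roughly 3.9 GB/s
```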
 

CyklonDX

Well-Known Member
Nov 8, 2022
835
272
63
Maybe I didn't write it clearly.

If a PCIe x4 slot is meant to transfer data on x4 lanes, it ends at 32 pins physically on the connector.
1) On a PCIe x4 slot there are 16 pins available for transmitting data, out of 32 pins total.
2) It doesn't matter what gen it is in this case.
3) It's questionable on any mobo whether they fully wire the slot - 49 pins to support x8, or 82 pins to support x16 - on that third port labeled PCIe gen-whatever x4.

Now, if the wiring only goes to 32 pins and the card is, say, Gen3 x8, then you won't get Gen3 x8 speeds at all on your PCIe Gen4/5 x4 port.
It will still only transmit on x4 (a 32-pin slot), as there are only 16 pins physically available for data signaling, and your card only supports Gen3 signaling.

(To overcome that, you would need a PLX-chip riser card - say an x4 Gen4/5 uplink that presents x8/x16 to the device - where the PLX chip can take full advantage of the newer-gen signaling upstream and fully feed your older-generation x8/x16 card.)
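The rough arithmetic behind that idea - usable figures after line encoding; a sketch, not a benchmark:

```python
# Why a narrow newer-gen uplink can keep a wider older-gen card fed through a
# switch: approximate usable GB/s per link, after encoding overhead.
GBPS_PER_LANE = {3: 0.985, 4: 1.969, 5: 3.938}
bw = lambda gen, width: round(GBPS_PER_LANE[gen] * width, 1)

print("Gen4 x4 uplink:", bw(4, 4), "GB/s  ~  Gen3 x8 card:", bw(3, 8))    # ~7.9 each
print("Gen5 x4 uplink:", bw(5, 4), "GB/s  ~  Gen3 x16 card:", bw(3, 16))  # ~15.8 each
```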
 

DaveLTX

Active Member
Dec 5, 2021
169
40
28
Maybe I didn't write it clearly.

If a PCIe x4 slot is meant to transfer data on x4 lanes, it ends at 32 pins physically on the connector.
1) On a PCIe x4 slot there are 16 pins available for transmitting data, out of 32 pins total.
2) It doesn't matter what gen it is in this case.
3) It's questionable on any mobo whether they fully wire the slot - 49 pins to support x8, or 82 pins to support x16 - on that third port labeled PCIe gen-whatever x4.

Now, if the wiring only goes to 32 pins and the card is, say, Gen3 x8, then you won't get Gen3 x8 speeds at all on your PCIe Gen4/5 x4 port.
It will still only transmit on x4 (a 32-pin slot), as there are only 16 pins physically available for data signaling, and your card only supports Gen3 signaling.
You will hardly find any x4-electrical slot wired with 49 or 82 pins. They are exceedingly rare, showing up in only two cases: either the bifurcation capability allows it, or the manufacturer is extremely lazy.
 

CyklonDX

Well-Known Member
Nov 8, 2022
835
272
63
Yep, that's what I meant.

Even though it looks like an x16 slot with all 82 pins wired,
[attached image: photo of a full-length x16 slot]

it's likely only wired to 32 pins (4 lanes) anyway.
(As it's cheaper and easier to do so.)
 

mattventura

Active Member
Nov 9, 2022
447
217
43
Even if it's not possible to take Zen4's Gen5 16+4+4 and turn it into 16+8, I'd honestly settle for just a PLX chip. One of those PCIe 5.0x4 could become 4.0x8 or 3.0x16 without overprovisioning.

After doing some more research, there's two things I'm annoyed with:
1. Sites like PCPartpicker and a lot of shopping sites seem to take "I want a board with 2 x16 slots" as "I want a board with 2 physical x16 slots but they could be electrical x1".
2. The notion that we shouldn't bother with anything larger than an x8 off the chipset or push for more CPU lanes because consumers won't make use of it seems a little odd when board makers are squeezing 4-5 NVMe slots on their board as if a typical consumer is going to use all of those. Why not give us an x8 slot off the chipset with the option to bifurcate x4x4 for the people that would rather have 2 more NVMe drives?
 

DaveLTX

Active Member
Dec 5, 2021
169
40
28
Even if it's not possible to take Zen4's Gen5 16+4+4 and turn it into 16+8, I'd honestly settle for just a PLX chip. One of those PCIe 5.0x4 could become 4.0x8 or 3.0x16 without overprovisioning.

After doing some more research, there's two things I'm annoyed with:
1. Sites like PCPartpicker and a lot of shopping sites seem to take "I want a board with 2 x16 slots" as "I want a board with 2 physical x16 slots but they could be electrical x1".
2. The notion that we shouldn't bother with anything larger than an x8 off the chipset or push for more CPU lanes because consumers won't make use of it seems a little odd when board makers are squeezing 4-5 NVMe slots on their board as if a typical consumer is going to use all of those. Why not give us an x8 slot off the chipset with the option to bifurcate x4x4 for the people that would rather have 2 more NVMe drives?
1) PLX chips are not available for Gen5 yet, and they have been too expensive to implement on Gen4 mainstream boards in general as well.
And a PLX won't magically turn Gen5 into Gen3 if it's a Gen3 PLX; it has to be Gen5 on the input side.
2) Just because a board has 4-5 NVMe slots doesn't mean they all use their full bandwidth at the same time.
Why no x8 slot? Because there usually isn't enough HSIO to do that without compromising on more for the general public. Most people want more NVMe slots, not a bifurcated x8 slot.
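On the "they won't all run flat out at once" point: everything behind the AM5 chipset already funnels through the x4 Gen4 uplink to the CPU, so oversubscription is baked in anyway. A rough sketch of the numbers (usable figures, assuming three chipset-attached Gen4 x4 M.2 slots):

```python
# Chipset-attached devices on AM5 share a single PCIe 4.0 x4 uplink to the CPU,
# so stacking Gen4 x4 M.2 slots behind it is oversubscribed by design.
GEN4_LANE_GBPS = 1.969                  # approx usable GB/s per Gen4 lane
uplink = 4 * GEN4_LANE_GBPS             # ~7.9 GB/s total to the CPU
drives = 3                              # assumed: three chipset-attached Gen4 x4 drives
aggregate = drives * 4 * GEN4_LANE_GBPS
print(f"{aggregate:.1f} GB/s of drive bandwidth behind a {uplink:.1f} GB/s uplink "
      f"({aggregate / uplink:.0f}x oversubscribed)")
```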

For example, I worked on a few Tyan FT83A-B7129 systems, and the PLX extension board on top cost MORE than the rest of the system without it - and that was only Gen3!

Realistically, a <5% userbase doesn't really make sense to cater to.
 

odditory

Moderator
Dec 23, 2010
384
68
28
1. Sites like PCPartpicker and a lot of shopping sites seem to take "I want a board with 2 x16 slots" as "I want a board with 2 physical x16 slots but they could be electrical x1".
It's because the manpower required to research how slots are configured electrically - which may require intensely studying block diagrams that aren't even always available - would make it a nonstarter. It's easily a full-time research job for 1-2 people. And that's not the site's business model, since 99.9% of users wouldn't value the information enough to warrant the cost, and don't care.

Therefore auto-scraping specs from manufacturer product pages is the only feasible way to offer any info at all, and we gotta do our own homework.
 

UhClem

just another Bozo on the bus
Jun 26, 2012
438
252
63
NH, USA
... why no x8 slot? Because there isn't enough HSIO to do that usually without compromising on more for the general public.
Maybe you're referring to the AM5 platform ... does it actually allow for x8, via HSIO?

On the Intel [WZ][67]xx, it's because the chipset itself, by design, ONLY allows for (soft-strapping of its PCIe lanes to) x1, x2, or x4. There was one Intel chipset, in the early PCIe3 days, that did allow config'ing x8. While that chipset only had an x4 Gen3 interconnect to the CPU, that x8 still had value, because it allowed full-bandwidth use of one more PCIe x8 Gen2 card (which is what everyone had a pile of at that time). On the [WZ][67]xx, this is especially a shame, since the interconnect is Gen4 x8 (and now we've got piles of Gen3 [and a few Gen4] x8s).
======
"It's deja vu all over again."
 

mattventura

Active Member
Nov 9, 2022
447
217
43
Maybe you're referring to the AM5 platform ... does it actually allow for x8, via HSIO?

On the Intel [WZ][67]xx, it's because the chipset itself, by design, ONLY allows for (soft-strapping of its PCIe lanes to) x1, x2, or x4. There was one Intel chipset, early PCIe3 days, that did allow for config'ing x8.
I'm not sure, but I've also looked at AM5 boards and they seem to have the same issue - x16 or x8/x8 main slots, but nothing else bigger than an x4.

1) PLX chips are not available yet on Gen 5 and have been too expensive to implement for gen 4 mainstream boards in general as well
And a PLX won't turn Gen 5 into Gen 3 magically if its a Gen 3 PLX, it has to be Gen 5 on the input
True, I was judging it based more off of gen3 prices and assuming gen4/5 prices would fall sooner than they actually would. But I also wonder if a simple 2-port "bridge" rather than switch would be any cheaper (hard to research this when googling "PCIe bridge" just pulls up PCIe to PCI(-X) bridges).

and now we've got piles of gen3 [and a few gen4] x8s
CMIIW, but in Mellanox-land I'd have to go all the way up to ConnectX-6 to get Gen4. A shame, because 3-series dual 40GbE cards are down to $25, but tossing one in an x4 slot turns it into a ~32 Gb NIC.
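The back-of-the-envelope on that, ignoring everything beyond line encoding:

```python
# Why a dual-port 40GbE card behind a Gen3 x4 link behaves like a ~32 Gb NIC.
gen3_lane_gbit = 8 * (128 / 130)     # 8 GT/s minus 128b/130b encoding ~= 7.88 Gbit/s
x4_link_gbit = 4 * gen3_lane_gbit    # ~31.5 Gbit/s usable, before protocol overhead
print(f"Gen3 x4: ~{x4_link_gbit:.1f} Gbit/s vs 40 Gbit/s per port (two ports)")
```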
 

DaveLTX

Active Member
Dec 5, 2021
169
40
28
I'm not sure, but I've also looked at AM5 boards and they seem to have the same issue - x16 or x8/x8 main slots, but nothing else bigger than an x4.


True, I was judging it based more off of gen3 prices and assuming gen4/5 prices would fall sooner than they actually would. But I also wonder if a simple 2-port "bridge" rather than switch would be any cheaper (hard to research this when googling "PCIe bridge" just pulls up PCIe to PCI(-X) bridges).
A PLX is not a switch in quite that sense. A switch can go from two x4 bifurcated slots to a full x8, but that depends on the HSIO being able to deliver a full x8 slot, since x4/x4 can't be combined that easily from a chipset.
Besides, as I said, 5% of the market is not worth charging everyone extra for, even if it actually works.

For example, the fixed blocks on most PCHs only allow blocks of x4 at most. Any more would require more hardware to implement bifurcation down to x1, which is necessary for WiFi, 1GbE, the LPC bus, SuperIO, etc.

Notice how the x16 slots usually bifurcate down to x4/x4/x4/x4 at most? That's why.
For Intel it's only x8/x8, because they have less bifurcation hardware on mainstream CPUs, while the IP blocks in the Zen IODs are lifted right off the server chips - AMD uses the same IP to save on development time.
 

unwind-protect

Active Member
Mar 7, 2016
416
156
43
Boston
Truth be told, I was settling into the idea of swallowing all of the above and going with the Asus Strix X670E-E, but I just don't see the value in buying such a severely maimed and imbalanced platform. Intel is even worse, with how dated its offerings with >24 lanes are. Looks like I'll be scouring eBay for something like an E5-2699 v3, shoving it into my Sabertooth X99, and calling it another year - perhaps next year AMD's Storm Peak might finally nudge me over the edge, if yesterday's rumour is anything to go by.
It is ridiculous how the "desktop" platforms have stuck with the same limitations for so long:
- max 128 GB RAM. I was hoping per-module capacity would double with DDR5
- very few PCIe lanes
- no boards ever for registered RAM
- meanwhile mainboard prices have exploded for no real value in return

You would think they're trying to coerce people into upgrading to a more expensive platform - only they are not providing such a platform.
 

CyklonDX

Well-Known Member
Nov 8, 2022
835
272
63
You, and most people here, are not the target users of desktop platforms.
They are stuck because there is no requirement for a desktop platform to have more.

You offer them speed over capacity. No single desktop application (including games) uses anywhere near 24 GB of RAM.
(There are some, but they are outliers and typically workstation-type workloads rather than things meant to run on a desktop.)

Support for 128 GB is already overkill.
A 2nd GPU? Overkill.
ECC RAM is supported on some desktop platforms - overkill.
Overpriced boards? Whales buy them - let's see how many we can sell. Profit++.


(A workstation from 2013 can have 2x 14-core v2 CPUs, 768 GB of RAM, and a lot of PCIe lanes - you could get 6-8 PCIe Gen3 x8 slots...)
 

acquacow

Well-Known Member
Feb 15, 2017
787
439
63
42
If you have an X99 board, just swap that 5820K for an $18 5930K on eBay and then you'll have 40 lanes and can fully populate your PCIe slots with no issue.

If you want more performance, get a used 6950X (my current CPU) and then you have a much better core count and can play any modern stuff without issue.

I currently have:
1 onboard m.2
1 RTX 3080
1 10Gig nic
1 expansion card with extra m.2s
1 pci-e SSD.

-- Dave
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,640
2,058
113
If you have an X99 board, just swap that 5820K for an $18 5930K on eBay and then you'll have 40 lanes and can fully populate your PCIe slots with no issue.

If you want more performance, get a used 6950X (my current CPU) and then you have a much better core count and can play any modern stuff without issue.

I currently have:
1 onboard m.2
1 RTX 3080
1 10Gig nic
1 expansion card with extra m.2s
1 pci-e SSD.

-- Dave
I'm waiting for the 6950X to drop in price for my old X99, currently running a 1620 v3 :D :D May just throw a 2667 v3 in there...