Why such a poor Epyc Workstation MB lineup?


Frank173

Member
Feb 14, 2018
75
9
8
Does anyone know whether any motherboard maker is planning a dual EPYC motherboard with six PCIe x16 slots? 128 PCIe lanes, 96 of them fully usable, and not a single workstation motherboard on the market offers it.

In contrast, the Supermicro X11DPG-QT for Xeons looks absolutely enticing. I urgently need a board where I can use four GPUs for AI training plus two more x16 slots, one for a 100GbE network card and one for an NVMe cluster card. But most Xeon chips I have seen with more than 10 cores are too slow. I was really looking forward to the new EPYC 16-core CPU that can boost above 4 GHz, but where are the motherboards? The EPYC motherboard lineup is absolutely disappointing so far. Can we expect any fireworks in Q1 2019?
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
Optimizing EPYC for air-cooled GPUs means you get a maximum of four usable slots in a standard form factor (E-ATX or ATX).

I think Rome is more interesting for what you want. PCIe Gen4 and the I/O chip will be a big upgrade over the 4-NUMA-node Naples for this type of application. If you want PCIe Gen4, you will need to wait for the Rome motherboard spins. Those will cost ~$100 more for the higher-end PCB.

Servers have this area well covered.
 

Frank173

Member
Feb 14, 2018
75
9
8
Well, I hear your optimism re: Rome, but a couple of your points are not really applicable to GPU compute systems anymore, especially when using NVIDIA NVLink technology. The new TITAN RTX cards that are about to come out can be linked via a 100 GB/s bridge, hence it does not really matter how the GPUs communicate with each other via PCIe (as you yourself pointed out in an earlier article, Xeon Skylake-SP exposes PCIe lanes via different points on the mesh and hence suffers from similar GPU-to-GPU communication bottlenecks). PCIe Gen4 "only" provides about 64 GB/s bidirectional throughput. So I have no idea how Rome is going to improve life for the GPU compute crowd.
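
For reference, here is how that ~64 GB/s figure works out (a quick back-of-the-envelope sketch; the 100 GB/s NVLink number is the TITAN RTX bridge spec mentioned above):

Code:
# PCIe Gen4 runs at 16 GT/s per lane with 128b/130b encoding.
GT_PER_LANE = 16                    # GT/s, PCIe Gen4
ENCODING    = 128 / 130             # 128b/130b line coding overhead
LANES       = 16

one_way = GT_PER_LANE * LANES * ENCODING / 8   # /8 converts bits to bytes
print(f"Gen4 x16 one way:       {one_way:.1f} GB/s")      # ~31.5
print(f"Gen4 x16 bidirectional: {2 * one_way:.1f} GB/s")  # ~63.0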

Would you not agree it is curious that not a single (non-PLX) EPYC board exposes four or more x16 PCIe slots into which double-width cards can be installed? I am not sure how PCIe Gen4 is going to solve or improve on this; it is a space issue, not a throughput issue. I say this because the following Xeon board clearly proves it can be done on a workstation board:

P.S.: Everything we have seen re: Rome motherboards so far looks EXTREMELY disappointing. More PCIe x16 slots on board, but they are horribly spaced, preventing double-width cards (by skipping some slots, four double-width cards can be installed and nothing else). The Xeon board below shows that four double-width cards fit PLUS two full x16 slots remain available for one full-length and one half-length card, which is AWESOME. Why can't this be done on an EPYC-based board? I see zero technical limitations.



 

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
We have the MZ01-CE0 in the lab: MZ01-CE0 (rev. 1.0) | Server Motherboard - GIGABYTE B2B Service

There you can see four double-width cards.

Rome has two public bits:
  1. PCIe Gen4. Not just for GPU to GPU, but also for GPU to NIC. It is one fewer PCB hop in EPYC Gen2
  2. I/O hub will lower average DRAM to PCIe transfer latency
There are a few more.

That board is 15.12" x 13.2", which is a non-standard form factor. There is not enough room on a 12" motherboard to fit that many PCIe slots in a similar manner plus the CPU with its 8 DIMM slots and rear I/O.

On the Xeon side, there is a big enough market to support these types of designs. AMD is still not there. One other point about Rome: it should increase AMD's market share. That means more units shipped, and everyone will have to re-spin for Gen4, meaning a design cycle. That will help get more corner-case products to market.
 

Frank173

Member
Feb 14, 2018
75
9
8
The Gigabyte board exposes four double-width slots but nothing else; 32 available lanes remain unused. As mentioned, for many users the requirement for a workstation GPU platform is four double-width cards PLUS 1-2 additional x16 slots for networking cards or the like. No EPYC board without PLX trickery offers that. Skylake-based boards do. Kudos for your point re: GPU to NIC; that is potentially a plus for GPU compute in a server cluster.

But while you might be right about PCIe slot space on standard-sized boards, why should an end user care whether the board is standard-sized or not, as long as cases exist that fit in a workstation? (Note that we are talking about workstations here, not rack chassis.) So in the end, your last paragraph is probably the one and only real reason for the lack of availability: it is NOT a technical limitation but a lack of market share. That is exactly where I wanted to direct the discussion; it is not a technical issue but a demand issue.

My take is that as long as such boards do not exist, it is impossible for demand to pick up in this specific segment. It really does not matter how many EPYC boards serve the storage or networking markets; that has zero bearing on demand for GPU compute solutions. Either EPYC board makers choose to serve this segment or they don't, and the only way to push up demand for GPU compute boards is to make GPU compute boards with compelling hardware designs. So I disagree with your notion that simply waiting for overall AMD demand to pick up will produce more EPYC boards targeted at GPU compute. The only way is to actually make some.

Even the EPYC server boards are a huge turn-off for GPU compute work. Gigabyte's server board uses PLX switches to hang two GPU slots off each x16 uplink. What's the point when there is no additional PLX switch connecting the other PLX switches? In the end, traffic again goes through the bottlenecked CPU fabric. Sigh...
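
To make that concrete, here is a toy sketch of such a topology (the switch names and layout are illustrative assumptions, not taken from any particular board):

Code:
# Two PLX switches with two GPUs each. Same-switch peer-to-peer traffic
# stays inside the switch; cross-switch traffic must cross the CPU fabric
# and share a single x16 uplink.
SWITCH_OF = {"gpu0": "plx0", "gpu1": "plx0", "gpu2": "plx1", "gpu3": "plx1"}

def p2p_path(a, b):
    """Return the hops a peer-to-peer transfer takes from GPU a to GPU b."""
    if SWITCH_OF[a] == SWITCH_OF[b]:
        return [a, SWITCH_OF[a], b]                    # stays inside one switch
    return [a, SWITCH_OF[a], "cpu", SWITCH_OF[b], b]   # crosses the CPU fabric

for a, b in [("gpu0", "gpu1"), ("gpu0", "gpu2")]:
    print(f"{a} -> {b}: {' -> '.join(p2p_path(a, b))}")

# gpu0 -> gpu1: gpu0 -> plx0 -> gpu1                 (full switch bandwidth)
# gpu0 -> gpu2: gpu0 -> plx0 -> cpu -> plx1 -> gpu2  (shares one x16 uplink)

On a real system, nvidia-smi topo -m reports the same relationships: PIX for two GPUs behind one switch, PHB/NODE/SYS when the path crosses the host.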

 

epycftw

New Member
Jul 1, 2021
5
0
1
I'm going to resurrect this and ask again, cos it's now two and a half years later so maybe:

Is there an extant single socket EPYC workstation motherboard with 4 d o u b l e s p a c e d x16 pcie slots so it doesn't cover up the rest?

My dream plan is 4x MI100s with a 7443P for a little 1600 W desktop supercomputer, but it kills me to have to cover up PCIe slots that, if rerouted, would give me room for an InfiniBand card (I have some FDR cards and a 36-port switch cheap off eBay) and another GPU to run my desktop on :D
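
The power math works out, just barely (a rough sketch assuming AMD's public figures of ~300 W board power per MI100 and 200 W TDP for the 7443P):

Code:
# Rough power budget for the dream build above.
GPU_W, N_GPUS, CPU_W = 300, 4, 200

load = GPU_W * N_GPUS + CPU_W
print(f"GPUs + CPU: {load} W")             # 1400 W
print(f"Left of 1600 W: {1600 - load} W")  # ~200 W for board, RAM, drives, fans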

If not, I'm going to vent and call it a tragic market failure that nobody can make my dream come true :/
 

i386

Well-Known Member
Mar 18, 2016
4,217
1,540
113
34
Germany
Is there an extant single socket EPYC workstation motherboard with 4 d o u b l e s p a c e d x16 pcie slots so it doesn't cover up the rest?
The (E-)ATX specification tops out at 7 PCIe slot positions at a ~20 mm slot pitch; 8 slot positions (i.e. 4 double-width PCIe cards plus free slots) would require a proprietary form factor.
So far I have only seen Supermicro mainboards with non-standard form factors, but those are for special chassis and riser boards.
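
In slot positions, the math looks like this (a sketch; the numbering is illustrative):

Code:
# Why four double-width GPUs eat an entire 7-slot (E-)ATX board.
N_SLOTS   = 7
GPU_SLOTS = [1, 3, 5, 7]       # x16 slots spaced two positions apart

covered = set()
for pos in GPU_SLOTS:
    covered.add(pos)           # the slot the card plugs into
    covered.add(pos + 1)       # the slot its cooler blankets

free = [s for s in range(1, N_SLOTS + 1) if s not in covered]
print("covered:", sorted(covered))  # [1..8]; position 8 hangs past the board edge
print("free:", free)                # [] -- nothing left for a NIC or NVMe card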
 

alex_stief

Well-Known Member
May 31, 2016
884
312
63
38
To be clear: you want FOUR PCIe 4.0 x16 slots, spaced two slots apart, PLUS another PCIe slot that remains accessible when filling those four slots with dual-slot GPUs?
That's really not easy to fit into any common form factor.
 

epycftw

New Member
Jul 1, 2021
5
0
1
Why does it have to fit into a common form factor? Whatever form factor the dual-Xeon board pictured in December 2018 (third post in this thread) uses would work. Just do that, but with EPYC? Why is that apparently impossible?
 

bayleyw

Active Member
Jan 8, 2014
291
95
28
Why does it have to fit into a common form factor? Whatever form factor the dual-Xeon board pictured in December 2018 (third post in this thread) uses would work. Just do that, but with EPYC? Why is that apparently impossible?
The X*DRG-Q line looks slick on paper but isn't actually very good: it is very challenging to fit in a case, and four GPUs next to each other is thermally... poor. If you are going to go with datacenter cards, there are solutions like this, but I'm sure that is a $10k+ system.
 

BlueFox

Legendary Member Spam Hunter Extraordinaire
Oct 26, 2015
2,059
1,478
113
It's not impossible, just low demand. The pictured motherboard (Supermicro X11DPG-QT) is proprietary in size and intended to be sold as part of a workstation barebone. It will not fit in a standard EATX chassis. If you want 128 lanes of PCIe 4.0, you can get a SYS-740GP-TNRT.
 

jpmomo

Active Member
Aug 12, 2018
531
192
43
Never say never when it comes to this stuff! PM me for details. BTW, it also supports Milan!

[image attachment]
 

epycftw

New Member
Jul 1, 2021
5
0
1
Never say never when it comes to this stuff! PM me for details. BTW, it also supports Milan!

[image attachment]
Aww, way to get my hopes up.

It's using their S8030GM2NE motherboard, which doesn't allow four double-slot GPUs *plus* others. Using a double-slot card in the top slot mechanically covers up the other. EPYC is poorly served by its workstation motherboard lineup compared to Intel/Xeon. I just want to run four double-slot GPUs and not cut off three other PCIe slots :/
 

jpmomo

Active Member
Aug 12, 2018
531
192
43
they do mention the following:
  • Processor: AMD EPYC 7002/7003 w/ cTDP TBD
  • Memory: (8) DDR4-3200 DIMM slots (8 memory channels)
  • PCIe expansion slots: (5) PCIe Gen 4.0 x16, (1) PCIe Gen 4.0 x8
  • Supports up to four double-wide GPU cards + one additional card on top

Not sure how they do it, as I also have a couple of those same motherboards and they only have the 5 Gen4 x16 slots, unless they are using one of the SlimSAS NVMe x8 connectors with an adapter-type cable. They also only spec a 2x U.2 backplane, whereas the motherboard supports 4x U.2 drives. That would explain why the workstation only supports 2 U.2 drives: they are utilizing the other connector for that extra PCIe slot you were looking for :)
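
A quick lane-count sanity check of that guess (all numbers as stated above, not verified against Tyan's documentation) shows the re-routing is budget-neutral:

Code:
# Lane bookkeeping for the SlimSAS-adapter theory.
TOTAL_LANES = 128              # single-socket EPYC 7002/7003
slots_x16   = 5 * 16           # the five Gen4 x16 slots on the bare board
u2_board    = 4 * 4            # four U.2 drives at x4 each (board maximum)
u2_ws       = 2 * 4            # the workstation wires up only two...
extra_slot  = 8                # ...freeing a SlimSAS x8 for the sixth slot

print(slots_x16 + u2_board)              # 96 lanes -- board as specced
print(slots_x16 + u2_ws + extra_slot)    # 96 lanes -- same budget, re-routed
print(TOTAL_LANES - (slots_x16 + u2_board))  # 32 lanes left for onboard devices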
 

mirrormax

Active Member
Apr 10, 2020
225
83
28
If I were to go 4-GPU in a desktop system (not a rack), I would for sure go the custom-loop route, and then you could run single-slot cards.