need a recommendation for motherboard..


BLinux

cat lover server enthusiast
Jul 7, 2016
2,669
1,081
113
artofserver.com
i'm looking for a motherboard recommendation. now with new AMD Zen CPUs providing plenty of PCIe lanes, i'm wondering if there are better options that meet my requirements, which are:

1) at least 7 (more would be better) PCIe slots that are x8 wide or larger
2) low power consumption (prefer single socket)
3) fit within a $500 budget (for both motherboard + CPU)

I don't mind used equipment, and prefer it. I don't need a lot of RAM, even 2GB would be plenty. I don't need a lot of CPU, heck i'd be okay with a single core if such a thing still existed. I mainly need lots of PCIe slots and low power consumption.

Is there any combination of used AMD Ryzen or Epyc stuff that can do this?

This will be replacing my current combination of Supermicro X8DTH-iF with single L5630, which has worked great for this, and cost me next to nothing, but there are occasions where I wish I had PCIe 3.0. I had to rule out anything Supermicro X9 or newer due to not enough PCIe slots, or requiring dual sockets (double power consumption), and/or beyond budget. Just wondering if there are better options with newer AMD stuff?
 

BLinux

cat lover server enthusiast
Some backstory might help. What do you need all the lanes for?
i do a lot of firmware flashing on PCIe cards, over 100 cards at a time. booting up and shutting down takes a lot of time, so the more PCIe slots i have, the larger the batch of PCIe cards I can flash in one pass. time costs me money, so the more i can reduce the amount of time needed, the better. The machine can be on for 12+ hours a day, and several days at a time when in operation, so low power consumption helps. naturally, the programs i run require little CPU and RAM.
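In concrete terms, a pass like that boils down to one flash command per controller. Here's a minimal sh sketch, assuming LSI-style HBAs and the sas2flash utility; the firmware file name and the dry-run wrapper are illustrative, not BLinux's actual tooling:

```shell
#!/bin/sh
# Batch-flash sketch: one sas2flash invocation per controller.
# Controller indices run 0..N-1 the way sas2flash enumerates them.
flash_batch() {
    # $1 = number of controllers, $2 = firmware image
    i=0
    while [ "$i" -lt "$1" ]; do
        # Printing the command makes this a dry run;
        # drop the echo and quotes to actually flash.
        echo "sas2flash -c $i -f $2"
        i=$((i + 1))
    done
}

# Example: size the batch from lspci (vendor ID 1000 = LSI/Broadcom):
#   flash_batch "$(lspci -d 1000: | wc -l)" 2118it.bin
```

Since the per-card flash time is small next to the boot/shutdown overhead, the win really does come from cramming more slots into each boot cycle, as described above.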
 

cesmith9999

Well-Known Member
Mar 26, 2013
1,417
468
83
Or you can just have a bunch of PCIe expansion chassis
https://www.amazon.com/StarTech-Express-Slot-Expansion-System/dp/B000UZL1GC/ref=sr_1_1_sspa?crid=3TJU9WKTBGZ8E&keywords=pcie+expansion+chassis&qid=1559595386&s=gateway&sprefix=PCIe+expansion,aps,230&sr=8-1-spons&psc=1

Is there a specific reason that you need more than 1x PCI slot when you are flashing PCIe cards?

Chris
 
  • Like
Reactions: cactus

Spartacus

Well-Known Member
May 27, 2019
788
328
63
Austin, TX
Edit: cesmith beat me to it.
Do you just need x8 slot power, or the full bandwidth for card testing?
If just power, perhaps look at a mining-style board/enclosure and PCIe extensions; you could get something like 18 x1 slots on those.
 
  • Like
Reactions: cesmith9999

BLinux

cat lover server enthusiast
That's interesting, but unfortunately won't work. I need to verify that all x8 PCIe lanes are working; it's one of the most common failures and I test for it.

Or you can just have a bunch of PCIe expansion chassis
https://www.amazon.com/StarTech-Express-Slot-Expansion-System/dp/B000UZL1GC/ref=sr_1_1_sspa?crid=3TJU9WKTBGZ8E&keywords=pcie+expansion+chassis&qid=1559595386&s=gateway&sprefix=PCIe+expansion,aps,230&sr=8-1-spons&psc=1

Is there a specific reason that you need more than 1x PCI slot when you are flashing PCIe cards?

Chris
Yeah, see above.

The external PCI expansion chassis is interesting... but how does that work? what do i connect to? is there some sort of PCIe over (some interface) bridge technology?
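On Linux, that lane test can lean on `lspci -vv`, which reports the negotiated width on each device's "LnkSta:" line. A small sketch; the vendor-ID filter and the sed parse are assumptions about the setup, so adjust for your cards:

```shell
#!/bin/sh
# Pull the negotiated link width out of an lspci -vv "LnkSta:" line,
# so a card that trained at x4 or x1 instead of x8 stands out.
link_width() {
    # $1 = one "LnkSta:" line from `lspci -vv`
    echo "$1" | sed -n 's/.*Width \(x[0-9]*\).*/\1/p'
}

# Typical use (as root; vendor ID 1000 = LSI/Broadcom, adjust as needed):
#   lspci -d 1000: -vv | grep 'LnkSta:' | while read -r line; do
#       link_width "$line"
#   done
```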
 

RageBone

Active Member
Jul 11, 2017
617
159
43
I haven't stumbled upon any AM4 board that has more than maybe 3 slots bigger than x8.
Basically all X370 and X470 boards have 3 x16 slots, and the rest are x1, or x4 if you're lucky.
On the Zen 2 Ryzen 3000 and X570 side, where not much is currently known, it looks similar.
I really hope the X570 chipset gets some PLX-like capability so that we can get more 3.0 lanes out of it, and I can put two GPUs at x8 electrical, an x8 electrical Mellanox NIC, and an x4 video-capture card + NVMe drives into my next AM4 workstation.

So I guess it has to be SP3 / TR4 to fully satisfy our demands.
TR4 only takes UDIMMs and reaches maybe 5 slots, but not 7, in the >= x8 category.

SP3 Epyc it should be then, but here ends my knowledge of the used market; I haven't seen anything good yet here in Germany.
Maybe I'm looking in the wrong place.

The only setup I know of that could reasonably be built with enough slots, plausibly low power, and enough horsepower within the budget is an:
Asus X99-E WS, or
ASRock X99 WS
with any E5 v3 or v4 CPU.
My current favorite would be the E5-2628L v4 ES QHV8, which won't run on those boards without a BIOS mod because it's A0 silicon.
That CPU is currently about $60 from Japan, and even if the board is $400 new, that's still in budget.
Those boards should be falling towards 200€/$ used and maybe 300 "new" if you're lucky, so well inside budget.

Both boards have 7 x16 slots with PLX chips that even allow peer-to-peer traffic, in case that matters.
So you can run x16/x8/x8/x8/x8/x8/x8 or x16/0/x16/0/x16/0/x16 electrically. (On the Asus I know that for sure!)
You could pair that with any DDR4 you like; even UDIMMs and LRDIMMs should work.

But X99 has the obvious Spectre, Meltdown, ZombieLoad and ME problems as arguments against it.

Ha, just looked at my Fujitsus: even a sub-$150 D3348 B?? meets the PCIe-slot requirement.
Not electrically, but physically: 3 x8 and 4 x16 slots.

Just saw a Supermicro X10SRi-F: 6 x8 and 1 x16, physically.

Does Fujitsu make Epyc boards? Since they don't change features, an Epyc board could look identical.
The previous Sandy/Ivy Bridge board generation from Fujitsu was the same, so you could go D2348 B?? if I remember the part number correctly.
The Asus X79-E WS would apply here too, but let's put that aside.


Searching Epyc boards, the:
Gigabyte MZ3X ..
ASRock EPYCD8 (-2T)
are the only ones with 7 slots.
The Supermicro H11SSL and Asus KNP ... only have 6.

I favor the ASRock EPYCD8, to be honest, but that is $359 on Newegg; the D8-2T is $429.
That doesn't leave much room for a CPU in that budget.

EDIT 2, with prices:
Let me just build a list of boards with those 7 slots:
Intel LGA2011-0:
Fujitsu D3128
ASRock EPC602D8A - eBay new: about 250€
Asus X79-E WS - eBay used: 400€

Intel LGA2011-3:
Asus X99-E WS - eBay used currently: 350€
ASRock X99 WS - not available on eBay
Fujitsu D3348 BXX - easily 100€, could sell you one; starting at 120€ on eBay used
Supermicro X10SRi-F - about 200€ on eBay

Intel LGA2066:
- I don't want to do this

Intel LGA3647:
- could be interesting, but other than the DCI interface, I have no reason to like this platform. The rather expensive coolers have to be thought of too.

AMD SP3:
Gigabyte MZ3X ..
ASRock EPYCD8 (-2T) - $359 ($430) from Newegg
 
Last edited:

Spartacus

Well-Known Member
Last edited:
  • Like
Reactions: cesmith9999

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
1,394
511
113
i'm looking for a motherboard recommendation. now with new AMD Zen CPUs providing plenty of PCIe lanes
Ryzen is the cheap one, but primarily because it doesn't have a lot of PCIe lanes available; you'd need to go to Threadripper or Epyc to have any hope of using all the slots at x8, and even s/h I think a first-gen Threadripper setup would likely cost you more than $500 (but I'm a Brit and don't know what the US s/h market is like). As RageBone points out, there are only two boards out there with 7 PCIe slots and you're unlikely to find them s/h at all.

P.S. I assume the PCIe cards you use aren't hot-pluggable?
 
  • Like
Reactions: RageBone

BLinux

cat lover server enthusiast
@RageBone thanks for the detailed reply... lots of interesting models you mentioned. although, i guess 7 PCIe x8 slots are still hard to find.

@Spartacus that X9DRX+-F definitely sounds interesting. I didn't know such a thing existed! challenge will be to find a case that works for me...

thanks guys... you guys definitely dug up some options I didn't know about so I have some homework to do... appreciate all the responses!
 
  • Like
Reactions: RageBone

zir_blazer

Active Member
Dec 5, 2016
355
128
43
Have you considered PCIe Hotplug? I know that it is theoretically possible, the problem is usually the mechanical side. You would need a specific Case that can allow you to shut down power to a PCIe Slot and do safe removal. Sadly, that is enterprise level stuff usually.


You could also consider doing it via an external reprogrammer. Even if the ROMs are soldered, you may be able to get around with a clamp.
 

BLinux

cat lover server enthusiast
Have you considered PCIe Hotplug? I know that it is theoretically possible, the problem is usually the mechanical side. You would need a specific Case that can allow you to shut down power to a PCIe Slot and do safe removal. Sadly, that is enterprise level stuff usually.


You could also consider doing it via an external reprogrammer. Even if the ROMs are soldered, you may be able to get around with a clamp.
I'm not that familiar with PCIe hotplugging, although I've tried it a few times with various cards. In some cases, I was able to tell the Linux kernel to re-scan the PCI bus, detect the device, and load the driver. In other cases, it didn't work. And in the worst case, I've ended up killing a PCIe card (it would no longer turn on again). So I guess that's deterred me from hotplugging, but maybe if you're familiar with it you can educate me on what's needed for it to work? You mention enterprise level stuff; would the current X8DTH-iF Supermicro server board be in that category, or do you mean something else by "enterprise level"?
 

zir_blazer

Active Member
I'm not that familiar with PCIe hotplugging, although I've tried it a few times with various cards. In some cases, I was able to tell the Linux kernel to re-scan the PCI bus, detect the device, and load the driver. In other cases, it didn't work. And in the worst case, I've ended up killing a PCIe card (it would no longer turn on again). So I guess that's deterred me from hotplugging, but maybe if you're familiar with it you can educate me on what's needed for it to work? You mention enterprise level stuff; would the current X8DTH-iF Supermicro server board be in that category, or do you mean something else by "enterprise level"?
I'm not familiar with it, but I know that it is possible. Supposedly, the "proper" way to do this stuff is to tell the OS to unload Drivers, power down the card, remove it, plug a new one, power it on, then load the OS Drivers. Assuming Software support is there, it should work.

By enterprise level stuff, I mean the cases themselves. Check this. Your regular Motherboard provides standard PCIe Slots, which may work with hotplugging but are not safe for these scenarios. In systems with real PCIe Hotplug support, the PCIe Card doesn't go directly in the Motherboard but instead sits in a special container. It should be similar in nature to the powered PCIe risers that were used for mining cards but with a switch button to independently power it off.
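In sysfs terms, the software half of that procedure is just two writes on Linux. A dry-run sketch that only prints the commands; the PCI address is a placeholder, and slot power control is the separate hardware part this doesn't cover:

```shell
#!/bin/sh
# Print the sysfs writes for detaching a PCIe device from the kernel
# and rediscovering it after a swap. Run the printed commands as root,
# and only on a platform that actually tolerates hotplug.
pci_swap_cmds() {
    dev=$1   # PCI address placeholder, e.g. 0000:03:00.0
    echo "echo 1 > /sys/bus/pci/devices/$dev/remove"
    # ...physically swap the card between these two steps...
    echo "echo 1 > /sys/bus/pci/rescan"
}
```

The remove step unbinds the driver and deletes the device from the kernel's view, which is the "unload drivers" part of the sequence described above; the rescan makes the kernel re-enumerate the bus and bind drivers to whatever it finds.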
 
Last edited:
  • Like
Reactions: Spartacus

EffrafaxOfWug

Radioactive Member
PCIe hotplug is certainly possible, but from what I've read there are rather a lot of ducks to line up in a row (namely hardware, kernel/driver, and both the device and motherboard firmware) before it'll work, and for a great many devices this simply isn't a factor in their design; so even if it did get thought about and implemented, it might not be sufficiently tested.

I've only tried it once with an LSI (?) HBA and it worked, albeit after some futzing about with either /sys/bus/pci/rescan or /sys/bus/pci/devices/<addr>/enable.

Interesting post here from a purported HW engineer.
 

RageBone

Active Member
Linus Tech Tips did a video about it a while back. It was rather funny how much time he wasted instead of using his head and reading stuff, but I guess I do that too. ****en Windows 10 iSCSI boot......
Edit: Here it is:
Officially, there are ACPI specs for hot-plugging, and those are kind of funny.
I mean, PCIe hotplug and un-hotplug is no problem.
There is also a spec for CPU and RAM hotplug.
RAM un-hotplug isn't supported though :( Who would have thought.

PCIe un-hotplug shouldn't need any hardware-specific "buttons" to tell the system that the card will be removed soon.
At least in my opinion.

My "rig" recommendation would be the Asus X99-E WS setup with a cheap Haswell Xeon.
Used, if you "hunt" for them, the boards should go towards $200, and you get 7 x16 slots; it is one of the few single-socket boards with about 80 electrical lanes (the CPU's 40, doubled with PLX chips).
And the BIOS is better than that of the cheaper Fujitsus and should support more things.
 
Last edited:

zir_blazer

Active Member
PCIe un-hotplug shouldn't need any hardware-specific "buttons" to tell the system that the card will be removed soon.
At least in my opinion.
Being able to power down the card is what makes the procedure safe. There is a reason why enterprise gear has the swap button to begin with, unless you want to occasionally kill a card.

May want to read this; check "Surprise Removal". The PCIe Hotplug specification seems to include such a button.