Supermicro X9DRX+-F and multi-GPU, PCIe power concerns, risers, etc.

gsrcrxsi

Member
Dec 12, 2018
53
3
8
Hello all, first post here on STH, but I've lurked quite a bit.

I participate in the SETI@home project, and I'm doing some brainstorming on a possible new build. So I would like to ask your opinions on some setups. I apologize if this belongs in another forum, but I am not sure which one would be a better fit.

Anyway, I have built a number of multi-GPU setups for SETI, most with only 2-3 GPUs, which raise little concern as far as PCIe power goes. I have another system with a normal consumer motherboard, 7 GPUs, and "mining"-style USB risers, which provide external power, so no power issue there either.

I'm thinking of a new 10-GPU setup using the Supermicro X9DRX+-F motherboard. It most likely would not go in a normal chassis, but rather an open-air setup. The reasons for this choice are severalfold. First, it supplies 10x PCIe 3.0 x8 slots. While SETI can run OK with just a single PCIe 3.0 lane (as on my 7-GPU host), there is a slight slowdown versus having 8-16 lanes. The other thing I need to consider is that for best performance, I need at least one CPU thread per GPU. My 7-GPU host runs an i7-7700K with 8 threads, so even if I could attach more GPUs, performance would suffer past 8 GPUs. With the X9DRX, I would likely use something like 2x E5-2637 v2 for 16 threads and all the PCIe lanes.
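As a rough sanity check on the lane and thread budget above (a sketch; the 40-lanes-per-CPU and 4-core/8-thread figures are the published specs for the E5-2637 v2, not from the post):

```python
# Rough budget check for the planned X9DRX+-F build.
gpus = 10
lanes_per_slot = 8       # the board exposes 10x PCIe 3.0 x8 slots
lanes_per_cpu = 40       # each E5-2637 v2 provides 40 PCIe 3.0 lanes
threads_per_cpu = 8      # 4 cores / 8 threads per E5-2637 v2

lanes_needed = gpus * lanes_per_slot
lanes_available = 2 * lanes_per_cpu
threads_available = 2 * threads_per_cpu

print(f"lanes: need {lanes_needed}, have {lanes_available}")    # 80 vs 80
print(f"threads: need {gpus}, have {threads_available}")        # 10 vs 16
```

So dual E5-2637 v2 chips exactly cover the 80 lanes the slots want, with threads to spare.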

My conundrum is as follows. I can't find any quality PCIe risers that provide BOTH external power and 8 PCIe lanes. All of the risers that provide adequate power carry only one shielded PCIe lane (via USB cable), and all the shielded risers that carry 8 lanes have no external power. I have found only ONE riser that seems to do both, but it is unshielded (not sure it would safely handle 3.0 signals) and of questionable quality. See here: PCI-Express PCI-E 8X to 16X Riser Card Flexible Ribbon Extender Cable w/Molex + Solid Capacitor

Second option: unpowered risers, which I could get, letting all the cards pull their power from the PCIe slots. I have concerns that this board cannot supply enough power for 10 GPUs that way. The motherboard does provide 2x 8-pin CPU power connectors as well as an additional 4-pin power connection, but from my reading, none of these feed the PCIe power plane; they are only for CPU power. Can anyone confirm that? The only connector that "might" is that lone 4-pin connector, but the documentation doesn't really say where that power goes. I could also inject power into the PCIe plane via the 11th slot with something like the EVGA Power Boost, but again I don't know if this would supply enough extra power to satisfy the potential power requirements. See here: EVGA Power Boost
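For a rough sense of the concern: the PCIe CEM spec allows a graphics card to draw up to 75 W from the slot itself (the rest comes via 6/8-pin cables), so in the worst case the board would have to route:

```python
# Worst-case power drawn through the motherboard's PCIe slots,
# assuming each GPU pulls the full 75 W the CEM spec allows from
# the slot; actual slot draw varies by card.
gpus = 10
slot_watts = 75

total_slot_watts = gpus * slot_watts
print(total_slot_watts)  # 750
```

750 W through the board's power planes is well beyond what a typical motherboard is designed to deliver to its slots, which is why risers with external power are attractive here.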

My ideal option would be a shielded x8 PCIe riser WITH a power connection, but it doesn't look like one exists. If someone knows of one, can you link to it?

thanks :)
 

gsrcrxsi

So I know this is a little old, but since no one replied, I bit the bullet and bought one of these boards anyway.

I could not find any shielded PCIe risers that had a power connection on the riser. So I am running 8 GPUs with 8 lanes to the board (getting power from the board), as well as 2 GPUs on USB risers with external power connections.

Powering the GPUs are 2x 1200W HP server PSUs; powering the motherboard, the 2 GPU risers, and peripherals is a Corsair 1000W.

It runs fine and has been running for about 6 months, but I wish shielded ribbon risers like these existed with external power connections. I am using the previously mentioned EVGA power injector in the 11th PCIe slot.

pics: [images attached]
 

ari2asem

Active Member
Dec 26, 2018
566
94
28
The Netherlands, Groningen
I have built similar systems for gpugrid.net and F@H.

I am using x16-to-x16 PCIe risers from Li-Heat.

What kind of risers are you using? Because your SM board has x8 slots.

This is my very first build (just a noob build to gather more experience for more upcoming builds).

Imgur

4 GPUs, ASRock X399 Threadripper Taichi, using 1 regular ATX PSU of 1200 W, and Li-Heat x16 PCIe risers.

Right now I have all the hardware to put the rigs together in September.


Li-Heat PCI-E Gen 3.0 Ribbon flexible Riser Cable - v2 - Black

Very pricey, but good quality.
 

gsrcrxsi

I have built similar systems for gpugrid.net and F@H.

I am using x16-to-x16 PCIe risers from Li-Heat.

What kind of risers are you using? Because your SM board has x8 slots.

This is my very first build

Imgur

4 GPUs, ASRock X399 Threadripper Taichi, using 1 regular ATX PSU of 1200 W, and Li-Heat x16 PCIe risers.
I am using these risers:
EZDIY-FAB New PCI Express PCIe3.0 16x Flexible Cable Card Extension Port Adapter High Speed Riser Card (20cm 180 Degree)-Upgrade Version https://www.amazon.com/dp/B07K9SRKCT/ref=cm_sw_r_cp_api_i_hPivDbZJT9NQZ

But I had to exchange 2 or 3 of them due to defects causing instability. Once you get ones that work, they work fine.

I'm aware that my board has only x8 slots. I use an additional x16-to-x8 adapter. It was cheaper to do this than to buy shielded x16-to-x8 risers, which ran about $85 each. $25 for a standard x16 riser plus a $2 size adapter was obviously cheaper.

 

gsrcrxsi

Yes, I saw your pics. Did you see mine? You can see that the cards are secured to the frame above the motherboard. It's basically the same open-rack type setup that you have. The risers come straight down.

I got the adapters on eBay.
 

ari2asem

As far as I can see, your cards are not supported at their back side; their only support point is the front side (video-output side), some with screws, some with zip ties.

By supporting the cards I mean the cards resting on something, like a metal bar.
 

gsrcrxsi

As far as I can see, your cards are not supported at their back side; their only support point is the front side (video-output side), some with screws, some with zip ties.
I don't know of anyone who would refer to the I/O side of a GPU as the "front". That's the back in my opinion, since in a normal tower that would be the back side of the case.

But now I understand your question is really about the front of the GPU. And yes, it is supported. There is a bar running underneath that the front of the GPU rests on.

You can see the bar here: [image attached]
Basically the same as what you are doing in your pics.
 

ari2asem

Now I see it. Thanks for the picture.

How about the noise of the 2 HP server PSUs?

Can you put your rig in the room next to your bedroom and sleep well? Or can you hear it from the next room?
 

gsrcrxsi

No problem at all. They are very quiet. The 5x 2000 RPM Noctua iPPC fans on the front are louder than the PSUs.

These are HP 1200 W PSUs, 80 Plus Platinum rated, but you only get the full 1200 W with >208 V input; I am using 240 V. So with 5x RTX 2070 on each, it's not really stressing them: each one is only pushing about 700-800 W.

If you run them at 120 V, they get derated to only 900 W, and then the fans might run faster.
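Using the poster's own figures, a quick headroom check (the 700-800 W load is the draw stated above; the ratings are from the post):

```python
# PSU headroom check using the figures from this post.
psu_rating_240v = 1200   # W, full rating above ~208 V input
psu_rating_120v = 900    # W, derated rating at 120 V
load_per_psu = 800       # W, upper end of the stated 700-800 W draw

headroom_240v = psu_rating_240v - load_per_psu
headroom_120v = psu_rating_120v - load_per_psu
print(headroom_240v, headroom_120v)  # 400 100
```

At 240 V each PSU keeps a comfortable ~400 W of headroom; at 120 V the margin shrinks to ~100 W, which is why the fans would have to work harder.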

But right now it's pretty quiet. Noise isn't really a concern for me anyway; I have other servers in the same room that are much louder.
 

farid

New Member
Mar 20, 2020
5
1
1
I am planning to put ~8 GTX 1080 Ti cards on the X9DRX. Do I really need to add extra power to the PCIe slots? I'm thinking of using the mining-style PCIe adapters.

If I use the mining-style adapters, will I notice a difference in speed? I'm also into F@H-type projects.
 

ari2asem

I am planning to put ~8 GTX 1080 Ti cards on the X9DRX. Do I really need to add extra power to the PCIe slots?
Yes, you need extra power for that many GPUs.


I'm thinking of using the mining-style PCIe adapters.

If I use the mining-style adapters, will I notice a difference in speed? I'm also into F@H-type projects.
For F@H you need at least PCIe 3.0 x4 (4 lanes). When using mining-style PCIe adapters you are bottlenecking the GPU computation, because mining adapters are 1 lane.
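For rough context on why one lane is such a bottleneck: PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding, roughly 0.985 GB/s of usable bandwidth per lane before protocol overhead. A quick sketch:

```python
# Approximate usable PCIe 3.0 bandwidth per link width.
# 8 GT/s per lane, 128b/130b encoding -> ~0.985 GB/s per lane
# (protocol overhead not accounted for).
gb_per_lane = 8 * 128 / 130 / 8   # GB/s per lane

for lanes in (1, 4, 8, 16):
    print(f"x{lanes}: ~{lanes * gb_per_lane:.1f} GB/s")
```

A x1 mining riser gives a compute workload under 1 GB/s of host-to-GPU bandwidth, versus ~4 GB/s at the x4 minimum suggested here and ~8 GB/s at the x8 the X9DRX slots provide.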
 

farid

Yes, you need extra power for that many GPUs.


For F@H you need at least PCIe 3.0 x4 (4 lanes). When using mining-style PCIe adapters you are bottlenecking the GPU computation, because mining adapters are 1 lane.
Can you provide link(s) for the adapters?
 

ari2asem

I have this, but it's not in use, because I don't need it: all my mainboards have extra PCIe power connectors.

https://www.amazon.com/EVGA-Power-Booster-Black-100-MB-PB01-BR/dp/B005OTXUYU

EVGA Power Boost

My EPYC build has 5x RTX 2080 Ti cards, with no extra PCIe power connector on the Supermicro mainboard. I am using this build for F@H. The GPU cards are undervolted by 50% with MSI Afterburner, so each GPU card uses about 125-135 W, without an extra PCIe power connector.

I would say try this method:

- build your rig as usual, without extra PCIe power
- underclock/undervolt all your GPU cards by 50% with MSI Afterburner
- play with MSI Afterburner for a while (adjust fan speeds, set memory speed to -500, minus 500)
- run F@H for a couple of hours

Is your system unstable? Do you get spontaneous reboots?

Then use extra PCIe power.

But don't use this kind of PCIe power:

PCI-E 1x to 16x powered Riser Card Mining / Rendering Kit Pro - SATA/USB3.0 - 60cm

This is a huge bottleneck for F@H because of the 1-lane PCIe connection to your mainboard.
 

farid

Ah, if undervolted, then I don't think extra power is needed, because the 8-pin provides 150 W each. But in my case I need full power, so I won't be going that route.

Anyway, thanks for the tips. I'll try the EVGA Power Boost. I'll look more at the options I have, but I'm planning to get the X9DRX, so I'll need some sort of PCIe extender/riser/adapter. Probably I'll go with what gsrcrxsi has tried.