HEDT platform advice


mikeone

New Member
Mar 17, 2024
Hi All,

I am currently running Proxmox on an AM4 platform with the config below:

ASUS X570 Creator WiFi
AMD Ryzen 9 5950X
Thermalright Assassin RGB cooler
4x32GB DDR4-3600 (128GB)
2x RTX 3090 Founders Edition
3x Samsung 990 Pro 4TB NVMe SSD
FSP Fortron Hydro Ti Pro 1000W (ATX 3.0, 80+ Titanium)
Fractal Design Torrent Compact case

Unfortunately I am bandwidth limited, and GPU performance drops when both of my gaming VMs are in use.

My goal is to build a hybrid gaming/workstation with the use cases below.

2x Windows 11 Pro gaming VMs, each with an RTX 3090 passed through (4K gaming)
1x TrueNAS or Unraid VM (haven't decided yet) with HBA and 10Gb NIC passthrough
1x macOS VM with GPU passthrough
1x Frigate VM with GPU and Coral passthrough
1x Home Assistant VM with USB passthrough
1x Docker VM with a couple of self-hosted services
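
For reference, passthrough planning mostly comes down to IOMMU grouping: each device I pass through should ideally sit in its own group. A minimal sketch to dump the groups from sysfs on the Proxmox host:

```python
#!/usr/bin/env python3
# Dump IOMMU groups and the PCI devices in each (run on the Proxmox host).
# An empty /sys/kernel/iommu_groups means the IOMMU is off in BIOS/kernel.
from pathlib import Path

groups = Path("/sys/kernel/iommu_groups")
for group in sorted(groups.iterdir(), key=lambda p: int(p.name)):
    devices = sorted(d.name for d in (group / "devices").iterdir())
    print(f"IOMMU group {group.name}: {', '.join(devices)}")
```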

I am leaning toward TRX50 + a 7960X for the moment, but I still haven't found the perfect motherboard.
The ideal board should support:
Must have:
- 3 GPUs at full bandwidth (2x RTX 3090 + 5600 XT)
- onboard 10Gb NIC + room for an additional 10Gb SFP+ NIC
- 1 HBA card
- 3 NVMe SSDs
Nice to have:
- IPMI support
- TB4 or USB4 with DP Alt Mode

And finally the case. This is a big one, as I intend to rack-mount the workstation, so my ideal case would house everything, plus a 360mm radiator for the CPU cooling and at least 8 hot-swappable 3.5" HDD bays.

Don't hesitate to share your insights about this setup and/or propose more suitable hardware.

Thank you all
 

Tech Junky

Active Member
Oct 26, 2023
I would probably split the gaming off to another board. There are some cases that have room inside for both an ATX and an ITX board.

The hot-swap requirement might end up being its own case; there are some that can do it, and IIRC there's one that holds 2.5" drives.

For TB, Gigabyte makes a card that routinely sells used on Amazon for $60.

Just some ideas.
 

i386

Well-Known Member
Mar 18, 2016
2x 3090 = 6 PCIe slots taken (unless you have the rare dual-slot models or can mount the GPUs via risers)
 

mattventura

Active Member
Nov 9, 2022
Some things to consider:

How are you planning to pass through 3 GPUs to 4 VMs? Would a vGPU-type solution work better for your use case (at least for anything less graphically intensive)?

Same question for NICs: do you absolutely need separate NICs, or would SR-IOV work better?
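
If the NIC supports it, carving out VFs is just a sysfs write; a minimal sketch (eth0 is a placeholder for your 10Gb interface, and the NIC/driver must support SR-IOV):

```python
#!/usr/bin/env python3
# Create SR-IOV virtual functions on a capable NIC via sysfs (needs root).
# "eth0" is a placeholder; substitute your 10Gb interface name.
import sys
from pathlib import Path

iface = sys.argv[1] if len(sys.argv) > 1 else "eth0"
dev = Path(f"/sys/class/net/{iface}/device")

total = int((dev / "sriov_totalvfs").read_text())  # max VFs the hardware supports
print(f"{iface}: hardware supports up to {total} VFs")

# The kernel rejects changing a nonzero VF count directly, so reset to 0 first,
# then ask the driver for 4 VFs.
(dev / "sriov_numvfs").write_text("0")
(dev / "sriov_numvfs").write_text("4")
```

Each VF then shows up as its own PCI function that you can pass to the NAS VM while the host keeps the physical port.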

Realistically, there just aren't enough PCIe slots in a typical case to hold all of that. You'll either have to figure out something else for the GPUs or condense all the other stuff. Or just go with two machines.

Also, I'd say go for TrueNAS Scale over Unraid.
 

mikeone

New Member
Mar 17, 2024
Thank you all for your replies,

@Tech Junky for the moment I'd like to have only one workstation housing everything, in a rack-mountable Sliger CX4712; it has all the needed features.

@i386 indeed, I intend to use riser cables if I can't fit everything.

@mattventura indeed, I only use 3 GPUs; the macOS VM would be powered on only occasionally, so that is not an issue. As for the NIC, I am not sure: would performance be the same without a dedicated NIC for the NAS OS?

P.S.: I already run TrueNAS, and although it's great on the storage side, its implementation of Kubernetes pods is horrible. I want to host the .arr suite, and I don't want permissions issues, so I need to house the storage and the containers on the same OS. I might give Unraid a go this time.
 

mikeone

New Member
Mar 17, 2024
Because the FPS drops a lot and both machines become laggy, but when I stop one of the VMs the other runs fine.
 

mattventura

Active Member
Nov 9, 2022
Because the FPS drops a lot and both machines become laggy, but when I stop one of the VMs the other runs fine.
Are you sure that's related to PCIe bandwidth? Unless you're running the GPUs via a switch chip, they probably both have x8 at all times, even when one VM is shut down. That sounds like more of a load issue.
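
Easy enough to verify: the negotiated link state is exposed in sysfs, so compare current vs. max while both VMs are loaded. A minimal sketch (the default addresses are placeholders; grab your GPUs' addresses from lspci):

```python
#!/usr/bin/env python3
# Print negotiated vs. maximum PCIe link speed/width for the given devices.
# The defaults are placeholders; find yours with `lspci | grep -i nvidia`.
import sys
from pathlib import Path

for addr in sys.argv[1:] or ["0000:01:00.0", "0000:02:00.0"]:
    dev = Path(f"/sys/bus/pci/devices/{addr}")
    read = lambda name: (dev / name).read_text().strip()
    print(f"{addr}: x{read('current_link_width')} @ {read('current_link_speed')} "
          f"(max x{read('max_link_width')} @ {read('max_link_speed')})")
```

Keep in mind GPUs drop the link to a lower speed at idle to save power, so only the under-load reading tells you anything.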
 

mikeone

New Member
Mar 17, 2024
I am not sure about the bandwidth, to be honest, but I suspect it, since there are clearly not enough PCIe lanes: I am currently running 3 GPUs + 3 NVMe SSDs. The GPUs are probably choking at x4, which is not enough bandwidth, on top of the fact that some of those PCIe lanes reach the CPU through the chipset and are probably sharing bandwidth with other devices.

P.S.: Based on the motherboard manual, everything is running at x4, including the 3 GPUs and the 3 NVMe drives.
 

Tech Junky

Active Member
Oct 26, 2023
That's the big pinch point for power users even today on X670. Hopefully on X880 they'll add 4 more lanes to at least match Intel's spec. It would be even better if they could bump it to x16, but even x8 Gen4/5 would be a great boost for anything beyond the primary use cases of most run-of-the-mill consumers.

This is where picking the right board comes into play, though. If you can get a locked x16 + x4/x4 layout for the GPU + 2x M.2, you should be able to pull off something a bit more intensive on the data/GPU side. The problem is that not many boards lock the bandwidth allocations; most auto-split them when additional cards are added. The only board I found for the money was the ASRock PG Lightning, and I snagged one for $160 on Amazon.

I was tempted to go all out and bump to the next level until I saw the prices hitting ~$2K just for the CPU/mobo. For a quarter of that I went consumer with a 7900X. I had some plans, but then decided to add a GPU to transcode files quicker and more efficiently than letting the CPU do it. Life would be better with unlockable lane assignments, all 4 slots at x16 electrically, and more control; that's the tempting part of going up to TR. The downside is that even though you get 7-8 slots, the spacing won't allow for the huge GPUs while leaving slots open, if that's the intent. You could get into mining-rig territory with riser cables and just mount the GPUs in a rack on their own, recovering the real estate to use the additional slots. It's likely not going to be pretty, nor cheap.
 

mikeone

New Member
Mar 17, 2024
The downside is that even though you get 7-8 slots, the spacing won't allow for the huge GPUs while leaving slots open, if that's the intent. You could get into mining-rig territory with riser cables and just mount the GPUs in a rack on their own, recovering the real estate to use the additional slots. It's likely not going to be pretty, nor cheap.
Indeed, so the idea now is to use only 2 GPUs + 1 HBA card + NIC + a TB4 card on a riser cable.
I had around 30 cards across 3 different rigs for ETH mining; I don't mind running things like that while benching and testing, but I would prefer a cleanly installed machine in my rack.
 

mikeone

New Member
Mar 17, 2024
I also found the ASUS USB4 PCIe card; hopefully it will work with my Corning active optical cable to link to my Thunderbolt dock.
 

Tech Junky

Active Member
Oct 26, 2023
ASUS USB4 PCIe card
They tend to be the odd man out when it comes to cards. Their TB card has an odd cable harness compared to the standard ones, as they used a 19-pin cable where others use a 5-pin header plus USB and power. Hopefully the one you found has the ASMedia controller and not the Intel one.
 

Sean Ho

seanho.com
Nov 19, 2019
I echo the sentiment that there is just not enough information yet to know whether your bottleneck is PCIe bandwidth to the GPUs, raw compute for the games, NUMA/cache issues with the two CCDs in the 5950X, or even I/O wait from the other workloads co-located on the system.
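
If you want to rule the CCD split in or out, the L3 domains are visible in sysfs. A minimal sketch that groups logical CPUs by shared L3; on a 5950X it should print two 16-thread sets, and pinning each gaming VM inside one set avoids cross-CCD traffic:

```python
#!/usr/bin/env python3
# Group logical CPUs by shared L3 cache; on Zen 3 each group is one CCD.
from pathlib import Path

l3_domains = set()
for cpu in Path("/sys/devices/system/cpu").glob("cpu[0-9]*"):
    for idx in (cpu / "cache").glob("index*"):
        if (idx / "level").read_text().strip() == "3":
            l3_domains.add((idx / "shared_cpu_list").read_text().strip())

for i, cpus in enumerate(sorted(l3_domains)):
    print(f"L3 domain {i}: CPUs {cpus}")
```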
 

TLN

Active Member
Feb 26, 2016
I would split the main gaming system from the rest of them. I doubt macOS will be fast with passthrough and so on, so I'd ignore the performance aspect there.
For me it would be: a Xeon with lots of memory and PCIe slots for everything virtualized, plus a personal gaming/workstation rig.

I'm running a 5800X3D/4090, plus a 2600 v3 with 128GB RAM and a 3060 Ti passed through for VMs. My board has integrated 10G and SAS, so I was able to fit everything in a compact case.