Stuck and in need of help - here's my use case

Jun 3, 2020
Hello community,

First of all, mods, if this is the wrong section, please move my post to the appropriate one - it seemed like the best place to post this.

So, I'm stuck and don't know what to do.

I'm between AM4, TR4, or modifying a server to suit my needs.

I'm a programmer who works in InfoSec, and in my off hours I tinker with race car simulations and autonomous cars.

I like sim racing, not competitively, but to test out my creations and have fun.

I'm looking to spec a build for running my R&D VMs for Assetto Corsa and rF2, as well as the games themselves.

I need PCI-e lanes, and the Ryzen series CPUs offer 20 or 24 IIRC, which means in theory I could run 3 GPUs at x8/x8/x8. Seeing as the cards would only get half the bandwidth, Ryzen isn't really much of an option.

Now Threadripper, looking at the 1st gen stuff, offers a good balance of cores, speed, and enough PCI-e lanes to give full bandwidth to all 3 GPUs. It's a 3-year-old CPU and I know the latest stuff beats it, but: PCI-e lanes.

Now, I'm not actually running 3 GPUs. I have a Tesla K80 that was gifted to me by a friend sitting on the shelf, and I really want to incorporate it into my home lab for my ML needs. The second card would be something along the lines of a 10- or 20-series NVIDIA card for the AC/rF2 use case, and for the 3rd slot, an IB (InfiniBand) card so I can connect the rig to my IB network.
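(Side note, mostly for my own sanity checking: since the K80 is really two GK210 GPUs on one board, I'd expect it to enumerate as two CUDA devices once everything works. The quick check I have in mind inside the ML VM is something like this - PyTorch assumed purely as an example:)

Code:
# Quick sanity check for the ML VM once the K80 is passed through.
# The K80 is a dual-GK210 board, so it should appear as two CUDA devices.
import torch

if not torch.cuda.is_available():
    raise SystemExit("No CUDA devices visible - driver/passthrough not set up yet")

for idx in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(idx)
    print(f"cuda:{idx}  {props.name}  {props.total_memory / 2**30:.1f} GiB")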

I've found a C4130 with 2 E5-2620 v3s going for 750 GBP, but I don't know what kind of performance I'd get out of it considering the games I want to run on it.

I've tallied up a TR4 build for about the same money.

Do I lock myself into a TR4 build knowing my upgrade path ends at 2nd-gen Threadripper?
Do I buy the C4130 with the 2620 v3s and work my way up from there?
What are my alternative options here?

As far as modifying a server goes: yes, I know they were not meant to be gamed on, and I'm not looking to turn it into a gaming rig. Gaming is an R&D platform in my case, and while I do have the option of running headless simulations, I'd love to be able to load up a session and just drive too.

What do I do? Discussion and opinions are greatly appreciated, and if the info I've provided isn't enough, please ask away.

Thank you!
 

hmw

Active Member
Apr 29, 2019
If you will be running ESXi then EPYC is a good choice since it is supported. Threadripper and Ryzen are NOT officially supported by VMware.

If you're running Proxmox/KVM/QEMU then you can go with either Threadripper or Ryzen. Or Intel E5 v3.

The Ryzen/Threadripper/Intel consumer CPUs are often much faster than their EPYC/Xeon counterparts since they can turbo to far higher clock speeds. Where EPYC shines is its support for 8-channel RAM and PCIe lanes. I've seen SR-IOV fail miserably on Ryzen or Threadripper because the motherboard vendors don't implement ARI support (and they can't be expected to support it on consumer motherboards). And if you will be using PCIe passthrough for any of your GPUs & network cards, it's far better to go with known working components than to experiment. Again, I've seen passthrough simply not work with certain Intel/Supermicro or Ryzen CPUs and motherboards.
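If you want to sanity-check a particular board before committing, one quick test is to boot a live Linux image and dump the IOMMU groups - the GPU or NIC you intend to pass through should sit in its own group (or share it only with its own functions, e.g. the GPU's audio device). A minimal sketch that just reads sysfs, nothing hypervisor-specific:

Code:
#!/usr/bin/env python3
# Minimal sketch: list every IOMMU group and the PCI devices inside it.
# Poorly isolated groups (lots of unrelated devices lumped together) are a
# red flag for PCIe passthrough on a given CPU/motherboard combination.
import os

IOMMU_ROOT = "/sys/kernel/iommu_groups"

if not os.path.isdir(IOMMU_ROOT):
    raise SystemExit("No IOMMU groups - IOMMU disabled in firmware or kernel?")

for group in sorted(os.listdir(IOMMU_ROOT), key=int):
    print(f"IOMMU group {group}:")
    for bdf in sorted(os.listdir(os.path.join(IOMMU_ROOT, group, "devices"))):
        with open(f"/sys/bus/pci/devices/{bdf}/vendor") as v, \
             open(f"/sys/bus/pci/devices/{bdf}/device") as d:
            print(f"  {bdf}  [{v.read().strip()}:{d.read().strip()}]")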

I have a GTX 1080 on my EPYC homelab server and run a Windows 10 VM so that I can game on it via Moonlight - nothing fancy, just Crysis and Black Mesa. It works well and gives me ~90% of the expected frame rate. I also run Linux & Windows VMs for ML stuff. My build has Mellanox NICs, so SR-IOV & passthrough working without problems were a priority.

My suggestion is to look at people's builds on the forum and also the deals/WTB/WTS section to get an idea of what you can get for your money. For me, thanks to some forum members, I was able to snag an EPYC 7302P CPU for less than the Threadripper/Ryzen equivalent, and I was able to get the Tyan EPYC motherboard for cheap. However, EPYC has a high idle power draw (~100W), so it might not be the best thing for rip-off UK electricity prices (or California prices, for that matter).

Going with 2nd-hand Dell servers is a good idea since you can check the Dell and STH forums for issues related to virtualization etc. As for the performance - most games are GPU-bound, not CPU-bound. But unless you're playing modern games designed to take advantage of multi-core, you will see a performance hit, since server CPUs are slower and server RAM is slower, with additional latency due to ECC etc. But the performance hit is something like 5-10%, which is a good trade-off for having AVX-512 support or the ability to put in 256GB of RAM etc. Not to mention the larger cache sizes of server CPUs (a larger cache actually gives you a bigger boost than more memory bandwidth, in many cases).

You haven't said anything about cooling, rack size, power etc - is this something you're not taking into consideration at all?
 

Rand__

Well-Known Member
Mar 6, 2014
I don't see the need for Epyc?

So you want 40 PCIe lanes (2×16 + 8) - that's something an E5 can do just fine on a single-CPU board. Get an E5-1630 or 1650 v4, whichever you need (more clock or more cores); if you run either games or ML, that should be sufficient.
I mean, of course you can go AMD, but there is no need to do so. Intel has plenty of (used) options that cover your needs.

Or did I miss something in your requirements that require AMD?
 

kapone

Well-Known Member
May 23, 2015
I don't see the need for Epyc?

So you want 40 PCIe lanes (2×16 + 8) - that's something an E5 can do just fine on a single-CPU board. Get an E5-1630 or 1650 v4, whichever you need (more clock or more cores); if you run either games or ML, that should be sufficient.
I mean, of course you can go AMD, but there is no need to do so. Intel has plenty of (used) options that cover your needs.

Or did I miss something in your requirements that require AMD?
+1. Hell, I'd even go for v2 Ivy Bridge. Dead cheap. An E5-2667 v2 (8C/16T) paired with an X9SRL-F...
 

hmw

Active Member
Apr 29, 2019
Was about to suggest a Supermicro X11SCL-F and a Xeon E-2200 - but that's 2× PCIe x8 and 1× PCIe x16.

Still - something worth considering, given that it has iKVM and is a server motherboard
 
Jun 3, 2020
If you will be running ESXi then EPYC is a good choice since it is supported. Threadripper and Ryzen are NOT officially supported by VMware.

If you're running Proxmox/KVM/QEMU then you can go with either Threadripper or Ryzen. Or Intel E5 v3.
At the moment I'm running Proxmox, although I've been contemplating reconfiguring my lab to go the OpenStack route - it seems fascinating and I'd love to make it work.

The Ryzen/Threadripper/Intel consumer CPUs are often much faster than their EPYC/Xeon counterparts since they can turbo to far higher clock speeds. Where EPYC shines is its support for 8-channel RAM and PCIe lanes. I've seen SR-IOV fail miserably on Ryzen or Threadripper because the motherboard vendors don't implement ARI support (and they can't be expected to support it on consumer motherboards). And if you will be using PCIe passthrough for any of your GPUs & network cards, it's far better to go with known working components than to experiment. Again, I've seen passthrough simply not work with certain Intel/Supermicro or Ryzen CPUs and motherboards.
I am looking to pass through the K80 - I know NVIDIA doesn't like virtualized machines using their GPUs, but there is a workaround, so that's not much of a concern. My main criterion is PCI-e lanes, but RAM capacity also helps - it would be nice to have the option to put more RAM in without facing issues.
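(In case it helps anyone reading later: the way I plan to sanity-check it on the Proxmox host is just to confirm the K80's PCI functions are bound to vfio-pci rather than the nvidia/nouveau driver before handing them to the VM. Rough sketch, nothing Proxmox-specific - it only reads sysfs:)

Code:
#!/usr/bin/env python3
# Rough sketch: show which kernel driver each NVIDIA PCI function is bound to.
# Devices destined for passthrough should report "vfio-pci" here, not
# "nvidia" or "nouveau". (The K80 shows up as two functions, one per GK210.)
import os

PCI_ROOT = "/sys/bus/pci/devices"
NVIDIA_VENDOR = "0x10de"

for bdf in sorted(os.listdir(PCI_ROOT)):
    with open(os.path.join(PCI_ROOT, bdf, "vendor")) as f:
        if f.read().strip() != NVIDIA_VENDOR:
            continue
    driver_link = os.path.join(PCI_ROOT, bdf, "driver")
    driver = os.path.basename(os.readlink(driver_link)) if os.path.islink(driver_link) else "(unbound)"
    print(f"{bdf}: driver = {driver}")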


I have a GTX 1080 on my EPYC homelab server and run a Windows 10 VM so that I can game on it via Moonlight - nothing fancy, just Crysis and Black Mesa. It works well and gives me ~90% of the expected frame rate. I also run Linux & Windows VMs for ML stuff. My build has Mellanox NICs, so SR-IOV & passthrough working without problems were a priority.
That's pretty awesome - what hypervisor do you run? I know free VMware/Citrix doesn't allow GPU passthrough; IIRC it's a paid feature. That's part of the reason I went with Proxmox in the first place.


However, EPYC has a high idle power draw ~ 100W
****ing hell :eek: Then again, I don't let my machines idle a lot - I turn them on for work, then power them back down again. They spend most of their operational life crunching numbers.

Going with 2nd-hand Dell servers is a good idea since you can check the Dell and STH forums for issues related to virtualization etc. As for the performance - most games are GPU-bound, not CPU-bound. But unless you're playing modern games designed to take advantage of multi-core, you will see a performance hit, since server CPUs are slower and server RAM is slower, with additional latency due to ECC etc. But the performance hit is something like 5-10%, which is a good trade-off for having AVX-512 support or the ability to put in 256GB of RAM etc. Not to mention the larger cache sizes of server CPUs (a larger cache actually gives you a bigger boost than more memory bandwidth, in many cases).
That's what I thought as well; community support plays a big role in my choice of hardware too. No, it's AC and rF2 and the occasional CS:GO - other than that, I'm not a big gamer, to be honest.

You haven't said anything about cooling, rack size, power etc - is this something you're not taking into consideration at all?
It's a 22U rack, which has cutouts for four cooling fans on top; I'm waiting for the fans to arrive so I can bolt them in. Inside the rack I have a C6100, an IS5030, and a 3Com gigabit switch. With the exception of the switches, I'm filling the rack bottom to top. I left a 2U space at the bottom for cold air to come in, and I'm planning on making some blank panels to slap over the empty Us to stop cold and hot air mixing, removing them as I add more gear to the thing. That's more or less my cooling arrangement. Power-wise, everything is plugged into wall outlets; I don't run them 24/7. I'm planning on getting a PDU and a UPS when the budget allows.

And I thought cars were money pits... xD
 
Jun 3, 2020
I don't see the need for Epyc?

So you want 40 PCIe lanes (2×16 + 8) - that's something an E5 can do just fine on a single-CPU board. Get an E5-1630 or 1650 v4, whichever you need (more clock or more cores); if you run either games or ML, that should be sufficient.
I mean, of course you can go AMD, but there is no need to do so. Intel has plenty of (used) options that cover your needs.

Or did I miss something in your requirements that require AMD?
For a consumer board I'd go AMD; the offerings are better on that side. I'm not a fanboy of either company - neither of them signs my paychecks, so I don't have a reason to side with either. I'll have a look at those 1630/1650s you're suggesting. The lad who is selling the C4130 will only shave off 20 quid for the box without the CPUs - what's your take on the 2620 v3s? Never played around with those before.
 

Rand__

Well-Known Member
Mar 6, 2014
selling the C4130 will only shave off 20 quid for the box without the CPUs - what's your take on the 2620 v3s? Never played around with those before.
Entry-level server CPUs... not usable for playing games.

I also run my gaming fully on ESXi boxes (one 1650 v4, one 6246, with Quadro GPUs and zero clients via VMware Horizon); works fine.

Just a matter of prioritization (cash vs heat vs space vs noise) ;)