Time for an upgrade - LGA 2011, LGA 3647, or EPYC?


ColPanic

Member
Feb 14, 2016
130
23
18
ATX
I'm considering upgrading my home/lab server, but I haven't been paying much attention to the used enterprise gear market lately, so I'm not sure what's cost (and power) effective anymore.
I run ESXi 6.5 with 6-8 VMs (Windows Server, TrueNAS, Ubuntu Server, etc.) plus whatever I'm messing with at the time.
My current system is 4-5 years old and does everything I need, but I'd like to try some new stuff with more cores and memory, and get a GPU in there.

Current system:
Supermicro X10SRH-CF, 8-core Xeon, and 64 GB RAM, plus several HBAs passed through and a 10Gb NIC.

Just looking on eBay, it looks like if I stick with the same mobo I can upgrade to a 16- or 18-core CPU and double the memory for maybe $500, but I was also looking at some of the newer platforms, specifically AMD EPYC and LGA 3647, and there seem to be some pretty good deals there. I already have a nice quiet 4U chassis that I intend to keep.

Changing platforms would also give me the chance to move to a different hypervisor and since VMware has dropped support for most of my hardware I’ve been wanting to try out XCP-ng.

Any tips? Thanks in advance.
 

TLN

Active Member
Feb 26, 2016
523
84
28
34
Unless you need more memory and are out of slots, I'd stick with the existing one: it's modern (DDR4, etc.) and pretty fast for a home lab. A 16-20 core Xeon will do just fine IMHO.
 

Spartus

Active Member
Mar 28, 2012
323
121
43
Toronto, Canada
I think EPYC is great personally. Have a bunch, but some Ice Lake Xeons too. First-gen EPYC is actually still pretty good for small-domain virtualisation and can be dirt cheap. Not so great if you want big VMs. I'm about to flip a 16-core EPYC I grabbed for temporary testing if you want :p
 

ColPanic

Member
Feb 14, 2016
130
23
18
ATX
Thanks. I still have 4 empty DIMM slots. I think the main advantage to upgrading would be PCIe lanes and maybe power efficiency. If I stick with the same mobo and get an E5-2697 v4 (which is less than $200 on eBay) and add a GPU, I'll be short one PCIe slot for an HBA. Here's how the slots shake out (with a quick link-width check sketched after the list):
  • 1 PCI-E 3.0 x8 (in x16 slot) - GPU
  • 2 PCI-E 3.0 x8 - one used for NVMe, one blocked by the GPU
  • 1 PCI-E 3.0 x4 (in x8 slot) - 10Gb NIC
  • 1 PCI-E 2.0 x2 (in x4 slot) - HBA
  • 1 PCI-E 2.0 x4 (in x8 slot) - HBA
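If it helps anyone checking a similar layout, here's a minimal sketch (assuming a Linux guest or live environment with sysfs; the attribute names are standard, the filtering is just my illustration) that lists each PCIe device's negotiated link width next to its maximum:

```python
#!/usr/bin/env python3
"""Sketch: list negotiated vs. maximum PCIe link width/speed per device.

Assumes a Linux environment with sysfs mounted (e.g. a live USB on the host
or a VM with the device passed through). Read-only.
"""
from pathlib import Path

def read_attr(dev: Path, name: str) -> str:
    try:
        return (dev / name).read_text().strip()
    except OSError:
        return ""  # attribute missing (e.g. legacy PCI or host bridge)

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    width = read_attr(dev, "current_link_width")
    if not width:
        continue
    print(f"{dev.name}: x{width} (max x{read_attr(dev, 'max_link_width')}) "
          f"@ {read_attr(dev, 'current_link_speed')}")
```

Comparing current vs. max width makes it easy to spot a card that trained at x4 in a physical x8 slot.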
 

zer0sum

Well-Known Member
Mar 8, 2013
850
475
63
Can you use a single-slot GPU?
If it's for transcoding, something like an Nvidia P400 works perfectly.

You could also use a Synology E10M20-T1 card that combines a 10G NIC with an NVMe drive :)
 

Stephan

Well-Known Member
Apr 21, 2017
945
714
93
Germany
Or wait for hyperscalers to retire 2nd gen Xeon Scalable Platinums in 2-3 years in larger quantities... But by then 3647 boards will be even scarcer than today.
 

zer0sum

Well-Known Member
Mar 8, 2013
850
475
63
Spartus said:
I think EPYC is great personally. Have a bunch, but some Ice Lake Xeons too. First-gen EPYC is actually still pretty good for small-domain virtualisation and can be dirt cheap. Not so great if you want big VMs.

Did you run into many/any issues with NUMA nodes?
 

Spartus

Active Member
Mar 28, 2012
323
121
43
Toronto, Canada
"issues"... eh, not any show stopping errors but...
I can't seem to get full performance out of any generation for my high bandwidth and latency sensitive codes (CFD etc). Presumably the NUMA and virtualization layering is causing a penalty, but I find any modest core count VMs that are not running latency sensitive work to be perfectly fine. I have done gaming with GPU passthrough etc and it works great too.
 

nk215

Active Member
Oct 6, 2015
412
143
43
50
If you run out of PCIe slots for HBAs, you can always try passing through the onboard SATA (in place of an HBA). If that doesn't work, you can try using RDM. If you don't like RDM, you can look into a PCIe extension cable so that you can move the GPU out of the way.
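If you do go the RDM route, the mapping file gets created with vmkfstools on the ESXi host; here's a rough sketch of a wrapper (the -z physical-mode option is standard vmkfstools, but the helper itself, the device ID, and the datastore path are placeholders I made up):

```python
#!/usr/bin/env python3
"""Sketch: create a physical-mode RDM mapping file on an ESXi host.

Hypothetical wrapper around vmkfstools (shipped with ESXi); the device
identifier and datastore path below are placeholders, not real values.
"""
import subprocess

def create_physical_rdm(device_id: str, rdm_vmdk: str) -> None:
    # vmkfstools -z maps the raw device into a passthrough (physical) RDM
    subprocess.run(
        ["vmkfstools", "-z", f"/vmfs/devices/disks/{device_id}", rdm_vmdk],
        check=True,
    )

if __name__ == "__main__":
    create_physical_rdm(
        "naa.xxxxxxxxxxxxxxxx",                             # placeholder disk ID
        "/vmfs/volumes/datastore1/somevm/somevm-rdm.vmdk",  # placeholder path
    )
```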
 

ColPanic

Member
Feb 14, 2016
130
23
18
ATX
Well… I went ahead and upgraded to an E5-2697 v4. 18 cores for $175! I also doubled the memory. The new CPU and memory all showed up and it booted up no problem (I also blew out about 2 lbs of dust). But now all my datastores are missing. Could the CPU change have messed up which HBAs are passed through?

I have the onboard LSI 3008 passed through,
an LSI 2008 that is not passed through,
and an HP H220 (which is a 2008) that is passed through.

I can see all of the HBAs in ESXi but none of the drives. It's the strangest thing. Power to the drives is good. I don't get it.

My motherboard is on BIOS v2.0, which is what's required for v4 Xeons, and I don't like messing with things that aren't broken, but there is a version 3.4 available.
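For reference, the two esxcli listings I've been comparing are adapters vs. devices; a tiny sketch (assuming SSH to the ESXi host, which ships a Python interpreter; the two commands are standard, the wrapper is just mine):

```python
#!/usr/bin/env python3
"""Sketch: list storage adapters and the devices behind them on ESXi.

Assumes it runs on the ESXi host over SSH; read-only. Adapters showing up
with an empty device list points at drives, cabling, or power rather than
the passthrough config.
"""
import subprocess

def esxcli(*args: str) -> str:
    return subprocess.run(
        ["esxcli", *args], capture_output=True, text=True, check=True
    ).stdout

print(esxcli("storage", "core", "adapter", "list"))  # vmhba* the host sees
print(esxcli("storage", "core", "device", "list"))   # disks/LUNs behind them
```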
 

ColPanic

Member
Feb 14, 2016
130
23
18
ATX
It was a bad drive. One bad SAS drive killed whatever HBA it was connected to. Once I isolated and removed it, everything else came right back up. I'm going to dig a hole in the backyard and bury it.

It's an older HGST 400GB SAS SSD. I have several of them and they are great drives with ridiculous durability, but apparently they'll take down the whole controller when they go tits up.
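If anyone else hits this, a quick way to finger the culprit without pulling drives one at a time is to poll each disk's health individually; a sketch (assuming a Linux or TrueNAS shell with smartmontools installed; the timeout heuristic is mine, not a guarantee):

```python
#!/usr/bin/env python3
"""Sketch: per-drive SMART health poll to isolate a disk that hangs the HBA.

Assumes smartmontools is installed and this runs as root on a Linux or
TrueNAS shell. A drive that wedges the controller tends to time out or
return an error on its own query while the healthy ones answer normally.
"""
import glob
import subprocess

for dev in sorted(glob.glob("/dev/sd?")):
    try:
        result = subprocess.run(
            ["smartctl", "-H", dev],  # health summary only
            capture_output=True, text=True, timeout=30,
        )
        status = "OK" if result.returncode == 0 else f"rc={result.returncode}"
    except subprocess.TimeoutExpired:
        status = "TIMEOUT (suspect)"
    print(f"{dev}: {status}")
```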