Bought an EPYC 7532...Now what?


Zack Hehmann

Member
Feb 6, 2016
72
5
8
I just bought an EPYC 7532 from eBay for $325 (hopefully it was a good deal?). I'm not sure exactly what I plan on doing with it, but I know I'll be running Docker/Kubernetes, VMs, and misc. Linux tasks. I haven't decided on the OS yet: ESXi, bare-metal Linux, or Proxmox? I'm also considering making this a VFIO machine. This would free up my X570/3900X to become my daily driver again.
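If I go the VFIO route, the first sanity check would be the IOMMU grouping. Here is a minimal sketch, assuming a Linux host with the IOMMU enabled (e.g. amd_iommu=on iommu=pt on the kernel command line); group layout varies by board and BIOS:

```python
#!/usr/bin/env python3
# Minimal sketch: list IOMMU groups before attempting VFIO passthrough.
# Assumes the IOMMU is enabled in BIOS and on the kernel command line.
from pathlib import Path

groups = Path("/sys/kernel/iommu_groups")
if not groups.is_dir():
    raise SystemExit("no IOMMU groups -- check BIOS and kernel settings")

for group in sorted(groups.iterdir(), key=lambda p: int(p.name)):
    devices = sorted(d.name for d in (group / "devices").iterdir())
    print(f"group {group.name}: {', '.join(devices)}")
```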

I need help picking out a few components:
  1. MOBO
    1. H11SSL-NC, about $310 from China on eBay (Better port offerings than the H12SSL series? I think I'm OK with PCIe Gen 3; I don't have any PCIe Gen 4 devices, and the used enterprise gear I'll be sticking to is all Gen 3. Half the price of the H12SSL boards.)
    2. H12SSL-CT board, ~$750-790 from Amazon/eBay (full warranty)
  2. 4U cooler
    1. Supermicro from Amazon, $49
    2. Same cooler? eBay from China, $45; maybe cheaper in other listings?
    3. Open to other suggestions
  3. Torque screwdriver for setting the proper torque on the socket/HSF
    1. SanLiang, Amazon, $34. Could use this for other things besides the EPYC CPU, unlike the one listed below.
    2. Misc. TR/EPYC ones on eBay, ~$17?
  4. RAM
    1. Not sure which would be better: faster speed (e.g., 3200) or bigger sticks (8 x 32GB vs. 8 x 64GB)?
    2. This post suggests NEMIX is recommended?
      1. 2 x 32GB 3200 2Rx4 for only $80, putting 256GB at only ~$320. Only one kit available though :(
      2. $65 Hynix 1x 64GB 4DRx4 DDR4 PC4-2666V ECC LRDIMM server memory, HMAA8GL7AMR4N-VK (LRDIMM, Hynix, and 4R not supported/recommended?)
    3. Open to other suggestions
  5. Case: sticking with the Fractal Design R4 case I already have.
  6. PSU? I have a 500W unit lying around; should I buy a new one?
  7. NVME
    1. Samsung PM983 7.68TB U.2 SSD 2.5" MZQLB7T6HMLA-00007 MZ-QLB7T60 EDB5202, PCIe Gen 3, $299
  8. HDD
    1. I have a few 6TB and 8TB HGST SAS2 drives lying around (all used gear, so a quick SMART pass is worthwhile; see the sketch below).
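Since the NVMe and the HDDs are all used gear, here is a rough health-check sketch using smartctl from smartmontools; the device paths are placeholders, not the real ones on this build:

```python
#!/usr/bin/env python3
# Sketch: quick health pass over used drives with smartctl (smartmontools).
# The device paths are hypothetical placeholders -- adjust for the system.
import subprocess

DEVICES = ["/dev/nvme0", "/dev/sda", "/dev/sdb"]

for dev in DEVICES:
    print(f"=== {dev} ===")
    # -H prints the overall health verdict; use -a for the full attribute dump.
    subprocess.run(["smartctl", "-H", dev], check=False)
```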
 

i386

Well-Known Member
Mar 18, 2016
4,250
1,548
113
34
Germany
1) H12SSL would be my choice: PCIe 4.0 makes it more "future proof".
2) Depends on the chassis; 4U Noctuas, for example, don't fit in 4U SM rackmount chassis.
Amazon vs. China: for $4 it's a no-brainer to me -> Amazon.
3) Get one of the original torque tools from AMD; I think they were like $20 on eBay (haven't checked in a while).
4) In general 3200 MT/s beats bigger RAM (for the IO & CPU chiplets); rough math below.
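Rough peak-bandwidth math, assuming all 8 channels are populated (theoretical numbers; real-world will be lower):

```python
# Theoretical peak memory bandwidth per socket, assuming all 8 DDR4
# channels populated (64-bit data bus per channel).
channels, bus_bytes = 8, 8

for mt_s in (2666, 2933, 3200):
    gb_s = channels * bus_bytes * mt_s / 1000  # MT/s * bytes -> GB/s
    print(f"DDR4-{mt_s}: ~{gb_s:.1f} GB/s")
# DDR4-3200 -> ~204.8 GB/s vs ~170.6 GB/s at DDR4-2666
```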
 

allish

New Member
Sep 8, 2023
3
0
1
The H12SSL-CT is a more expensive board, but it supports PCIe Gen 4 devices. Overall, I think you've put together a good system. The EPYC 7532 is a powerful processor that can handle anything you throw at it.
 

drdepasquale

Member
Dec 1, 2022
76
30
18
To answer your questions:
1) I recommend the H12SSL as a motherboard because of its 10-gigabit Ethernet, PCIe 4.0, and upgrade path to EPYC Milan.
2) I don't have any experience with either of those coolers in a workstation. I used the be quiet! Dark Rock Pro TR4.
3) The Threadripper-branded screwdrivers from eBay work perfectly.
4) Only use DDR4-3200 with 2nd- and 3rd-gen EPYC processors; EPYC doesn't work well with LRDIMMs. DDR4 is plummeting in price, so it's easy to upgrade later. 32GB of faster, more compatible memory beats 64GB of potentially incompatible memory.
6) Make sure your power supply has two CPU power connectors; a rough budget sketch follows below.
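A ballpark power-budget sketch for the 500W question; every figure here is an assumption for illustration, not a measurement:

```python
# Ballpark power budget -- all figures are rough assumptions, just to
# sanity-check the spare 500 W unit.
loads_w = {
    "EPYC 7532 (200 W TDP)": 200,
    "board + 8x RDIMM": 80,
    "U.2 NVMe": 12,
    "4x 3.5in SAS HDD, spinning": 40,
    "fans / misc": 20,
}
total = sum(loads_w.values())
print(f"estimated steady load: ~{total} W ({100 * total / 500:.0f}% of 500 W)")
# Note: HDDs can briefly draw 2-3x at spin-up, so leave headroom.
```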
 

metebalci

Member
Dec 27, 2022
51
7
8
Switzerland
I have been using an H12SSL-NT with a Milan (3rd-gen) EPYC at home for a year or so, running Proxmox VE.

Having onboard 10G ports is both a good and a bad thing. If what you run supports it well and you will always run a 10G network, then it is a good thing. At the moment I have a 10G network, but I thought about going 40G (then didn't), and I guess it would not be difficult to go 25G now. Moving from 10G to anything else would make the onboard ports unnecessary. The disadvantage is that unused ports still generate heat, and a mainboard with 10G costs more than the non-10G models. I also run pfSense virtualized here, and pfSense cannot push 10G through virtual adapters (5G or so at most), so I had to install another NIC. I run TrueNAS Core as well, and it also works better with a physical NIC, so that is yet another NIC. Theoretically, I could have installed a single 25G/40G/100G NIC and used virtual adapters everywhere, but it did not work out like that.
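For reference, handing a physical NIC to a guest on Proxmox can be done with qm; the VMID and PCI address in this sketch are placeholders, and the VM must be stopped with the IOMMU enabled:

```python
#!/usr/bin/env python3
# Sketch: attach a physical NIC to a Proxmox VM as a passthrough device.
# VMID and PCI address are hypothetical -- find the real address with
# "lspci -nn | grep -i ethernet" on the host.
import subprocess

VMID = "101"          # hypothetical pfSense guest
NIC = "0000:41:00.0"  # hypothetical PCI address

subprocess.run(["qm", "set", VMID, "-hostpci0", NIC], check=True)
```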

I was not planning to natively connect HDDs, so I did not choose one of the mainboards with the -C option (onboard SAS).

I do not have any PCIe 4.0 cards yet. When I was buying the mainboard I also thought it should be a bit future proof, but in the end I am only running PCIe 3.0 at the moment. I think it is a good thing, but it also depends.

I am using Noctua fans; compatibility has to be checked. If you want it to run as silently as possible, try to find a spacious case and use large-diameter fans. Supermicro's default fan control is not very adjustable.

I installed 8x 16GB as I thought 128GB would be enough for me (I am not normally running anything memory-intensive). If you ask me now, I think I would have installed 4x 32GB or 4x 64GB to be able to expand later. The system is already so powerful for normal tasks that I do not think I need the extra RAM bandwidth. Upgrading all the RAM later is expensive. I do not know Rome, but for Milan it should be 3200.

The PSU depends on what you will run; 500W does not sound like much to me. I prefer not to run power supplies close to full load. If you plan to install a GPU or something like that, it should be more.

I am using 2x 2TB Samsung PM9A3 NVMe drives and will add a higher-capacity SATA SSD if I need something larger. I also run a TrueNAS Core box with lots of capacity for other needs.

Regarding the mainboard, my two complaints are the IPMI remote interface and the fan control. Both work OK, but Dell's remote interface, for example, was much better, and I wish the fan control were a little more adjustable.
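For the fan control, the usual workaround is raw IPMI commands. The byte sequences in this sketch are community-documented for Supermicro BMCs of this era, not an official API, so verify them against your own board first:

```python
#!/usr/bin/env python3
# Sketch: set Supermicro fan mode/duty over raw IPMI. These byte sequences
# are community-documented, not an official API -- verify before relying
# on them.
import subprocess

def ipmi_raw(*args: str) -> None:
    subprocess.run(["ipmitool", "raw", *args], check=True)

# Fan mode: 0x00 standard, 0x01 full, 0x02 optimal, 0x04 heavy IO.
ipmi_raw("0x30", "0x45", "0x01", "0x01")
# Duty cycle: zone 0x00 = CPU fans, 0x01 = peripheral; 0x00-0x64 = 0-100%.
ipmi_raw("0x30", "0x70", "0x66", "0x01", "0x00", "0x32")  # CPU zone ~50%
```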
 

Zack Hehmann

Member
Feb 6, 2016
72
5
8
Hello,

I ended up buying an H12SSL-C from Amazon. I only have 4 sticks of RAM installed and am waiting on the others to be delivered by Amazon. I am unable to get DIMM channel D1 to recognize any sticks. I followed the memory-slot population guidance for 2 and 4 sticks, and I can never get D1 to work. Is it a faulty CPU, a faulty motherboard, or just the CPU retention frame not torqued down enough? Also, the plastic CPU carrier tray was badly damaged; I was still able to get the CPU and tray installed without damaging any pins, but I am not sure whether that has something to do with it.
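For what it's worth, here is a quick sketch to dump which slots the firmware reports as populated (run as root; slot labels like DIMMD1 match the board manual):

```python
#!/usr/bin/env python3
# Sketch: list which DIMM slots the firmware reports as populated.
import subprocess

out = subprocess.run(["dmidecode", "-t", "17"],
                     capture_output=True, text=True, check=True).stdout

size = None
for raw in out.splitlines():
    line = raw.strip()
    if line.startswith("Size:"):
        size = line.split(":", 1)[1].strip()
    elif line.startswith("Locator:"):
        slot = line.split(":", 1)[1].strip()
        print(f"{slot}: {size}")  # "No Module Installed" = empty/undetected
```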
 

i386

Well-Known Member
Mar 18, 2016
4,250
1,548
113
34
Germany
1) It could be the CPU (something fried, "pins" bent),
2) it could be the mainboard (socket dirty, "pins" bent, DIMM slot damaged, etc.),
3) it could be the heatsink mounting (not all CPU "pins" pressed onto their socket counterparts).

To rule out the CPU and mainboard as culprits, I think you would need a known-working system where you could install and test the CPU.

Does the system POST?
Can you get into the IPMI/BMC?
Can you boot into Linux/Windows?
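If the BMC is reachable, the event log often records failed DIMM training; a quick sketch assuming in-band ipmitool:

```python
#!/usr/bin/env python3
# Sketch: grep the BMC event log for memory-related entries. Add
# "-I lanplus -H <bmc-ip> -U <user> -P <pass>" for out-of-band access
# if the host will not boot at all.
import subprocess

out = subprocess.run(["ipmitool", "sel", "elist"],
                     capture_output=True, text=True, check=True).stdout
for line in out.splitlines():
    if "mem" in line.lower() or "dimm" in line.lower():
        print(line)
```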