ASRock Rack ROMED8-2T vs Supermicro H12SSL


j.battermann

Member
Aug 22, 2016
82
16
8
44
Good afternoon,

I am planning on putting together a new, low-core-count EPYC system (I need the PCIe lanes more than the cores) and I am currently contemplating whether I should go with another ASRock Rack board, the ROMED8-2T, or one of the Supermicro H12SSL-* ones (probably the -i, as I have a spare Intel X710 card, though I may go with the -NT to get the second SlimSAS port).

I have two other ASRock Rack boards (a ROMED6U-2L2T and an E3C246D4U2-2T) and I am generally fine with them, but I've had some problems with the BIOS and BMC in the past, and the E3C246D4U2-2T is a bit flaky sometimes... hence my slight tendency to give the Supermicro one a try.

Does anyone have any experience with either of the two boards, especially when using them with a Milan CPU?


Thanks!
-Joerg
 

juma

Member
Apr 14, 2021
64
34
18
Will be getting a 7443P to replace a 7551P in my ROMED8-2T sometime next week. Will let you know how it goes.
 

jpmomo

Active Member
Aug 12, 2018
531
192
43
I have both and use them for slightly different purposes. You can reach out via PM for more detailed advice.

The ROMED8 has 7 x16 slots (at least, you can configure them all as x16 via a jumper setting); the SM has 5 x16 and 2 x8 slots. That may not mean much in most cases, but it's an advantage for the ROMED8. The ROMED8 also has onboard 10GbE. I would actually prefer it didn't, as it generates more heat; that may not be a negative for most, and may even be a positive since it provides two onboard 10GbE ports. You can get the SM with 10GbE as well if you choose the correct model. OCuLink for the ROMED8, SlimSAS for the SM.

In order of quality from a manufacturer perspective (BIOS/firmware releases, support, etc.): Dell/HPE, then SM, then ASRock Rack.

I do like the fact that ASRock Rack comes out with pretty odd boards that work for some of my use cases.

Both boards provide a lot of PCIe expansion in a relatively small form factor (ATX). This is largely due to AMD EPYC's 128 PCIe Gen4 lanes, even with a single CPU.

I've started working with some of the Intel Ice Lake boards, and you really learn to appreciate the 128 lanes that EPYC provides vs. the 64 lanes that an Ice Lake CPU offers.

Both the ROMED8 and the SM H12SSL boards have their share of BIOS/firmware/management issues. There are a few threads discussing some of these in detail.

Ping me if you want and we can do a deep dive with your specific use case.
jp
 

j.battermann

Member
Aug 22, 2016
82
16
8
44
I have both and use for slightly different purposes. You can reach out for more detailed advice via pm.

...

Ping me if you want and we can do a deep dive with your specific use case.
jp
Thanks @jpmomo .. very much appreciated! I've decided to go with the ROMED8-2T, especially for the 7 PCIe slots. Can you elaborate re: that jumper setting? I didn't know there is a jumper I can / have to set in order to utilize them all.. will I be losing something when I set it (I assume one of the other functionalities/ports, right?)?


Thanks!
-Joerg
 

RolloZ170

Well-Known Member
Apr 24, 2016
5,322
1,605
113
I didn't know there is a jumper I can / have to set in order to utilize them all.. will I be losing something when I set it (I assume one of the other functionalities/ports, right?)?
If you want PCIE2 at x16, you disable M2_1 / SATA_4_7 / OCU1 / OCU2.
Check the manual (PDF); there is a schematic showing how the lanes are shared.
 

gsrcrxsi

Active Member
Dec 12, 2018
302
102
43
I also vote for the ROMED8 over the SM. The extra full-size slots could be helpful, and at the very least they give you flexibility in which slots you use.

The ROMED8 board also provides a dedicated PCIe power input that the SM doesn't have. Good for multi-GPU.

I have no complaints about the BIOS/firmware, but I also don't get too into the weeds; I just use these things for crunchers.
 

juma

Member
Apr 14, 2021
64
34
18
Thanks, much appreciated!
Sorry for the late reply; there was a two-week shipping delay, so I was finally able to get things installed today.

I used an HPE-sourced 7443P and it ended up being a drop-in replacement (or so I found out after 8 hours of tinkering) with the 3.20 BIOS.

If you use Proxmox, I can elaborate on some of the issues I ran into (and misdiagnosed).
 

juma

Member
Apr 14, 2021
64
34
18
Please tell us about your issues
The biggest problem was that installing the new CPU reset all of the PCIe addresses, so I lost all network connectivity and GPU passthrough. I had to physically connect to the server console, change the network configuration to use the new address, then go into all the VM configurations and change the GPU addresses.

Also, upgrading the BIOS seemed to break the KVM functionality of the IPMI, even though I made sure BIOS settings were preserved during the upgrade. So you can imagine I thought the new CPU wasn't POSTing. I ended up tearing down my 4-GPU watercooled loop to use the VGA output on the motherboard to verify everything.
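
For anyone on Proxmox hitting the same thing: the fix is just pointing the old configs at the new PCIe addresses. As a hedged sketch (the interface names, addresses, and VM ID below are made-up examples, not my actual setup), you find the new addresses with `lspci`, then fix the bridge port in `/etc/network/interfaces` (predictable interface names encode the PCIe address, so the NIC gets renamed when its address moves) and the `hostpci` lines in the VM config:

```
# /etc/network/interfaces -- update the bridge port if the NIC was renamed
# (example: enp33s0f0 became enp65s0f0 after the CPU swap)
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports enp65s0f0
        bridge-stp off
        bridge-fd 0

# /etc/pve/qemu-server/100.conf -- point GPU passthrough at the new address
# (example old line: hostpci0: 0000:21:00,pcie=1,x-vga=1)
hostpci0: 0000:41:00,pcie=1,x-vga=1
```

From the console, `lspci -nn` shows the new addresses, and `qm set 100 -hostpci0 0000:41:00,pcie=1,x-vga=1` makes the same VM edit from the CLI instead of editing the file by hand.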