Chassis recommendations for GPU, H13SSL-N, and drives


coolerdonk

New Member
Mar 7, 2024
13
1
3
Hi all,

This is my first server grade build. Here's what I'm packing:
- H13SSL-N
- AMD 9654
- NVIDIA A10 (Passive)
- 4x HDD 3.5" drives (SATA)
- 2x M.2 drives with heatsinks

Budget: roughly $100-$350 (looking in the used market mainly)

I have not yet decided between a server rack-style enclosure and a workstation/desktop enclosure. Here are some things I'm considering:
- Redundant PSU: This seems like a nice to have, but not a necessity especially with my budget.
- HDD Bays: Again, a nice to have. I think I would like to have the upgrade path to add at least 2 other 3.5" drives.
- SSD Bays: I definitely want an upgrade path to add U.2 SSDs later, at least 2.
- Noise: The server may have to spend some time in the corner of an open layout office, so it can't be deafening.
- Cooling: Has to adequately cool a 400W CPU and 150W GPU at least.

Note about a major compute upgrade: I would like the option of doubling the compute in the near future, once something like a used 9754 becomes affordable. In the current setup that would have to involve a second motherboard, as the one I picked is not dual-socket (I have considered putting one CPU on a dual-socket board for the time being, but the used market for dual-socket boards is not great right now). That generally seems unaffordable in the near future...
Similarly, I would like the option of adding a second GPU to this instance.


I'm aware this is very much a "I want it all" set of specs, so I'm looking for someone to ground me in reality.
 

rtech

Active Member
Jun 2, 2021
304
108
43
I think the major stumbling block is the passive A10. A brief search doesn't reveal how much airflow it actually needs, and your case has to push air through that passive heatsink itself.

I'm thinking you may be limited to either a tower with a custom air tunnel or a 2U with turbojet fans.
 

XeonSam

Active Member
Aug 23, 2018
159
77
28
Dude! You said an open office... PLEASE get a workstation. I know so many people who want racked units because they look cool. Unless you already have a rack and don't mind going with a 4U or larger chassis, there really is no sense in going the rackmount route.

Servers are housed in racks solely to increase density and to maximize the cooling capability per square foot. If you're going to store many servers this may make sense. For most, it doesn't.

You don't need redundant PSUs.
The only reason servers have redundant PSUs is that a failure could cost the business a lot of money if the server goes down. Weigh the odds of a PSU dying while in use against the cost of having it die. You only need redundancy here if it's mission critical to keep this thing alive even during a power outage.

HDDs and SSDs are easy to upgrade if you get a workstation.
Rackmount servers are tight on space; expandability usually isn't an option. Make sure your chassis has spare 5.25" bays so you can add an Icy Dock expansion for whatever your storage requirements are. Two bays are usually enough.

With a $350 budget I would buy a new tower chassis from a reputable brand at your computer store. This will cost around $150. Spend the rest on an 800W power supply and CPU cooler. These can be used.
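
For a sanity check on that 800W figure, here's a rough power-budget sketch in Python. Only the CPU and GPU numbers come from your post; the board, RAM, drive, and fan figures are my assumptions:

Code:
# Rough power budget for this build. CPU/GPU figures are from the OP;
# everything else is an assumed ballpark.
parts_watts = {
    "EPYC 9654 (per OP)": 400,
    "NVIDIA A10 (per OP)": 150,
    "H13SSL-N board + DDR5 (assumed)": 80,
    "4x 3.5in SATA HDD (assumed ~8 W each)": 32,
    "2x M.2 NVMe (assumed ~8 W each)": 16,
    "Case fans + misc (assumed)": 25,
}

peak_load = sum(parts_watts.values())   # ~703 W with everything maxed at once
headroom = 1.15                         # keep the PSU out of its least efficient top end

for name, watts in parts_watts.items():
    print(f"{name:40s} {watts:4d} W")
print(f"Estimated peak load:  {peak_load} W")
print(f"Suggested PSU size:   {peak_load * headroom:.0f} W")

That lands right around the 800W class. If you add a second GPU later, size up accordingly.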
 

vegardx

New Member
Mar 7, 2024
4
2
3
You're going to have a hard time cooling that CPU and GPU quietly in a rack-mounted chassis. Especially the GPU, since passive cards like that are meant to live in a hot/cold-aisle setup where cooled air is forced through them by quite beefy chassis fans. Unless you're really pushing that CPU you're never going to see it hit those TDP numbers, so in a workstation configuration you should be able to cool it quite easily and quietly with something like a Noctua cooler.

You don't need redundant PSUs unless you have redundant power, like you typically have in a data center. For the most part, people do redundant power distribution and PSUs in DCs so that they can do maintenance on the power distribution itself, and/or because they have independent upstream suppliers feeding different grid regions and such. There's also a non-negligible power-consumption overhead when using redundant PSUs, and PSU failures are quite rare. Personally I'd spend the money on flash storage with power-loss protection, as regular consumer flash drives without it have a tendency to just go corrupt if they suddenly lose power.

As for using 5.25" bays to install hot-swappable cages, are you sure you need them? They're super expensive, and most modern workstation cases have dropped those bays entirely, so you limit the available selection quite significantly. You have amazing workstation towers like the Phanteks Enthoo Pro 2 Server Edition, which is very spacious and easy to set up with proper cooling. It supports tons of disks in all form factors, and motherboards in all sizes from Mini-ITX to SSI EEB. In terms of future-proofing you don't really get anything better than that at that price point.
 

coolerdonk

New Member
Mar 7, 2024
13
1
3
Hadn't heard of that Icy Dock. Pretty cool! I think you're right, I should forget about the redundant PSUs and the rack mount. Any good cases you'd recommend with a focus on temps?
 

XeonSam

Active Member
Aug 23, 2018
159
77
28
Your computer case is key. Make sure the fans on it are sufficient. A single rear-facing fan will not be enough, so make sure you choose the right case. If it comes with a single fan, make sure it's a diesel of a fan... and one that can run 24/7.

The temps will not be an issue as long as you can keep the A10 properly cooled. The CPU will need an active cooler; get a 4U or larger active heatsink. The RAM, HDDs and other bits will be cooled by the case fans and will not be a problem - the board layout lets the heat from the RAM be exhausted out the rear.

With a workstation's worth of space cooling will not be an issue. Trust me. Just pick a good NEW case. Not used.
 

XeonSam

Active Member
Aug 23, 2018
159
77
28
Or.... just buy an old Supermicro workstation. It doesn't need to be AMD and doesn't need to be from the last 5 years. The cases are solid and really cheap when bought used. Chuck out the mainboard and replace it with the H13SSL-N. Supermicro used to be all about customization, so older chassis/cases are super compatible with any standard SM board. You'll get a server-grade PSU, server-grade fans and the usual expandability that SM is known for. All for less than $100. It won't be as stylish as a new computer case, but it will be more reliable in the long run.

Icy Dock has some U.2 bays that you can stick in the 5.25" bays. Your mobo supports a boatload of U.2 (U.3s actually). I have the H12SSL, which only supports 2.
 

XeonSam

Active Member
Aug 23, 2018
159
77
28
Also... drop the A10, unless you've already purchased it or can get it for super cheap. There is no world where a server-grade A10 makes more sense than an RTX card. If you really need 24GB of RAM... just go with the RTX A5000. You'll get enough RAM for your datasets and you won't have to mess with the cooling issue (3D printing a fan shroud is a real hassle). It's the exact same chip - GA102 - inside the A10.

You'll also save yourself a lot of money.
 

coolerdonk

New Member
Mar 7, 2024
13
1
3
Thanks for the recommendations. I'm curious why the A5000 over the A10; it seems like, while they are the same chip, the A10 is significantly faster. Or am I mistaken?

I will take a look around Supermicro workstations as well, any recommendations welcome.
 

XeonSam

Active Member
Aug 23, 2018
159
77
28
Why would you say the A10 is significantly faster? Can you show me some benchmarks?

The RTX 3080 uses the same chip as the RTX A5000 which uses the same chip as the A10.

The key here is performance (per watt) and noise level when you're dealing with labs.

In terms of gaming, the 3080 is better (higher frame rates), but when it comes down to specific workloads this will differ due to memory bandwidth and other factors. Remember the 3080 uses MUCH more power than the other cards, even though it's the same GA102 chip inside. Hence you need 3 big fans and a 16-pin power connector. The A5000 uses a single server-grade blower fan with an 8-pin power connector. The A10? Passive. No fan.

You're relying on passive cooling to cool a chip that could throttle if the heat gets too high. You're also in an office with no HVAC. How confident are you in creating a custom cooling mod for the A10 to make it perform "the way it was intended to"?

Dunno... just my thoughts
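
If you do end up trying the passive A10, it's worth actually watching for thermal throttling instead of guessing. Here's a minimal polling sketch using the pynvml bindings (GPU index 0 and the 5-second interval are just placeholder assumptions); if the SM clock sags while the temperature climbs under load, your airflow isn't cutting it:

Code:
# Minimal NVML polling loop to watch a passively cooled GPU under load.
# Requires the NVIDIA driver plus: pip install nvidia-ml-py
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # assumes the A10 is GPU 0

try:
    while True:
        temp = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
        power_w = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0   # reported in mW
        sm_mhz = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_SM)
        print(f"temp={temp} C  power={power_w:.0f} W  sm_clock={sm_mhz} MHz")
        time.sleep(5)
except KeyboardInterrupt:
    pynvml.nvmlShutdown()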
 

i386

Well-Known Member
Mar 18, 2016
4,245
1,546
113
34
Germany
A Supermicro 745 chassis that's X11 (or newer) enabled (up to 11k RPM fans in the mid-wall and 9.7k RPM fans at the rear), with the additional external GPU/PCIe fan, should be able to tame high-TDP CPUs & GPUs. But this is far from quiet.
This chassis can occasionally be found on eBay for a "decent" price (though not necessarily within the $350).
 

coolerdonk

New Member
Mar 7, 2024
13
1
3
Thank you @XeonSam, this is well noted!