How? 48 RAM slots, 750TB-1PB Storage, all in one place, no NAS.


fatherboard

Member
Jun 15, 2025
Use case: massive HPC runs on a 48-RAM-slot board; each final result occupies 750TB-1PB of storage. Once done with a result, erase everything and start over with the next HPC run.
Bottleneck to avoid: the comparatively slow NAS link (1, 10, 100, or even 400 Gbps) versus the internal transfer speed of a RAID 0 array of U.2 / M.2 SSDs or HDDs (rough numbers below).
All under one roof, no NAS.
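
To put numbers on that bottleneck, here is a back-of-the-envelope sketch (the per-drive throughput and drive count are illustrative assumptions, not measured figures):

Code:
# Assumed: ~7 GB/s sequential per PCIe 4.0 x4 NVMe drive (typical spec-sheet figure),
# 16 drives striped in RAID 0, and no protocol overhead on the network link.
link_gbps = 400                        # fastest NAS link mentioned above
link_GBps = link_gbps / 8              # ~50 GB/s theoretical ceiling
per_drive_GBps = 7
drives = 16
raid0_GBps = drives * per_drive_GBps   # ~112 GB/s aggregate, if the host has the PCIe lanes
print(f"400 Gbps link: ~{link_GBps:.0f} GB/s vs {drives}-drive RAID 0: ~{raid0_GBps:.0f} GB/s")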

Preliminary failed search results:
Silverstone RM61-312: a nice combination, but not big enough for 48-RAM-slot motherboards, which come in a proprietary form factor; it only supports up to E-ATX or SSI-EEB boards.
ASRock 4U8G-TURIN2: only 24 RAM slots, which is a definite no-go, and only 4 NVMe drive bays.

Do you know a "home" for this many guests, no NAS?
 

i386

Well-Known Member
Mar 18, 2016
Germany
What are the hard requirements?
Because a lot of the stuff you have posted in different threads looks unrealistic to me (most definitely for a DIY system).
 

fatherboard

Member
Jun 15, 2025
Because a lot of the stuff you have posted in different threads looks unrealistic to me (most definitely for a DIY system).
That's a sign I'm on the right track, thank you. So far, for the kind of projects I plan to do with this hardware, people have long been stuck with the standard offerings: ugly, claustrophobic cases and dog-slow NAS...
I start from the demand, the need, my demand, my need, because I happen to be the one who pays and decides, and the money is hard-earned. So I would be happy to give it in exchange for exactly what I need.

If what I need is not out there, fine, let's be creative; if it works, someone else may want to do it as well.
 

kapone

Well-Known Member
May 23, 2015
If an existing case doesn’t work, it’s trivial to have a company design a “bespoke” case for you. It’s simply a cost/benefit analysis.
 

kapone

Well-Known Member
May 23, 2015
That's a sign I'm on the right track, thank you. So far, for the kind of projects I plan to do with this hardware, people have long been stuck with the standard offerings: ugly, claustrophobic cases and dog-slow NAS...
I start from the demand, the need, my demand, my need, because I happen to be the one who pays and decides, and the money is hard-earned. So I would be happy to give it in exchange for exactly what I need.

If what I need is not out there, fine, let's be creative; if it works, someone else may want to do it as well.
This comes across as fairly arrogant, given the company you’re sitting with. I run ~a petabyte of replicated storage for my business, but there’s no need or use case for me to chase uber performance. I just need lots of storage and I need it to be redundant and highly available.

Takes about 8U of space in the rack and a godawful amount of power to run it. Put it this way: the storage backend is on its own 20A circuit…
 

bonox

Active Member
Feb 23, 2021
Bottleneck to avoid: ... even 400 Gbps ... versus the internal transfer speed of a RAID 0 array of U.2 / M.2 SSDs or HDDs ...
One does wonder what the use case for a petabyte of RAID 0 is... and why 50 GB/s is insufficient for a single user who is apparently fine with that reliability profile.

The simplest answer here, using COTS equipment instead of a custom build, is one case for the motherboard plus enough DAS modules to fill your boots. A single case is probably silly: you won't get a backplane, the cabling will be a headache, and you certainly won't be able to lift it. A server box plus a 45-drive storage unit with 22/24TB drives and you're there (rough capacity math below). Two boxes if you want some kind of redundancy.
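
A quick sanity check on that capacity (raw, before any redundancy; drive sizes as above):

Code:
drives = 45
for tb in (22, 24):
    print(f"{drives} x {tb} TB = {drives * tb} TB raw")  # 990 TB or 1080 TB, right in the 750TB-1PB range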

I'm not really sure what you're angling towards with "one place" vs "no NAS". You can have piles of boxes connected together with SAS/PCIe cables for local storage without resorting to Ethernet to link them.

On the NAS concept though, is NVMe over Fabrics too slow?
 

EasyRhino

Well-Known Member
Aug 6, 2019
Did you make any progress on your search?

48 DIMM motherboards are pretty rare. Patrick previewed a few...
ASRock Rack TURIN2D48G 48 DIMM Motherboard Launched - ServeTheHome
Gigabyte has a 48 DIMM 2P AMD EPYC Genoa GPU Server at SC22

... but they are a proprietary form factor and geared towards server chassis that don't include a boatload of storage.

Speaking of your boatload of storage, are you talking 2.5" or 3.5"? If 3.5", nobody builds that many into their chassis anymore. You would probably need to buy a big JBOD unit and then attach it to your server (which could be via external SAS cables; that would be fine).

Plus, wanting to use an AIO cooler means you'd need a 3U or, more likely, a 4U case. But the space for the fans would probably be taken up by the drive bays you want.

Something like this gets you close, but it's single-CPU:
S453-Z30-AAV2 (Rev. 3.x) | Storage-Server - GIGABYTE

If you wanted all flash, maybe something like this:
SSG-222B-NE3X24R | 2U | SuperServer | Products | Supermicro