Build’s Name: Operation: Entirely Sensible
Operating System/ Storage Platform: VMware vSphere 7
CPU: AMD EPYC 7573X or 7473X (if we can even source one of these)
Motherboard: ASRock Rack ROMED6U-2L2T
Chassis: U-NAS NCS-810A
Drives: 5x WD Gold 18TB, 3x WD Gold 20TB, 1x Micron 1100 SATA SSD (boot/VM storage)
RAM: 4x Samsung 32GB DDR4 ECC
Add-in Cards: Nvidia ConnectX-5 VPI (both ports configured for 100GbE, currently running at 25Gbit)
Power Supply: Seasonic FX600 Platinum
Other Bits:
Usage Profile: TrueNAS-based NAS running as a VM within ESXi, with the storage controllers passed through to the VM. Other VMs on the host run Docker containers (likely transitioning to k8s as RAM permits) for Plex, various game servers, various databases, and Teslamate (for a friend of mine; I blow all my money on computer stuff... not cars)...
Other information…
This is actually an upgrade project. Likely a terrible, terrible idea, but I haven't gotten to the point where I've convinced myself of that yet. I'm primarily using this post as a bit of a stream of consciousness while I work through whether this project is at all feasible. I've been at it for about the last 10 hours and it's 5 AM, so pardon me if I'm a bit punchy. The specs listed above are what I'd like to target as the end goal. The current build being upgraded is as follows:
Current Parts List:
Motherboard (and CPU): ASRock Rack D1541D4U-2T8R
Chassis: U-NAS NCS-810A
RAM: 4x Samsung 32GB DDR4 ECC
SSD (boot/VM storage): Micron 1100 SATA SSD
NIC: Nvidia ConnectX-5 VPI (both ports configured for 100GbE, currently running at 25Gbit)
PSU: Seasonic SS-350M1U
HDD 1-5: WD Gold 18TB
HDD 6-8: WD Gold 20TB
OS: VMware vSphere 7
NAS OS VM: TrueNAS 13
Current Goals:
Upgrade the CPU to an AMD EPYC to increase core count and per-core performance, remove the NIC's PCIe bottleneck, raise the maximum supported RAM, and expand future I/O options.
Bragging rights as the kid with the toaster-oven-sized, 32-core AMD EPYC-powered NAS/VM host (I live a sad, pathetic life).
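To put rough numbers on that PCIe bottleneck: a back-of-envelope bandwidth comparison, assuming the ConnectX-5 sits in a PCIe 3.0 x8 slot on the current board versus a PCIe 4.0 x16 slot on the EPYC board (slot generations and widths here are my assumptions, not confirmed specs):

```python
# Back-of-envelope PCIe vs. NIC bandwidth. Slot generations/widths
# are assumed, not measured from either board.
def pcie_usable_gbytes(gt_per_s, lanes):
    """Usable GB/s for a 128b/130b-encoded link, ignoring protocol overhead."""
    return gt_per_s * lanes * (128 / 130) / 8

gen3_x8 = pcie_usable_gbytes(8.0, 8)     # assumed current slot: ~7.9 GB/s
gen4_x16 = pcie_usable_gbytes(16.0, 16)  # assumed EPYC slot: ~31.5 GB/s

port = 100 / 8          # 12.5 GB/s line rate per 100GbE port
both_ports = 2 * port   # 25 GB/s with both ports at full tilt

print(f"PCIe 3.0 x8:  {gen3_x8:.1f} GB/s -> dual 100GbE bottlenecked: {gen3_x8 < both_ports}")
print(f"PCIe 4.0 x16: {gen4_x16:.1f} GB/s -> dual 100GbE bottlenecked: {gen4_x16 < both_ports}")
```

Under those assumptions, even a single 100GbE port's 12.5 GB/s line rate exceeds what a 3.0 x8 link can carry, while a 4.0 x16 link covers both ports with headroom.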
Future Goals:
Replace the backplane boards with ones supporting U.2/U.3 (understanding that these tend to be built for the chassis they end up in and might be difficult to size or even source).
Replace drives with Kioxia CM6 30TB NVMe SSDs (or whatever the current hotness ends up being when I'm rich enough to source massive NVMe drives).
Issue #1: Cooling; Oh god, how are you going to cool that thing?
Issue #1 Details:
- New AMD EPYC procs have TDPs of around 240 to 280W
- The chassis has, in the most liberal case, 2 inches between the motherboard and the top of the chassis
- 1U coolers tend to be passive blocks that require a ton of chassis airflow
- Active 1U coolers are awful and realistically max out around 180W
- 2U coolers are too tall
- Space is tight in all areas of the chassis
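A quick sanity check on the height constraint (the cooler heights below are ballpark figures for generic 1U/2U server heatsinks, not specific parts):

```python
# Cooler height vs. available clearance. Cooler heights are rough,
# assumed figures for generic 1U/2U heatsinks, not measured parts.
MM_PER_IN = 25.4
clearance_mm = 2.0 * MM_PER_IN  # ~50.8 mm board-to-lid, most liberal case

typical_heights_mm = {
    "1U active cooler (~28 mm, but ~180 W realistic ceiling)": 28,
    "2U cooler (~64 mm)": 64,
}

for name, h in typical_heights_mm.items():
    verdict = "fits" if h <= clearance_mm else "too tall"
    print(f"{name}: {verdict} ({clearance_mm - h:+.1f} mm of headroom)")
```

So a 1U cooler fits the space but not the TDP, and a 2U cooler handles the TDP but not the space, which is what pushes this toward liquid cooling.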
Issue #2: SP3/sTRX AIO Liquid Cooler; I can haz SP3 AIO Liquid Cooler? Spoiler alert...
Issue #2 Details:
- AM3/4/5 AIO? No prob. SP3/sTRX AIO? Ehhhhh...
- Extra length on the end of an AIO rad will almost certainly torpedo any chance of getting the thing in this chassis
Liquid Cooling Parts List:
- Rad: EK-CoolStream Classic SE 120
- Waterblock: EK-Quantum Magnitude sTRX4
- Reservoir/Pump: EK-Quantum Kinetic FLT 80 DDC PWM D-RGB - Plexi
Issue #3: Space is astoundingly tight; Where on earth do you plan on putting a radiator, reservoir, fans, and a pump?
Issue #3 Details:
- Reservoir/Pump Measurements:
- Height: 120mm / 4.75in
- Width: 80mm / 3.15in
- Depth: 64mm / 2.5in
- Radiator/Fans Measurements:
- Height: 120mm
- Width: 153mm
- Depth: 45-50mm
- Cavity near PSU Measurements (clearly not going to work for the res/pump):
- Height: 4.25in
- Width: 3.25-3.5in
- Depth: 2.25-2.5in
- Cavity behind drive cage (unbelievably tight, with concerns about heat removal from the drive cage; could maybe be used for the res/pump, but we'd lose a fan):
- Height: 4.75in
- Width (minus size taken by rad): 3.4in
- Depth: 2.5in (if we are being generous)
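The measurements above reduce to a simple axis-by-axis fit test for the res/pump (assuming a rigid rectangular envelope in the listed orientation, taking the conservative end of each ranged measurement, and allowing nothing for fittings or tubing):

```python
# Axis-by-axis fit test: FLT 80 res/pump envelope vs. each cavity.
MM_PER_IN = 25.4

res_pump_mm = (120.0, 80.0, 64.0)  # H, W, D from the parts list above

cavities_mm = {
    # conservative (small) end of each ranged measurement
    "near PSU": (4.25 * MM_PER_IN, 3.25 * MM_PER_IN, 2.25 * MM_PER_IN),
    "behind drive cage": (4.75 * MM_PER_IN, 3.4 * MM_PER_IN, 2.5 * MM_PER_IN),
}

for name, cav in cavities_mm.items():
    slack = [c - r for r, c in zip(res_pump_mm, cav)]
    fits = all(s >= 0 for s in slack)
    print(f"{name}: fits={fits}; H/W/D slack = "
          + ", ".join(f"{s:.1f} mm" for s in slack))
```

The PSU cavity misses on both height (about -12 mm) and depth (about -7 mm), matching the "clearly not going to work" call; the drive-cage cavity clears height and width but comes up roughly 0.5 mm short on nominal depth, which is about as "unbelievably tight" as it gets.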