Embiggening The Home NAS: Rebuilding a Supermicro SuperStorage To Its Former Glory

pancake_riot

New Member
Nov 5, 2021
9
9
3
Build’s Name: The Way Overkill Home ZFS NAS
Operating System/ Storage Platform: Debian Linux 11 "Bullseye"
CPU: 2x Intel Xeon E5-2630 v4 (10c/20t)
Motherboard: Supermicro X10DRH-CT
Chassis: Supermicro SuperChassis 826BE1C-R920LPB
Drives:
  • 2x SK hynix Gold S31 500GB SATA3 2.5" SSDs - ZFS mirror, boot and root pools
  • 8x HGST Ultrastar 7K6000 4TB 7.2k SAS3 - ZFS RAIDZ2, data pool
  • 1x Samsung 850 EVO M.2 SATA3 SSD - Scratch drive (Plex transcoding, media conversion, etc.)
RAM: 64GB (4x 16GB) DDR4-2400 ECC
Add-in Cards: NVMe + SATA M.2 PCIe Card
Power Supply: 2x Supermicro PWS-920P-SQ 920W Platinum Super Quiet
Other Bits: Supermicro 2x2.5" rear drive bay

Usage Profile:
  • OpenZFS
  • Samba shares
  • Backups
    • Windows
    • Time Machine
    • Proxmox Backup Server
  • Docker Host
    • Plex
    • NextCloud
    • Syncthing
    • Seedbox for Linux torrents (For real!)
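For anyone rebuilding a similar layout, the two pools sketch out roughly like this (a sketch only; device names are placeholders, and stable /dev/disk/by-id paths are the safer choice for a real pool):

```shell
# Mirrored pool on the two 2.5" SATA SSDs (boot/root)
zpool create -o ashift=12 rpool mirror /dev/sda /dev/sdb

# 8-wide RAIDZ2 data pool on the 4TB SAS drives
zpool create -o ashift=12 tank raidz2 \
    /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj

# RAIDZ2 tolerates any two drive failures; usable space works out to
# roughly (8 - 2) x 4TB = ~24TB before ZFS overhead.
```

A real Debian root-on-ZFS install involves more pool and dataset properties than shown here; this is just the shape of the layout.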

Notes
Until now I'd been running my home NAS as a ZFS storage server inside a Proxmox VM. Since my lab back then consisted of a Dell R720 and an Intel NUC, I initially did this so I could use the excess capacity on the R720 to run other VMs.

I began to feel constrained by that setup for several reasons; namely, I had to pass through a number of devices to the ZFS VM for it to work, and I was reluctant to store Proxmox backups on my ZFS pool for fear that Proxmox or the VM would give up the ghost one day and leave me stuck in a catch-22.

Over time, I expanded my Proxmox cluster with a handful of Dell Optiplex Micro units. Now that my compute resources are more distributed, I'm more comfortable moving my NAS back to a physical host to simplify my storage arrangement.

A few weeks ago I stumbled upon the SM CSE 826 sweet deal thread in the Great Deals subforum. I was fortunate enough to snag a bare Supermicro chassis with a SAS3 backplane for a fantastic price. That was the little nudge I needed to begin my next build project.

Though I pieced this build together from various eBay listings and private sales, what I essentially did was rebuild a SuperStorage 6028R-E1CR12T. As I was deciding what to put in the chassis, I was actually leaning toward a desktop ATX motherboard with a third-gen Ryzen CPU for simplicity and cost savings. I wanted a system with enough horsepower for multi-user Plex media serving and enough PCIe slots to fit a GPU for hardware transcoding, a SAS2 HBA, and a 10Gb NIC add-in card. However, while most recent desktop CPUs do have enough lanes to accommodate those needs, consumer ATX boards are sorely lacking in PCIe slot count and variety.

To that end, I took another look at Supermicro's workstation and server board lineup. Though more expensive, workstation and server boards often place a premium on expandability. Graduating from an R720, I was adamant about having DDR4 in this system, so that ruled out the X9 series and the early X10 line. I also wanted to take advantage of potential power savings in moving from 2012-era Xeons on the 22nm process to relatively newer 2016 Xeons on the long-lived 14nm process.

As I was perusing the spec pages for my particular chassis, I noticed that Supermicro lists the exact motherboard models shipped in the completed systems, and the board listed for mine, the X10DRH-CT, was an exact fit for my needs: dual integrated 10GbE NICs (albeit RJ45), ample PCIe x4 slots, and above all, an onboard 8-port SAS3 HBA. Just as importantly, secondhand prices for Xeon v3/v4 CPUs and compatible boards have dropped steadily over the last year as many businesses complete their typical 3-5 year refresh cycles.

Overall, I was shocked to see that the cost to rebuild this system to its original state was not too much higher than the cost of stuffing mostly consumer-grade hardware in there with an expensive dedicated SAS3 HBA and 10Gb NICs and hoping it all worked out.
 

pancake_riot
thumbnail_IMG_3564.jpg
The Supermicro racked underneath the R720 it's replacing. As an added treat, below it is my vintage SGI O2 running IRIX 6.5.

thumbnail_IMG_3566.jpg
Top-down on the newly rebuilt SuperStorage. Coming from the R720, I really appreciate having a standard EATX layout to work with.

thumbnail_IMG_3569.jpg
Compared to the standard green LEDs on all my other gear, the vivid blue LEDs on the Supermicro drive sleds are a nice change.
 

bonox

New Member
Feb 23, 2021
21
6
3
Nice and neat - and I have a special soft spot for those old SGI boxes from my uni days last century.

How does cooling of the CPUs, particularly the aft one, work under full load without shrouds? Do they get hotter than in the Dell, with the air finding an easier exit than going through the heatsinks?
 

pancake_riot
bonox said:
Nice and neat - and I have a special soft spot for those old SGI boxes from my uni days last century.

How does cooling of the CPUs, particularly the aft one, work under full load without shrouds? Do they get hotter than in the Dell, with the air finding an easier exit than going through the heatsinks?
At one point I had an SGI Octane as well - so heavy you'd think it was made of lead, and the thermals could pass for a space heater. The O2 is a lot easier to keep around for casual nostalgia, but the plastic skins have gotten so brittle that every time I go to move it, I worry that something will break off.

My Supermicro chassis did come with a shroud, and to my surprise it works quite well. Even with the shroud on, there's a fair bit of space on either side of the heatsinks, above the RAM, for air to bypass, so it's nowhere near as controlled as Dell's shrouds. There's only a 2°C delta between the two CPUs' IPMI sensors under full load. Acoustics aren't much worse than the R720 either, though I haven't gone through the hassle of tuning the fan curves yet.
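On fan tuning: the X10 BMC exposes fan control over raw IPMI. The opcodes below are the commonly circulated ones for Supermicro X10/X11 boards rather than something from an official manual, so verify against your own BMC before scripting around them:

```shell
# Query / set the BMC fan mode on Supermicro X10/X11 boards
# Modes: 0x00 = Standard, 0x01 = Full, 0x02 = Optimal, 0x04 = Heavy IO
ipmitool raw 0x30 0x45 0x00          # read current fan mode
ipmitool raw 0x30 0x45 0x01 0x02     # set Optimal mode

# With the mode set to Full, per-zone duty cycle can be pinned manually.
# Zone 0x00 = CPU/system fans, 0x01 = peripheral fans; duty is 0x00-0x64 (0-100%)
ipmitool raw 0x30 0x70 0x66 0x01 0x00 0x32   # zone 0 to 50%
```

Note the BMC will override a manual duty cycle if a sensor crosses its lower critical threshold, which is the usual cause of the fan ramp-up/ramp-down loop people hit with quiet aftermarket fans.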
 