Build’s Name: The Way Overkill Home ZFS NAS
Operating System/ Storage Platform: Debian Linux 11 "Bullseye"
CPU: 2x Intel Xeon E5-2630 v4 (10c/20t)
Motherboard: Supermicro X10DRH-CT
Chassis: Supermicro SuperChassis 826BE1C-R920LPB
Drives:
- 2x SK hynix Gold S31 500GB SATA3 2.5" SSDs - ZFS mirror, boot and root pools
- 8x HGST Ultrastar 7K6000 4TB 7.2k SAS3 - ZFS RAIDZ2, data pool
- 1x Samsung 850 EVO M.2 SATA3 SSD - Scratch drive (Plex transcoding, media conversion, etc.)
Add-in Cards: NVMe + SATA M.2 PCIe Card
Power Supply: 2x Supermicro PWS-920P-SQ 920W Platinum Super Quiet
Other Bits: Supermicro 2x2.5" rear drive bay
Usage Profile:
- OpenZFS
- Samba shares
- Backups
  - Windows
  - Time Machine
  - Proxmox Backup Server
- Docker Host
  - Plex
  - NextCloud
  - Syncthing
  - Seedbox for Linux torrents (For real!)
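
Since Time Machine is going over SMB here, that's the one Samba share that needs special treatment. A minimal sketch of what that share might look like, assuming Samba 4.8+ with the vfs_fruit module (the share name, path, user, and size cap below are placeholders, not my actual config):

```
[timemachine]
    # Placeholder path and user - substitute your own dataset and account
    path = /tank/backups/timemachine
    valid users = backupuser
    writable = yes
    # fruit must come after catia and before streams_xattr
    vfs objects = catia fruit streams_xattr
    # Advertise the share to macOS as a Time Machine destination
    fruit:time machine = yes
    # Optional: keep Time Machine from eating the whole pool
    fruit:time machine max size = 1T
```

The `fruit:time machine = yes` line is what makes the share show up as a backup destination in macOS without any client-side tricks.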
Notes
To date I've been running my home NAS as a ZFS storage server in a Proxmox VM. Since my lab back then consisted of a Dell R720 and an Intel NUC, I did this initially so I could use the excess capacity on my R720 to run other VMs.
I began to feel constrained by that setup for a few reasons: I had to pass through a number of devices to the ZFS VM for it to work, and it kept me from storing Proxmox backups on my ZFS pool for fear that Proxmox or the VM would give up the ghost one day and leave me stuck in a catch-22.
Over time, I expanded my Proxmox cluster with a handful of Dell Optiplex Micro units. Now that my compute resources are more distributed, I'm more comfortable moving my NAS back to a physical host to simplify my storage arrangement.
A few weeks ago I stumbled upon the SM CSE 826 sweet deal thread in the Great Deals subforum. I was fortunate enough to snag a bare Supermicro chassis with a SAS3 backplane for a fantastic price. That was the little nudge I needed to begin my next build project.
Though I pieced this build together from various eBay listings and private sales, what I essentially did was rebuild a SuperStorage 6028R-E1CR12T. As I was deciding on what to put in the chassis, I was actually leaning toward a desktop ATX motherboard with a third-gen Ryzen CPU for simplicity and cost savings. I wanted a system that had enough horsepower for multi-user Plex media serving and enough PCIe slots to fit a GPU for hardware transcoding, a SAS2 HBA, and a 10Gb NIC add-in card. However, while most recent desktop CPUs do have enough lanes to accommodate those needs, consumer ATX boards are sorely lacking in PCIe slot count and type.
To that end, I took another look at Supermicro's workstation and server board lineup. Though more expensive, those boards place a premium on expandability. Graduating from an R720, I was adamant about having DDR4 in this system, which ruled out the X9 series and the early X10 line. I also wanted to take advantage of the potential power savings in moving from 2012-era Xeons on the 22nm process to newer 2016 Xeons on the long-lived 14nm process.
As I was perusing the spec pages for my particular chassis, I noticed that Supermicro lists the exact motherboard models that shipped in the completed systems, and the one my chassis shipped with - the X10DRH-CT - was an exact fit for my needs: dual integrated 10GbE NICs (albeit RJ45), ample PCIe x4 slots, and above all else, an onboard 8-port SAS3 HBA. Better still, secondhand prices for Xeon v3/v4 CPUs and compatible boards have dropped steadily over the last year as many businesses complete their typical 3-5 year refresh cycles.
Overall, I was shocked to see that the cost to rebuild this system to its original state was not too much higher than the cost of stuffing mostly consumer-grade hardware in there with an expensive dedicated SAS3 HBA and 10Gb NICs and hoping it all worked out.