Build’s Name: nas01
Operating System/ Storage Platform: FreeNAS 11.2
CPU: Intel(R) Pentium(R) G4620 @ 3.70GHz (2 cores / 4 threads)
Motherboard: SUPERMICRO X11SSL-F
Chassis: Custom. 5mm polycarbonate. Made to fit specific dimensions.
Drives: 16x WD Red 3TB, 4x Samsung 860 EVO 500GB SSDs, 4x Seagate Barracuda 1TB
RAM: 16 GB Kingston DDR4 ECC
Add-in Cards: LSI SAS3008 SAS3 HBA, Intel RES3CV360 36-port SAS/SATA 12Gb/s RAID expander, Intel X550-T2 dual-port 10Gb Ethernet NIC.
Power Supply: Corsair RMx 750W PSU
Other Bits: 10x Corsair ML120 fans, 3.5-inch bay fan controller for 4 groups of fans
Usage Profile:
- Media Storage
- System backups (Hyper-V VMs, transferred from the NAS to cloud storage for offsite copies)
- 1 VM in bhyve (a Server 2019 domain controller, one of 3 in the AD forest)
- Home drives
- PC Image backups
- FileHistory backups
- iSCSI storage for certain other workloads
- Encrypted offsite remote backup endpoint for family and friends
- Member of the local on-premises AD.
Other information:
Background
Originally I had a QNAP TS-639, which served my needs well for many years. However, I outgrew it, and procuring a bigger one would have been quite a large one-time investment.
I was also somewhat hesitant to simply expand by swapping in bigger disks, as I would eventually be limited by the 6 available slots, and I feared the NAS would break down before long (it's around 9 years old now, so...). Flexibility was another concern: 6 or even 12 slots does not give you much room to change the setup if needed (RAID levels, migrations, expansion).
Also, with established brands you're tied into their ecosystem - i.e. if a PSU, PCB or backplane were to fail, I would most likely be forced to buy an equivalent part at a premium from that vendor. In many cases that goes for the software too (hence the choice of FreeNAS).
So, I first investigated a 24-slot DAS (Supermicro, xcase.co.uk) to connect to my existing servers. Again: too expensive, and still a degree of vendor tie-in.
I then built my own chassis in plywood with room for 24 disks, which worked quite well - until I had more than 8 disks in it, at which point it simply became too hot (mostly due to some design flaws in the ventilation).
So, I designed a new one:

Front view

Back view
This time with room for 32 disks in two compartments and PLENTY of fans.
The concept
Instead of a DAS, I now wanted a dedicated system just for storage. I also wanted the best possible ventilation without too much noise: many big, slow fans, with the airflow compartmentalized and directed efficiently.
Three compartments:
- Main disk compartment - 24 disk slots. Push/pull configuration (3x3 fans)
- Server compartment - for the processing hardware and the SAS expander. Push configuration (2x fans)
- Auxiliary compartment - 8 disk slots, fan controller and PSU(s). Push configuration (2x fans)
The cabinet is built from clear 5mm polycarbonate. The major parts were ordered cut to size; the rest I cut myself. The advantages of polycarbonate are its strength, its resistance to cracking and how easy it is to work with. Everything has been glued together with an adhesive made specifically for polycarbonate. Extremely strong bonds!
Each disk sits in its own small bracket on the bottom of the case, preventing the disks from sliding around. In each bracket, 2mm isolating rubber has been glued to the bottom to reduce vibrations. There is almost no vibration noise from the cabinet or the disks.
The cabinet is designed to fit between a door and the shelf for my servers - 15.5 cm of clearance, with the NAS being 13.2 cm high. The last few centimetres will be used for a drawer system underneath the shelf where my servers and backbone switch are located.
![Photo](https://forums.servethehome.com/data/attachments/9/9855-6cb29c3809ce5af69e03a9baf876e781.jpg)
Also, the ventilation exhausts are at the back, designed to direct hot air out and up behind the shelf through holes in the back of the shelf (still to be cut).
Finally
So the NAS is done – just waiting for some more hardware and the drawer system to be delivered, so that I can pull the NAS out from beneath the shelf and service it from the top.
At that point I'll install the 10Gb NIC and direct-connect its ports to my servers (one port for each). The servers also have a direct 10Gb connection to each other (they run a Server 2019 hyperconverged Hyper-V cluster with S2D). All in all, 10Gbit between all the relevant hosts on my network. The connection from the servers and the NAS out to the rest of the network is a bonded 2x 1Gbit link.
For now it’s installed above our washing machine and dryer:


Still waiting for the last disks, SSDs and the SAS expander to arrive.
At that point, the setup will be:
1. 2x ZFS RAIDZ2 with 8x3TB disks in each (Main storage, 24-disk compartment)
Or maybe just 2x ZFS RAIDZ1 with 6x3TB each, and then order more disks for another 6x3TB vdev later. Not decided yet (see the rough capacity sketch after this list).
2. 1x ZFS Striped+Mirrored with 4x500GB SSDs (Extra VM iSCSI store, 8-disk compartment)
3. 1x ZFS Striped+Mirrored with 4x1TB disks (Backup of VMs, 8-disk compartment)
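To help weigh the two candidate layouts for the main pool, here's a rough back-of-the-envelope capacity comparison. It's a plain Python sketch, nothing FreeNAS-specific, and the figures are raw TB before ZFS metadata/padding and the TB-vs-TiB difference, so treat them as estimates only:

```python
# Rough usable-capacity comparison for the candidate pool layouts above.
# Raw TB only: ZFS metadata, padding and TiB-vs-TB conversion are ignored,
# so real usable space will come out somewhat lower.

def raidz_usable(disks: int, disk_tb: float, parity: int) -> float:
    """Usable capacity of a single RAIDZ vdev (data disks = disks - parity)."""
    return (disks - parity) * disk_tb

def striped_mirrors_usable(disks: int, disk_tb: float) -> float:
    """Usable capacity of striped mirrors: half the raw capacity."""
    return disks * disk_tb / 2

# Option A: 2x RAIDZ2 vdevs of 8x 3TB (tolerates 2 failed disks per vdev)
option_a = 2 * raidz_usable(8, 3, parity=2)         # 36 TB

# Option B: 2x RAIDZ1 vdevs of 6x 3TB now, a third 6-disk vdev later
option_b_now   = 2 * raidz_usable(6, 3, parity=1)   # 30 TB
option_b_later = 3 * raidz_usable(6, 3, parity=1)   # 45 TB

# The two small pools in the 8-disk compartment
ssd_pool       = striped_mirrors_usable(4, 0.5)     # 1.0 TB (4x 500GB SSD)
vm_backup_pool = striped_mirrors_usable(4, 1.0)     # 2.0 TB (4x 1TB)

print(f"2x RAIDZ2 (8x3TB):        {option_a:5.1f} TB usable")
print(f"2x RAIDZ1 (6x3TB) now:    {option_b_now:5.1f} TB usable")
print(f"3x RAIDZ1 (6x3TB) later:  {option_b_later:5.1f} TB usable")
print(f"SSD pool (4x500GB):       {ssd_pool:5.1f} TB usable")
print(f"VM backup pool (4x1TB):   {vm_backup_pool:5.1f} TB usable")
```

With the 16 disks already on hand, the RAIDZ2 layout gives about 36 TB raw usable and survives two disk failures per vdev; the RAIDZ1 route starts at about 30 TB and only pulls ahead (roughly 45 TB) after buying a third set of six disks, with only single-disk fault tolerance per vdev.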
The cabinet has been fully tested in terms of cable management: there is adequate room, and it's quite easy to replace a disk when required. However, to further improve airflow and accessibility, I'm in the process of making my own power-cable harnesses for the disks, which essentially removes the Molex power connectors from the 24-disk compartment and frees up quite a lot of space (24x Molex connectors take up A LOT of real estate).
So, in conclusion:
1. Very efficient airflow, low noise
2. Fast NAS!
3. Low price (materials: about 3,000 DKK / 450 USD - roughly a third to half the price of a comparable DAS/NAS cabinet)
4. Plenty of room to expand
5. Cheap/easy to repair (all standard HW and easy to replace/procure). No vendor lock-in.
6. Very fun to do!
Very satisfied with the results. Details such as the gluing of the pieces could be a lot better, though. I'm planning to spray-paint the entire cabinet and maybe add a FreeNAS logo near the on/off button while I'm at it. Just for kicks.
Hope you enjoyed the read and maybe got some inspiration!