Are you looking for the best-looking NAS enclosure?

Is this the best-looking vertical NAS or not?


  • Total voters: 8
  • Poll closed

jang430

Active Member
Just want to share that ever since I got interested in NAS, my idea of how one should look has been horizontal, like the Synology and QNAP 8-bays. The only other vertical form factor I've found that really appeals to me is the TrueNAS Mini and its 8-bay version. There are plenty of other 4-bay and 8-bay options (InWin, Chenbro, SilverStone DS380), but none give me the look I like. Maybe it's the lights in the front, maybe it's because TrueNAS themselves, one of the NAS icons, use the case you see below.

[Image: the TrueNAS Mini chassis]

I started my NAS journey using ugly cases that could handle 2-3 drives without hot swap, then finally settled on the Fractal Node 804. This is a beast! 10x 3.5" drives without having to be creative, and room for another two 2.5" drives. That's a total of 12 drives in one cube.

[Image: Fractal Design Node 804]

I used this for some time until, finally, I went with the QNAP TS-873A, an 8-bay ZFS NAS.

[Image: QNAP TS-873A]

I guess I wasn't satisfied with my Fractal Node 804 beast after all :)

I'm no longer actively searching, but I'm always interested in what people's NAS or homelab setups look like.

If you agree with me, then you also like the TrueNAS Mini look for a vertical chassis :D So why am I writing this post? Because I found where to get the 4-bay and 8-bay NAS chassis used by TrueNAS!

There's a bit of a challenge: it's on Taobao, a Chinese website, so you'll have to be creative to purchase this enclosure. But I believe my NAS journey will end once I own one of them. For now, the QNAP will suffice.

[Image: the 4-bay/8-bay chassis on Taobao]


The price is reasonable too. Are others as passionate about their NAS chassis? Let me know if this helps.
 


T_Minus

Build. Break. Fix. Repeat
Why not just use the DS380B?
No window, no LEDs, holds 8x 3.5" and 4x 2.5".
Very basic, vanilla case.

I like the DS380B after trying a lot of others, including desktop cases.
I'd like a "perfect" mATX/ATX case with 8x 3.5", one 5.25", and room for internal 2.5" drives.

For bigger storage, I like to use the Supermicro 846 or 847 too.
 

jang430

Active Member
Why not just use the DS380B? [...]
The DS380 doesn't have the LEDs :D That's a dealbreaker for me. The Fractal Node 804 works with Micro ATX, but without the hot swap.
 

jang430

Active Member
That's just an Ablecom CS-T80: ABLECOM | 大訊集團

They're the OEM for Supermicro chassis/power supplies, and the companies are closely related: the CEOs of Ablecom and Supermicro are brothers.
Indeed, I've long known it's an Ablecom case, but I cannot find it at retail. The Ablecom/Supermicro brothers connection is a fun fact. Thanks.
 

i386

Well-Known Member
+1 for Supermicro's 846 as a big NAS chassis.

For a "smaller"* NAS I would use an SM 745 chassis:
- supports almost all mainboard sizes, from mini-ITX to E-ATX
- supports hot-swap U.2/U.3 with the correct backplane
- compatible with other Supermicro parts
- can house one or more GPUs
- can be pretty quiet (see the SQ SKUs)
- easily serviceable compared to the Ablecom chassis linked in this thread

I'm currently using one with a 920SQ PSU and a BPP (the UPS that can be shoved into a PSU slot) :D

* based on number of HDDs, not chassis size
 

heromode

Active Member
For homelab use, I've decided to go a different route. I already have a dual Xeon E5 server with ample resources to run a bunch of HDDs. Instead of investing in a case + PSU + mobo + CPU + RAM + SAS controller + NIC, I've gone the Icy Box + external Mini-SAS 8088 route.

External shielded SAS cables (SFF-8644 to SFF-8088) come in lengths of at least 2 meters.

A standard internal LSI 3008 card runs 2x4 SAS/SATA drives, plus you have the option of running both internal and 4x external, etc.

Icy Box has different versions with SFF-8643, 4x SATA, or 8x SATA connectors, for running SAS disks on redundant HBAs, etc. These are meant to be installed in 3 or 4x 5.25" slots, but I'm going to put some rubber feet under my IB-564SAS-12G and run it externally, with an AC power adapter with Molex output from China. I'm still waiting on budget to order the cables and PCIe slot adapter, but presumably SATA hotplug should work, so I can just unplug the disks in a live system, switch to another pack of 4 disks, remove the stack easily for safe storage, and other such handy options.

The 4- and 5-bay Icy Docks with different host connectors are the IB-564SAS-12G, IB-564SSK, IB-565SSK, and IB-554SSK.

The ZhenLoong Storage Store on AliExpress also has similar things in plastic, but the Icy Docks are aluminium and top quality.

Edit: a 16-port LSI card would give a lot of flexibility: run 12 disks externally and 4 internally, etc. Also, instead of doing PCIe passthrough of the whole LSI controller, in Proxmox you can easily pass through individual disks to the VMs that need them using the qm set [vmid] -scsi[X] and qm unlink [vmid] --idlist scsi[X] commands. That way you can also link and unlink disks to any VM live. A minimal sketch of those two commands follows below.
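For illustration only (the VM ID 100, the scsi2 slot, and the disk path are made-up examples, not from my actual setup):

# attach a whole physical disk to VM 100 as its scsi2 device, using a stable by-id path
qm set 100 -scsi2 /dev/disk/by-id/ata-EXAMPLE_DISK_SERIAL
# later, detach that disk from the VM again
qm unlink 100 --idlist scsi2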
 



ericloewe

Active Member
For homelab use, I've decided to go a different route. [...] I've gone the Icy Box + external Mini-SAS 8088 route.
Keep in mind that SATA has a pretty strict 1 meter cable length limit - and that's inside a chassis. Even having a chassis open can be enough to cause some connections with longer cables to start throwing errors.
 

heromode

Active Member
Keep in mind that SATA has a pretty strict 1 meter cable length limit - and that's inside a chassis. Even having a chassis open can be enough to cause some connections with longer cables to start throwing errors.
AFAIK this is carrying the SATA standard over the SAS/SCSI standard, though, as these are connected to LSI SAS controllers. I don't think it's the same as a standard SATA cable to a motherboard SATA connector, but I'm no expert. I plan on using one-meter SFF-8088 to SFF-8644 cables (8644 is the shielded external version of 8643).
 

ericloewe

Active Member
AFAIK this is carrying the SATA standard over the SAS/SCSI standard, though, as these are connected to LSI SAS controllers. [...]
You're confusing several different things:
  1. SAS and SATA are only compatible in the sense that controllers (almost?) always support SATA as well. The physical and logical layers are very different. So much so that SATA disks attached to SAS controllers typically use the host's SCSI stack instead of the ATA stack.
  2. If there's a SATA device on one end of a link and a SAS device on the other, then naturally the link has to be SATA, because SATA devices do not speak SAS. That means dealing with all of SATA's limitations for that last link.
  3. Between SAS expanders and SAS HBAs, SATA traffic is carried encapsulated inside normal SCSI traffic, using SAS signalling. This allows standard JBOD/disk expansion chassis with SAS expanders to be wired up to the host without being impacted by SATA's limitations, which then apply only between the expander IC and the disk (so at most a couple of tens of centimeters over a PCB).
I plan on using one meter SFF-8088 to SFF-8644 cables
That alone eats up all of the 1 meter specification. Sure, SFF-8087/8 or SFF-8643/4 are significantly less crappy than the typical SATA cable, but that will only get you so far and you still have to deal with:
  • The internal cables (on both sides, where applicable)
  • The SFF-8087/8088 (and SFF-8643/8644) connectors/adapters, where applicable
  • The disk backplane
So yeah, expect suckiness.
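As a rough, purely illustrative budget (the internal lengths below are assumptions for the sake of the arithmetic, not measurements):

host internal SFF-8643 cable:      ~0.5 m
external SFF-8644/SFF-8088 run:     1.0 m
enclosure cabling + backplane:     ~0.3 m
total SATA link:                   ~1.8 m (against a spec budget of roughly 1 m)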
 

acquacow

Well-Known Member
Isn't cooling on those a disaster?
Not at all. I mean, I added a cardboard flap inside to better direct some air, but I've had mine in an 80°F non-air-conditioned laundry room for the last 5+ years, running FreeNAS and a dozen VMs.

Disk temps are more than fine: HDDs and SSDs max out around 42°C.

[Screenshot: drive temperatures]
 

jang430

Active Member
For homelab use, I've decided to go a different route. [...] I've gone the Icy Box + external Mini-SAS 8088 route.
That's certainly another way to do it :)
 

kapone

Well-Known Member
For homelab use, I've decided to go a different route. [...] I've gone the Icy Box + external Mini-SAS 8088 route.
Not to discourage you...but...

That is a disaster waiting to happen.

Do it once, do it right, so you don't have to ever do it again. There are many reasons why people don't do this, opting instead for server chassis that are built to handle it.

For example, my setup is 2x 48-bay Chenbro chassis (with modified PSUs and fans, making them much quieter). They are rock solid: no vibration issues, compact, they fit in a rack, and they'll last for years.

p.s. This pic was taken while building up the rack; it's full now.

[Image: rack with the Chenbro chassis]
 

heromode

Active Member
Do it once, do it right, so you don't have to ever do it again
Yeah, no thanks. I used to have a full-size rack in my living room. No more. Now I have 2x be quiet! Pure Base 500 cases, plus one Wyse 5070 Extended running OPNsense. THAT'S IT. One Pure Base 500 hosts a dual E5-2680 v4 Proxmox server, with 2x Quadro P620s in passthrough for two 4K@60 desktops; the other runs my main desktop on bare metal. There's a 2x 10Gb Solarflare link between the boxes, no switch, SR-IOV partitioned for the VMs on Proxmox.

The Wyse 5070 runs a quad-port i350 NIC, only for WAN access, so no 10Gbit SFP+ switch is needed: that's half the cables and 50 watts gone. Three desktops feed 2x 4K screens, with keyboard/mouse shared via Barrier. I've gone from something like 6 boxes and 40 cables to 2 boxes plus a small Wyse, and fewer than 10 cables. And it's whisper quiet.
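For anyone curious, the SR-IOV partitioning boils down to something like this (the interface name, VF count, PCI address, and VM ID are assumptions for illustration, not my actual values):

# create 4 virtual functions on the NIC port (assumed name enp1s0f0)
echo 4 > /sys/class/net/enp1s0f0/device/sriov_numvfs
# pass one VF (assumed PCI address) through to VM 101
qm set 101 -hostpci0 01:00.1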

I'm confident my SATA rust will work just fine over one-metre 8088-to-8644 cables into the Icy Docks. That's yet another big bunch of cables and components gone. I want small, quiet, and simple. I'm getting old :(

Edit: I never meant to derail the thread, just wanted to make the alternative case for external mini-SAS and 5.25"-mounted HDD backplanes instead of a separate server. It's a lot fewer cables, components, and money, especially for amateur home use.
 

jang430

Active Member
@heromode, not at all. I'm happy to hear arguments like these :D as long as everything stays friendly. The way I see it, you either go full-blown rackmount or full-blown desktop, using all the small TinyMiniMicros. Further down the road, maybe even several RPis to serve all our homelab needs.