Advice: SAS3 enclosure with expander


Superpos

New Member
Jun 9, 2020
25
1
3
I'm looking at getting a SAS3 enclosure with an expander to run off a Linux render/file server with an onboard SAS3 controller flashed to IT mode, linked with a single SFF-8644 cable. The application is mainly film post production, so 2K-4K image sequences at roughly 5-40 MB/frame. This would be for nearline access to archived and library material and for production backups, rather than for long-term archiving. So some bandwidth would be nice, hence SAS3 and not SAS2. My most bandwidth-heavy production requirements are handled by separate NVMe volumes. For now the server will be connected to a single workstation via 100G Ethernet until switches get more affordable and I can scale out a bit.

Running it as a NAS is not a priority right now, but I might try TrueNAS in a VM at some point or, more likely, get a dedicated NAS machine later. I'm going to experiment with ZFS - specifically RAID-Z3 with a persistent L2ARC on SSD.
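
To make that concrete, a rough sketch of the pool layout I'm thinking about is below - device names are placeholders, and the persistent L2ARC part assumes OpenZFS 2.0+, so treat it as a sketch rather than a build script.

Code:
# Rough sketch - placeholder device names; in reality the raidz3 vdev would list all 24 HDDs by-id
zpool create tank raidz3 \
  /dev/disk/by-id/scsi-hdd01 /dev/disk/by-id/scsi-hdd02 /dev/disk/by-id/scsi-hdd03 \
  /dev/disk/by-id/scsi-hdd04 /dev/disk/by-id/scsi-hdd05 /dev/disk/by-id/scsi-hdd06

# Add the four SAS3 SSDs as L2ARC cache devices
zpool add tank cache \
  /dev/disk/by-id/scsi-ssd01 /dev/disk/by-id/scsi-ssd02 \
  /dev/disk/by-id/scsi-ssd03 /dev/disk/by-id/scsi-ssd04

# On OpenZFS 2.0+ the L2ARC contents survive a reboot/import when this is enabled
# (it is the default there)
echo 1 > /sys/module/zfs/parameters/l2arc_rebuild_enabled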

My standard unit of storage per bay would be an HGST 8TB SAS3 drive, backed by four SAS3 SSDs for the persistent L2ARC - enough cache to saturate the 12G bandwidth (~4.4 GB/s). I'm also interested in eventually trying dual external SFF-8644 ports to see how much extra bandwidth that gives to the host machine.
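
The ~4.4 GB/s figure is back-of-the-envelope math for a single 4-lane SFF-8644 link:

Code:
# One SFF-8644 cable = 4 lanes x 12 Gb/s line rate, 8b/10b encoded
echo "4 * 12 * 0.8 / 8" | bc -l   # = 4.8 GB/s of payload across four lanes
# SAS framing/protocol overhead takes that down to roughly 4.4 GB/s in practice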

Ideally I would have 24 bays, which helps justify running HDDs over SAS3. I'm tempted to try a more DIY approach to save money, but I'm not an expert in that area. There seem to be quite a lot of large-capacity enclosures available that come with a SAS2 backplane and a power supply, to which you add your own server motherboard and expander. What I would need, though, is one with a SAS3 backplane and SFF-8643 connectors; I would then add something like an Areca ARC-8028-24 expander module instead of a motherboard and run 24 drives. I'm just not seeing any of these Norco-type expander JBODs in SAS3, though.

Supermicro is of course an option, using their own expander backplanes, and they are quite plentiful on eBay. But from the many topics I've seen posted, it's hard to get the fans running quietly, and drop-in Noctua replacements don't seem to have solved the issue either.

My dark horse alternative is a Gen 1 HGST 4U60. This thing is affordable, ready to go and ticks most boxes. Of course, it's overkill right now but it's got tons of expandability. It's the best deal out there, but just unwieldy - meaning it's ridiculously large especially since I gave up my small office and have my equipment in my apartment for now. Although I prefer the external 8644 ports on the Gen 2, I'm focusing on Gen 1 as apparently the fan noise levels are acceptable and it's certified to run with my chosen SAS3 drive.

Any advice?? Thanks!
 

gea

Well-Known Member
Dec 31, 2010
3,175
1,198
113
DE
My number 1 would be something like the Supermicro SC846BE1C-R1K03JBOD (4U chassis). For very heavy load I would use the same case without an expander, with miniSAS connectors for 3 x 8-port (or 16-port + 8-port) HBAs, like the SC846XA-R1K23B.

btw
L2ARC (persistent or not) does not improve access to sequential data, only small random reads like metadata or small I/O, on a read-recent/read-most caching strategy.
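
You can see this for yourself by watching the ARC/L2ARC hit rates under your real workload - something like the below, assuming OpenZFS on Linux (check arcstat's help for the exact field names on your version):

Code:
# Sample ARC and L2ARC statistics every 5 seconds
arcstat -f time,read,hit%,l2read,l2hit%,arcsz,l2size 5

# Or read the raw counters directly
grep -E 'l2_hits|l2_misses' /proc/spl/kstat/zfs/arcstats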

For sequential data, raw pool performance is essential. If the disks are too slow, think of 12G SAS SSDs like the WD SS530.

If you are thinking of ZFS, use an HBA, not a RAID card - e.g. a Broadcom/LSI 9300.
 
Last edited:

Superpos

New Member
Jun 9, 2020
25
1
3
Thanks, how is the fan noise with this?

Re: L2ARC. It won't hurt to run some. And my understanding is that with very fast flash, the argument against caching sequential data goes away. But in addition, I will be seeking individual frames from file sequences non-sequentially across the volume, and that's actually more important to me than 4K playback at 24 fps. So I'm really looking at the L2ARC to cut down on HDD access times across a large RAID volume - where possible. But I'm not a ZFS expert; the snapshotting and other goodies are more important to me.
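
For that experiment I'd be looking at a few of the L2ARC module parameters and the dataset cache property - these are current OpenZFS names and conservative guesses on my part, so double-check them against your version:

Code:
# Let prefetched (sequential) reads be cached in L2ARC too
echo 0 > /sys/module/zfs/parameters/l2arc_noprefetch

# Raise the L2ARC fill rate so fast SAS3 SSDs warm up sooner
# (bytes per feed interval; the defaults are quite conservative)
echo $((256*1024*1024)) > /sys/module/zfs/parameters/l2arc_write_max
echo $((256*1024*1024)) > /sys/module/zfs/parameters/l2arc_write_boost

# Per-dataset: cache data as well as metadata ("tank/library" is a placeholder name)
zfs set secondarycache=all tank/library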

I don't want to go all-SSD with ZFS because it's not fast enough for my kind of work unless maybe you have the patience of Job and a PhD in tuning it. And SAS3 or even SATA SSDs cost about the same as NVMe. So apart from any cache devices in the SAS3 box, I have a separate non-ZFS NVMe setup where maximum performance is important.

Re: HBA. I have a SAS3008 controller on the motherboard and will be running it in IT mode, and that will do fine - it's effectively the same thing as an HBA. I've routed one of the SFF-8643 internal connectors to an external SFF-8644 connector on a PCIe bracket.
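
For the record, verifying the onboard SAS3008 is on IT firmware is quick with Broadcom's sas3flash (exact output wording varies by firmware version):

Code:
# List controllers and firmware - IT firmware reports the product ID as (IT) rather than (IR)
sas3flash -listall
sas3flash -c 0 -list

# Sanity check that the kernel sees it as a plain mpt3sas HBA
lspci -nn | grep -i SAS3008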
 

Superpos

New Member
Jun 9, 2020
25
1
3
Also re: the Supermicro. With an expander it's preferable to consolidate this down to a single SFF-8644 connecting cable - possibly two later. As mentioned, any critical bandwidth needs are being handled on a separate NVMe volume.

But although this JBOD is handsome, it's actually more expensive than the monster HGST 4U60. Admittedly not by much, and it is much smaller.
 

gea

Well-Known Member
Dec 31, 2010
3,175
1,198
113
DE
The fans will be disturbing even in the next room, maybe the one after that.
If you can't accept that, forget any 24-bay case, or put it in a server room.

The IOPS of a single mechanical disk are bad, around 100. If you build a large pool (RAID 5/6 or Z1-3), the whole array has the same number of IOPS, so a RAID-Z3 of 24 disks also has only ~100 IOPS, while sequential performance may scale with the number of data disks. An IOPS-sensitive workload gives really bad results, and as I said, the ARC or L2ARC read cache will not help beyond caching metadata (finding data faster). For fast multiuser or multitrack access, IOPS is what you need.

Compare:
A good SSD (e.g. the WD SS530) has around 200k IOPS. The best of all NVMe, the Intel Optane, is at 500k IOPS. Sequentially the difference is smaller, e.g. 250 MB/s from a disk, 500 MB/s from an SSD and 1000+ MB/s from NVMe.
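
Very rough numbers for the 24-disk example, just to illustrate the gap:

Code:
# Single RAID-Z3 vdev of 24 disks: random IOPS ~ one disk, sequential ~ number of data disks
echo "random IOPS : ~100 (one vdev behaves like one disk)"
echo "sequential  : ~$((21 * 250)) MB/s best case (21 data disks x ~250 MB/s)"
echo "(and a single SFF-8644 uplink caps real throughput around 4400 MB/s anyway)"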

btw
The HGST 4U60 is fine, especially when you can get it cheap. But do not forget, a cheap HGST 4U60 is not a regular offer.
 
Last edited:

gea

Well-Known Member
Dec 31, 2010
3,175
1,198
113
DE
Not a regular offer but "End of life" and only a few GB.
 

Superpos

New Member
Jun 9, 2020
25
1
3
Appreciate the positive note on the 4U60! Something to think about.

Re: NVMe and iops, I'm good - I got some enterprise U.2 drives for my separate fast working volume.

As I said, this SAS3 JBOD venture is for nearline access to a library and possibly archived projects, and for backing up what's on the NVMe volume - plus some scratch space and room to grow. I need >50TB of storage space in the device more than I need top performance. However, if I can get something respectable out of SAS3, then why not? The SSD cache on top, while not a priority, is something to experiment with. I want it exactly for the reason you state: to find data faster given the IOPS limitations of the HDDs, rather than for sequential read performance, which is more the domain of scaling out the data drives.
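
Rough capacity math for why 24 bays of 8TB in RAID-Z3 clears the >50TB bar with plenty of headroom (ignoring ZFS overhead, TB vs TiB, and the usual keep-it-well-under-full advice):

Code:
# 24 bays of 8 TB drives, RAID-Z3 = 3 drives' worth of parity
echo "$(( (24 - 3) * 8 )) TB usable before overhead"   # 168 TB
# even a half-populated 12-drive raidz3 would be (12-3)*8 = 72 TB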

My previous mass storage device was an 8-bay Synology desktop unit, which has been super quiet, and I understand rack gear is loud. But that doesn't mean something can't be done to bring fan noise down. SSDs for the data drives would definitely help get the whole thing running more quietly, but I can't justify the cost for a mass storage application like this. Maybe eventually.

It does seem to me that I could build a 16-24 bay setup myself for less than $1000 by adding an expander to one of the "almost built" chassis kits, though - if I can get hold of one with a SAS3 backplane. I've been looking at iStarUSA, Norco etc., but it seems it's all 6G.
 
Last edited:

gregsachs

Active Member
Aug 14, 2018
565
193
43
If you don't really need the pure bandwidth of SAS3, you can save a ton with a SAS2 config; something like a NetApp DS4246 can be bought for ~$350 shipped on eBay. A cable to adapt SFF-8644 to QSFP+ is another $30.

There is a 16-bay SAS3 Supermicro JBOD on eBay, $750 + shipping.
 

Superpos

New Member
Jun 9, 2020
25
1
3
Right, I notice SAS2 is a lot cheaper and I'm tempted, but I'm going to try and do this with SAS3 as I will benefit.

I did actually see that eBay listing; it's one I've considered in the last few days.
 

Superpos

New Member
Jun 9, 2020
25
1
3
Well, it has redundant PSUs, so that helps, and they have a support office in NJ (I'm in NY).
 

Superpos

New Member
Jun 9, 2020
25
1
3
I'm waiting for this to arrive. Apparently it will run pretty loud, so I'll be looking at replacing the hot-swap fans with Noctuas, then looking at the web UI to reduce fan speed. And probably also sticking it in the closet.
There's definitely something to be said for the Teradici workstation approach where everything is living in a designated server room so that the workspace is quiet, but I need 10-bit color and I don't think the new PCoIP Ultra supports it yet. The PCIe cards definitely don't. Additionally, their cloud access licensing model is not really favourable to a single user.

Edit: so now I see that a quieter fan would be 60x25mm vs the built-in 60x38mm fans in this case. It's obviously going to be a bit of an adventure to make this run more quietly. At least it's a large chassis and pretty empty, so I'll see what's what.
 
Last edited:

Superpos

New Member
Jun 9, 2020
25
1
3
It looks like I'll need to return the AIC JBOD, because the drive trays require a SAS interposer whereas I would be adding native SAS3 drives. It's quite handsome and in good condition, too. I did try screwing in the drive in various positions, since there are a number of mounting points, but I can't get it to line up with the SAS backplane. The manual shows an alternative tool-less tray type for native SAS drives. My feeling is that buying those, if they can even be sourced, would be pretty expensive, and it would be better to put that money toward a different setup.

In the meantime, I've been revisiting and diving deeper into making a Supermicro chassis quieter. I'm still thrown off by how many posts and topics here and on r/homelab over the years express difficulty with this.

However, I only just found out about the Supermicro Platinum SQ PSUs. There are also the Noctua low-noise adapters and/or fan controllers, rather than necessarily replacing the fans themselves. And from what I gather, a cheap server motherboard would be better than a JBOD board because it has more fan headers and more control over fan speed. This all makes it sound possible.
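
On the fan-control side, the approach people most often report for Supermicro boards is lowering the BMC fan mode and duty cycle over IPMI. The raw commands below are the ones commonly posted for X10/X11 generation boards - they're not officially documented, so verify against the exact board before relying on them:

Code:
# Set fan mode: 0=standard, 1=full, 3=optimal, 4=heavy IO (reported X10/X11 convention)
ipmitool raw 0x30 0x45 0x01 0x03

# Set fan duty cycle to ~30% (0x1e) for zone 0 (CPU) and zone 1 (peripheral)
ipmitool raw 0x30 0x70 0x66 0x01 0x00 0x1e
ipmitool raw 0x30 0x70 0x66 0x01 0x01 0x1e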