JBOD Enclosure - SATA drive support - how to tell?


darkdub

New Member
Mar 28, 2013
5
0
1
Hi all --

I'm starting to plan my next file server build, and I want to do something a bit more elaborate than my current setup (which is basically a mid-tower case with three 3x 5.25"-to-5x 3.5" bay converters, giving me a total of 15 drive bays).

For the moment, I'm thinking of a 4U enclosure with redundant power to hold the disks, connected back to my main server over external mini-SAS. I've seen 4U enclosures with anywhere from 45 drive bays up to 60, and I'm sure some hold even more.

The AIC XJ3000-4603S states that to support the 60 hot-swap drive bays, it has "LSI SAS2x28 + LSI SAS2x36 per expander tray", plus 4 x mini-SAS (SFF-8088) ports per expander tray. This is what I'm looking at currently, but I want to clarify a couple of things that I think would apply to any JBOD / drive enclosure.

1) I'm concerned with drive compatibility. I want to be able to use whatever I have around currently (likely a mix of 2TB and 3TB drives), but in the future I'd like to populate some of the slots with 10TB, 12TB or larger drives without worrying that they aren't supported. How can I determine the maximum supported drive size, or does that even come into play with an enclosure like this?

2) If there are multiple SFF-8088 interfaces, would that mean I could connect some of the drives to one server and another set to a second server? Or is that considered bad practice?

I want to follow more of a process on this new build - my current file server, while stable for the past 3+ years, has one drive fastened to the power supply with velcro and two more screwed to the top of the case (where a fan should be). It's not elegant. It's hacky. Not that it really bothers me, but I want to plan a bit more for this build.

That's it. Appreciate any and all info you can provide.

Thanks!
 

gbeirn

Member
Jun 23, 2016
69
17
8
123
1) Max drive size depends on what your SAS card supports. 2TB was the limit on older cards; if it supports a 3TB drive it'll work with anything (see the sketch after point 2 for a quick way to check what the OS actually sees).

2) Technically I don't see why not. You would need to find out which interfaces go to which expanders, and then to which drives. Keep in mind you'd never be able to power the enclosure off unless both servers were off, and it could complicate troubleshooting and documentation.
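
If you want to double-check what the controller actually presents to the OS once drives are in, a rough sketch like this works on a Linux host (assuming the disks show up as /dev/sd* block devices) - a drive reporting almost exactly 2^32 sectors is a hint that the card is truncating it at 2TB:

Code:
from pathlib import Path

SECTOR_BYTES = 512  # /sys/block/<dev>/size is always reported in 512-byte units

for dev in sorted(Path("/sys/block").glob("sd*")):
    sectors = int((dev / "size").read_text())
    tib = sectors * SECTOR_BYTES / 2**40
    model_file = dev / "device" / "model"
    model = model_file.read_text().strip() if model_file.exists() else "unknown"
    # a drive capped by an old 2TB-limited controller reports roughly 2^32 sectors
    note = "  <-- near the 2TiB (2^32 sector) limit" if abs(sectors - 2**32) < 4096 else ""
    print(f"{dev.name}: {model}  {tib:.2f} TiB{note}")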
 

Aestr

Well-Known Member
Oct 22, 2014
967
386
63
Seattle
1) Drive size compatibility also depends on the expander in the backplane. In this case the expander chips appear to be LSI SAS2 based, the same as the ones used in Supermicro chassis, so you should be fine on drive size assuming your controller supports it, as @gbeirn mentioned.

2) The chassis you linked supports this configuration from what I see in the guide located here. Page 10 shows it supports 1-, 2- or 4-host configurations, and further down the guide shows how the drives get split depending on the configured number of hosts.

All in all it's a pretty neat chassis. If you get it be sure to post some pics and let us know how it works.
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,512
5,800
113
That AIC chassis is very cool BTW. I have seen it a number of times now.
 

TuxDude

Well-Known Member
Sep 17, 2011
616
338
63
That's an interesting looking chassis you've found there, supporting the kinds of features you'd probably not need at home. The documentation is also rather lacking in my opinion.

As far as drive compatibility goes, you should be fine to stick pretty much anything into it, assuming the controllers in your server also support the drives. Though to use the second set of expanders (the second "expander tray" to use their terms) you will need to use dual-port SAS drives - single-port drives will only connect to a single tray.

Connecting multiple hosts to it, though, is a far more complicated area. Reading through the user manual, I suspect each tray is actually built from a single SAS2x28 expander chip plus two SAS2x36 expanders - a single x28 and a single x36 don't give them enough lanes. To support 60 drives plus four 4-lane mini-SAS ports out the back they need 76 SAS lanes, which is more than 28+36 = 64, and that doesn't include any lanes for connecting the expanders to each other.

Instead, I believe that what they call the "hub" in the user manual is the x28 expander, with 16 lanes connected out the four mini-SAS ports on the back of the tray and the remaining 12 lanes split 6 to each "edge" expander. The "left edge" and "right edge" expanders would then each be a x36, with 6 lanes connected to the "hub" and 30 lanes for drives. Although there are multiple expanders inside the system and multiple mini-SAS ports out the back, each tray is a single SAS domain with access to all 60 drives from any of the external ports.
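
If anyone wants to sanity-check that lane math, here it is as a quick script - the hub/edge split is my guess from the manual, not anything AIC spells out:

Code:
DRIVES = 60
EXT_PORTS = 4        # SFF-8088 ports per expander tray
LANES_PER_PORT = 4   # each mini-SAS port carries 4 SAS lanes

lanes_needed = DRIVES + EXT_PORTS * LANES_PER_PORT
print(f"lanes needed per tray (drives + external ports): {lanes_needed}")  # 76
print(f"one SAS2x28 + one SAS2x36 provide: {28 + 36}")                     # 64, not enough

# guessed layout: the x28 "hub" feeds the external ports and the uplinks,
# and each x36 "edge" expander handles half of the drive slots
hub_external = EXT_PORTS * LANES_PER_PORT    # 16 lanes out the back
hub_uplinks = 28 - hub_external              # 12 lanes left over, 6 to each edge
edge_drives = 36 - hub_uplinks // 2          # 30 drive lanes per edge expander
assert 2 * edge_drives == DRIVES
print(f"hub x28: {hub_external} external + {hub_uplinks} uplink lanes")
print(f"each edge x36: {hub_uplinks // 2} uplink + {edge_drives} drive lanes")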

If you want to split up the drives, mapping some to one server and some to a second server, you need to configure zoning - one of those topics you very rarely hear about in a home environment. You will need to connect to the tray(s) over a serial connection to configure the expanders, and can then choose one of the zoning options they support; they don't appear to have enabled enough access to do fully custom zoning where you could decide exactly which drives are available in each zone. You get to pick from three options: all 60 drives in a single zone, two 30-drive zones, or four 15-drive zones. Enabling zoning can also cause compatibility problems with your HBA(s) - you will want to be using LSI HBA/RAID cards if you are going to use zoning.
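
Just to illustrate the three presets (the contiguous slot ranges here are only a guess for illustration - the real slot-to-zone assignment is whatever the tray firmware does):

Code:
SLOTS = 60

def zone_map(hosts):
    """Return {zone: range of slot indices} for the 1/2/4-host presets."""
    if hosts not in (1, 2, 4):
        raise ValueError("the enclosure presets only allow 1, 2 or 4 hosts")
    per_zone = SLOTS // hosts
    return {z: range(z * per_zone, (z + 1) * per_zone) for z in range(hosts)}

for hosts in (1, 2, 4):
    split = {z: f"slots {r.start}-{r.stop - 1}" for z, r in zone_map(hosts).items()}
    print(f"{hosts} host(s): {split}")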

You also have the option of connecting multiple servers without zoning, which means all of the servers will see all of the drives at the same time. Normally this would be a BAD thing, but under certain conditions it is exactly what you want. It requires either that the disks are formatted with a clustered filesystem (e.g. VMFS) to control sharing between servers, or that the servers are members of a cluster (e.g. Windows clustering) to control which server "owns" which disk at any given time.

Another thing to keep in mind, which I didn't see mentioned in the user manual, is cooling of the drives. With drives packed as densely as this shelf packs them, air will tend to short-cut through any empty space instead of being forced through the very small gaps between drives. These types of high-density JBODs usually require that you install entire rows of drives at a time, closing up any large gaps so the air is forced through all the tiny spaces between drives - in the case of this enclosure that would mean installing drives 15 at a time.

For home use you can do whatever you want without worrying about the vendor cancelling your support contract over an unsupported configuration - but do keep cooling and airflow in mind when placing drives in the enclosure. Don't just start filling from the left side and stop halfway or you will probably have problems; if you only have enough drives for half of one row to start, double-space them all the way across.
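
If it helps with planning, here's a rough sketch of that placement rule - fill complete rows first, then spread whatever is left evenly across the next row. It assumes the 60 bays are laid out as 4 rows of 15, going by the "install 15 at a time" point above, so check the actual bay layout before trusting it:

Code:
ROW_LEN = 15   # drives per row, per the "install 15 at a time" note
ROWS = 4       # 60 bays / 15 per row

def placement(n_drives):
    """For each row, which slot indices (0-14) to populate for n_drives."""
    layout = []
    full_rows, remainder = divmod(n_drives, ROW_LEN)
    for _ in range(min(full_rows, ROWS)):
        layout.append(list(range(ROW_LEN)))   # complete rows first
    if remainder and len(layout) < ROWS:
        step = ROW_LEN / remainder            # spread the leftovers evenly
        layout.append([round(i * step) for i in range(remainder)])
    return layout

for row, slots in enumerate(placement(26)):   # darkdub's planned 26 drives
    print(f"row {row}: populate slots {slots}")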
 

darkdub

New Member
Mar 28, 2013
5
0
1
You bring up some interesting points, TuxDude. Thank you for the detailed response.


That's an interesting looking chassis you've found there, supporting the kinds of features you'd probably not need at home.
Eh... "Need" vs. want-to-tinker-with. AT&T Gigapower is rolling out in the area, so my hope is that in a couple years I'll have gigabit to the house and will be able to have some decent access here. Either way, I want fast, local storage.


As far as drive compatibility goes, you should be fine to stick pretty much anything into it, assuming the controllers in your server also support the drives. Though to use the second set of expanders (the second "expander tray" to use their terms) you will need to use dual-port SAS drives - single-port drives will only connect to a single tray.
So, I'm not entirely sure I follow you here. Can I boil this down to saying, with SATA drives, the enclosure only supports 30 drives total? If so, that would be a bust for me. I want more density of storage for SATA drives.

Assuming that's not the case, I would probably start with 26 drives. Good suggestions on the spacing - cooling will be important.

Thanks again for the info.
 

TuxDude

Well-Known Member
Sep 17, 2011
616
338
63
So, I'm not entirely sure I follow you here. Can I boil this down to saying, with SATA drives, the enclosure only supports 30 drives total? If so, that would be a bust for me. I want more density of storage for SATA drives.
No - you are fine to install the full 60 drives that the enclosure supports.

In the rear of that enclosure are two "expander trays", each of which has four mini-SAS ports for you to connect to your server(s). The way the expanders are internally wired, one of those trays connects to port 1 of all 60 drives, and the other tray connects to port 2 of all 60 drives. So if you are using single-port drives (all SATA, or single-port SAS), one of the trays (the one that connects to port 2 on the drives) will not be connected to anything, and if you plug a server into the mini-SAS ports on that tray it won't see any drives.

You are basically paying for an entire set of redundant SAS expanders that you can't use with SATA drives. I don't know if it is possible to order that enclosure with only a single tray installed - but if it is, and you will only be using SATA drives, do that and save a bit of money. If you do get both trays, you will have to guess which one to connect your server to in order to see the drives - I would guess the top one. And you will only ever be able to use four of the mini-SAS ports on the rear, not the full eight.
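
If you ever do run dual-port SAS drives with both trays cabled, a quick way to confirm each disk really shows up through both paths on a Linux host is to group the /dev/sd* devices by WWID - a dual-ported drive should list two paths, a SATA drive only one. Rough sketch, assuming dm-multipath isn't already merging the duplicates:

Code:
from collections import defaultdict
from pathlib import Path

paths = defaultdict(list)
for dev in sorted(Path("/sys/block").glob("sd*")):
    wwid_file = dev / "device" / "wwid"
    if wwid_file.exists():
        paths[wwid_file.read_text().strip()].append(dev.name)

for wwid, devs in paths.items():
    print(f"{wwid}: {len(devs)} path(s) -> {', '.join(devs)}")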