JBOD enclosures - 2.5 inch drives?


Stoat

New Member
Feb 15, 2016
5
0
1
57
Yes, yes, I know, "I'm nuts"

Or am I?

We all know about the plethora of top-loading 42/60/90-bay JBOD drawers that hit the market over the last 10-15 years, but the paradigm just changed.

At the low end, SMR drives are being submarined into the market in CMR clothing - and pushed as RAID drives despite appalling performance in that role. This is pushing us into buying higher-spec drives.

Meanwhile, Micron have introduced a couple of new SSD lines, such as the 5210 ION / 5200 ECO range, positioned as archival/nearline units. They aren't that much more expensive than those higher-spec spinning drives, but their power and seek ratings eat them for lunch - and quite frankly I'd be highly surprised if I don't see 10-year lifespans out of them. (They're enterprise drives with power-loss protection and 5-year warranties, unlike Samsung's QVOs, yet priced about 10% lower than the QVO range. Write performance is considerably lower than the Samsungs, but they don't rely on the internal rewriting trickery the Samsungs use to boost apparent write speeds and then sort it out in the background.)

Micron are also rightly pointing out that spinning drives wear out on reads whilst SSDs DON'T, and that the effective write workload of most spinning media over its lifespan is around 0.2-0.3 DWPD, so the low DWPD rating of these units doesn't matter if they're nearline/archive oriented (and they have higher-spec SSDs available for not much more money anyway).

What's lacking to deploy these in anger is a suitable enclosure to stuff them into. Putting 60 of these into one of my existing JBOD enclosures would look more than a little silly, even if it would be an effective way of doing things and halve the drawer's power consumption.

So, the challenge now is: how about a suitable 3U, 120-drive (or thereabouts) SAS enclosure with a 12Gb/s expander, redundant PSUs, decent thermals (drives must not exceed 70C internally - even if NAND likes being hot, the controllers don't) and relatively low noise? (I try to keep my server room below 85 dBA at 23C ambient.)

Ideally, case depth should not exceed 1000mm, which I feel is an easy target.
 

BlueFox

Legendary Member Spam Hunter Extraordinaire
Oct 26, 2015
2,063
1,482
113
What is your goal? If you need low-performance flash, just increase your disk size to suit your needs. 15.36TB 2.5" SAS SSDs are readily available and 60TB 3.5" ones have been on the market for 3+ years now. Is 5.4PB in 4U not good enough?

There is no demand for your hypothetical chassis because you quickly hit performance bottlenecks - there are only so many PCIe lanes or SAS channels to go around in any given chassis, after all. What benefit does a slow all-flash storage array have over properly implemented tiered storage? You just wind up spending exponentially more for no added benefit.
 

Blinky 42

Active Member
Aug 6, 2015
615
232
43
48
PA, USA
I would second the "what is the end goal" question - you can get off-the-shelf SAS3 chassis with 72 or 88 real hot-swap 2.5" bays for a few $k:
https://forums.servethehome.com/ind...6-rjbod1-88-bay-2-5-sas2-sata-jbod-1-5k.6758/

And in 3U you could put three of the Supermicro 1029P-N32R (1U SuperServer) and have 96 NVMe drives and 3 complete systems. Or go with the next generation of EDSFF / ruler drives, where we will see volume drives with capacities over 16TB each versus the 2.5" form factor.

If you are looking at putting $200k+ worth of SSDs in it (88x 15TB drives @ $2,400/ea), you might as well get a real quality chassis rather than a low-volume / home-brew one.
 

Stoat

New Member
Feb 15, 2016
5
0
1
57
1: I'm really looking to find professional chassis, not homebrew - and tossing it out there to see if anyone knows of them (yet)

2: Preferably a top-loading 100+ SSD chassis - a smaller version of the Supermicro/Chenbro 3.5" top loaders (Top Loading Storage Servers | Supermicro). I know about the "Simply Double" chassis, but they don't hold enough drives for my use - not dense enough; they're akin to the older pre-toploader chassis of the early 2000s.

3: The new ranges of SSDs are cheap but top out at 8TB. Micron 5210 IONs are rated at roughly 0.1 DWPD (it depends on block size); 8TB is just under $700 (plus tax) in the UK and 4TB is around $350 (5-year warranty, drive power-loss protection). These are aimed at taking on archival HDDs, with the 5100/5300 and higher-range NVMe units covering "nearline" or busier capacity. I need 1-2PB of essentially archival storage, and the price jump to these SSDs is worthwhile given the running costs, seek speeds and vibration/reliability.

EDSFF is out there but _EXPENSIVE_ and massively over-specified for my needs.

15TB SSDs (2.5") are a lot more epensive than these storage units and 3.5" SSDs have major heating/reliability problems - seriously, don't go there (Try actually obtaining those 60TB+ 3.5" drives)

This is about matching the drive to the requirement. As Micron have pointed out, the average storage-array HDD sees a workload of around 0.2 DWPD or less.
 

BlueFox

Legendary Member Spam Hunter Extraordinaire
Oct 26, 2015
2,063
1,482
113
You're mostly describing wants, not your underlying goal. You need to really think about your higher level requirements.

I would argue that the lower speeds and perceived reliability advantage do not merit the considerably higher cost of flash storage for archival. I mentioned tiered storage and believe that would be the better route.

Second point: you say you are constrained on space, yet you are looking at lower-density disks. 16TB hard drives are going to offer higher density than your hypothetical chassis with 8TB SSDs, and especially 4TB ones. As you are looking at spending $100k+ per server, why are you so constrained on rack space? Surely it's not at that much of a premium (and you're forgoing existing higher-density solutions anyway).

Either way, I believe your hypothetical chassis does not exist due to the aforementioned points. If there was demand or a need for such a product, surely it would have been manufactured by now.
 

Stoat

New Member
Feb 15, 2016
5
0
1
57
Tiered storage still requires devices. I'm looking at this, but I _can't_ tier to tape as the access time is too long, and tiering archival data to HDD means 5-9kW of continuous power consumption.

We're not constrained on rack space for the most part (although I only have room for 8 racks); the bigger limits are thermals (55kW of cooling MAX - and even if I blast past that, there are only slightly higher limits on the power available to the server room) and vibration. What's really driving this is the appearance of _enterprise_ SSDs at this price point with a specific role (archive to nearline), and at pricing which makes sense against enterprise mechanical drive TCO.

Until 3 months ago they didn't exist - Micron have changed that.

Perhaps in another 6 months Micron may decide that topping out the ION range at 8TB was a mistake, but Samsung haven't released 8TB QVOs after nearly 18 months despite having circuit boards ready to take the NAND, and it will take a while for this kind of size to catch on. Nor have Samsung pushed their 16TB+ enterprise 2.5" drives very hard (they're nice devices, I have some, but they're hideously expensive for what they do; 8TB is about the practical limit for SATA/SAS under most circumstances).


When you couple Micron's aggressive foray into big, cheap enterprise SSDs with the discovery that over the last 2-3 years WD and Seagate have been submarining ever-increasing numbers of DM-SMR spinners into _ALL_ channels (including supposedly enterprise ranges) without noting the SMR in the spec sheets - still positioning them as CMR/RAID (non-archival) drives, then getting _extremely_ cagey when pulled up on it - it's clear that we're getting close to the fabled knee point on adoption.

It gets worse when you realise that _some_ of those DM-SMR drives can be identified because they advertise TRIM capability even though they don't admit to being zoned devices, but the ones being slid into desktops and consumer NASes frequently have that function disabled too, so there's no way to tell what a drive is other than benchmarking it - eg the ST3000DM003. These DM-SMR drives are showing up with exceptionally short operational lifespans because they're being used in ways which are fine for CMR but which kill SMR units.
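
For anyone who wants to script that check: below is a minimal Linux-only sketch (my illustration, not a vendor tool) that reads sysfs and flags spinning drives which advertise discard/TRIM - exactly the tell-tale described above. It can only raise suspicion, not clear a drive, since (as noted) plenty of DM-SMR units ship with TRIM reporting disabled; benchmarking remains the only sure test.

```python
#!/usr/bin/env python3
"""Heuristic DM-SMR spotter: flag spinning drives that advertise TRIM/discard.

A spinning (rotational) drive has no ordinary reason to support discard, so a
HDD reporting a non-zero discard limit is a strong hint that it is drive-managed
SMR. Host-aware / host-managed SMR drives declare themselves via the 'zoned'
attribute instead. A drive that passes this check can still be DM-SMR with TRIM
reporting disabled.
"""
from pathlib import Path


def read_attr(dev: str, attr: str) -> str:
    """Return a sysfs queue attribute for a block device, or '' if absent."""
    path = Path("/sys/block") / dev / "queue" / attr
    return path.read_text().strip() if path.exists() else ""


def classify(dev: str) -> str:
    rotational = read_attr(dev, "rotational") == "1"
    zoned = read_attr(dev, "zoned")  # "none", "host-aware" or "host-managed"
    discard = read_attr(dev, "discard_max_bytes") not in ("", "0")

    if zoned in ("host-aware", "host-managed"):
        return f"{dev}: declares itself zoned ({zoned}) - SMR"
    if rotational and discard:
        return f"{dev}: spinning drive advertising TRIM - likely DM-SMR"
    if rotational:
        return f"{dev}: spinning drive, no TRIM advertised - CMR or hidden DM-SMR"
    return f"{dev}: non-rotational (SSD)"


if __name__ == "__main__":
    # Only whole block devices; skip loop/ram/device-mapper/md entries.
    for dev in sorted(p.name for p in Path("/sys/block").iterdir()
                      if not p.name.startswith(("loop", "ram", "dm-", "md"))):
        print(classify(dev))
```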

Whilst it might _seem_ strange that the archival/nearline end is where going all-flash would happen first, the thermals and operating costs have a lot to do with it. I can put an array of SSDs to sleep and have them ready to read in 3-5 seconds, vs 30-40 seconds for spinning media (plus the associated wear and tear). That means the operating costs of such arrays (direct power consumption and cooling) end up even lower than the "50%" figure that shows up at first glance - and experience already shows that SSDs are far lower-maintenance devices than mechanical hard drives, which means we can push our replacement cycles out from 5 years to longer periods - HDDs tend to start hitting bathtub failure rates at about 6-7 years (we have some arrays older than this and it's hairy; academic funding is difficult).
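
As a side note on the sleep behaviour: this is a rough sketch of how that kind of MAID-style idling can be scripted on Linux, assuming hdparm is installed and the (hypothetical) device list below is yours to manage; it isn't tied to any particular array product. A subsequent read wakes a drive on its own - within seconds for an SSD, 30-40s for a spun-down HDD.

```python
#!/usr/bin/env python3
"""Crude MAID-style idle control: drop a list of drives into standby via hdparm.

Assumes Linux, root privileges and hdparm installed; the device list is a
hypothetical example.
"""
import subprocess

ARCHIVE_DRIVES = ["/dev/sdb", "/dev/sdc", "/dev/sdd"]  # hypothetical device list


def set_idle_timeout(dev: str, value: int = 120) -> None:
    """Ask the drive firmware to enter standby after ~10 minutes idle.

    hdparm -S uses an encoded value: 1-240 means value * 5 seconds,
    so 120 -> 600 seconds. See hdparm(8) for the full encoding.
    """
    subprocess.run(["hdparm", "-S", str(value), dev], check=True)


def standby_now(dev: str) -> None:
    """Issue STANDBY IMMEDIATE (hdparm -y) to drop the drive to low power now."""
    subprocess.run(["hdparm", "-y", dev], check=True)


def wake(dev: str) -> None:
    """Waking is just touching the device - read a single block."""
    with open(dev, "rb") as f:
        f.read(4096)


if __name__ == "__main__":
    for drive in ARCHIVE_DRIVES:
        set_idle_timeout(drive)  # let the firmware handle the idle timer
        standby_now(drive)       # and push it into standby right away
```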

I suspect that when (not "if") EU regulatory action starts against WD/Seagate for their ongoing marketing stunts with DM-SMR drives, a lot of users are going to re-evaluate their storage choices as they become aware of the shenanigans. DM-SMR making its way into things like lower-capacity WD Reds (2 and 4TB 3.5 inch) since the start of this year is going to give a lot of people a nasty surprise the next time they go to resilver a ZFS array or rebuild a RAID6 (which is how I found out - I had to replace a drive in my home array and ZFS kept kicking the new unit out due to IDNF errors. It's a firmware bug, the same bug on several units, but digging showed the wider problem).

Our existing main storage kit hits end of contract in August. I've been very happy with it and was about to sign off on a purchasing cycle for about $350k of replacement storage kit, blissfully unaware of what was happening in spinning media. Now I'm having to start over. Part of that is seeing what's out there in JBOD cases because I need to see what the vendors will pitch.
 

BlueFox

Legendary Member Spam Hunter Extraordinaire
Oct 26, 2015
2,063
1,482
113
I didn't include tape in the mix. You could have tiered storage with NVMe and SAS SSDs for caching. As this is meant to be low-use, that means you get SSD performance out of a bunch of hard drives. Storage Spaces Direct will do this very well even with 2 nodes.

I'm not sure where you're getting 5-9kW of power consumption, as it's going to be much lower than that. Even a pair of 90-bay chassis with attached hosts should keep you under 5kW. You should also keep in mind that flash storage is not necessarily going to be lower in power consumption or thermal load. For example, an Ultrastar DC HC530 is 6W idle and 8.5W active. That gets us 16TB (actually 18TB if you don't want SMR). Compare that to your 4TB SSDs at 1.5W idle each and 3.6W active. Multiply that by four to match the capacity and they're using more power and producing more heat: ~530W per PB vs ~900W for just the disks. The SATA (instead of SAS) HDDs use even less and still beat out the 8TB Microns when active.
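
The per-PB figures above are straightforward arithmetic; here's a small sketch using only the capacities and active wattages quoted in this post, for anyone who wants to rerun it with their own drive specs:

```python
# Rough watts-per-PB comparison using only the figures quoted above:
# HC530 taken as 16TB at 8.5W active, the Micron 4TB SSD at 3.6W active.
def watts_per_pb(capacity_tb: float, active_watts: float) -> float:
    """Active power for enough drives of the given size to reach 1PB (1000TB)."""
    drives_per_pb = 1000 / capacity_tb
    return drives_per_pb * active_watts


print(f"HC530 16TB HDD : {watts_per_pb(16, 8.5):.0f} W/PB")  # ~531 W/PB
print(f"Micron 4TB SSD : {watts_per_pb(4, 3.6):.0f} W/PB")   # ~900 W/PB
```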

Even if hard drives have considerably higher failure rates, because they cost a fraction of flash, you could replace every single disk multiple times and you'd still come out ahead.

So, in the end, flash doesn't really have the advantage for what you want. Since you've confirmed that space is not a limiting factor, you should just get a conventional SAN of some sort or build out a cluster of 2-4U servers. S2D with a bunch of Supermicro systems will be fairly cheap.
 

Stoat

New Member
Feb 15, 2016
5
0
1
57
Did you miss the part where I pointed out that hard drives have significantly shorter lifespans than SSDs?
Or that power consumption and thermals ARE an issue?

You're comparing apples to twinkies (not even orange juice) with your SSD comparison:

Yes, the Micron SSDs (most SSDs, actually) are 3-4W operational (actually writing), 2.5W reading (worst case), or 1.5W idle - but well under 0.5W when sleeping - and being SSDs, you can sleep the entire array after a couple of minutes. Try that kind of MAID behaviour on HDDs and I'll show you a drive drawer being opened every month to replace units (one of our more rabidly "green" researchers decided this was a bright idea to save energy; guess what happened?).

Additionally, I've never been terribly happy with the reliability of our larger HDDs; the 12-18TB ones barely last their warranty period as a rule - and that's in certified enclosures.

As for HDDs costing a fraction of flash: 8TB _archival_ enterprise hard drives are about half the cost of these Micron units. That's not a great saving.
The knee point for jumping from HDD to SSD is usually when the SSDs come down to about 5 times the HDD price, so this is unlikely to go unnoticed - especially when our documented failure rate for SMR drives is significantly higher than for CMR ones (which negates any capital savings on labour costs alone; DM-SMR drives also end up seeking like crazy when read unless sequentially filled on day one, so any claimed power savings over CMR are wiped out there too).

UK Insight prices today (per-TB comparison worked out below):
Micron 5210 ION 7.68TB SATA SSD: £562.29 + VAT
Ultrastar HC530 14TB SATA CMR HDD: £417.22 (6W idle for SATA, 9W for SAS - both closer to 14W on average when seeking, and they can peak significantly higher than that)
Ultrastar HC310 8TB SATA HDD: £227.58 - and they're 9W units AT IDLE
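
A rough sketch per TB, using only the list prices above and the idle/sleep wattages already quoted in this thread (treat the 0.5W sleep figure as indicative):

```python
# Rough £/TB and idle W/TB from the Insight list prices above; the 0.5W
# sleep number is the "well under 0.5W" figure quoted earlier in this post.
drives = {
    # name                       (capacity_tb, price_gbp, idle_w)
    "Micron 5210 ION 7.68TB SSD": (7.68, 562.29, 1.5),
    "Ultrastar HC530 14TB HDD":   (14.0, 417.22, 6.0),
    "Ultrastar HC310 8TB HDD":    (8.0,  227.58, 9.0),
}

for name, (tb, gbp, idle_w) in drives.items():
    print(f"{name:28s} £{gbp / tb:6.2f}/TB   {idle_w / tb:4.2f} W/TB idle")

# And with the SSD array asleep at ~0.5W per drive:
print(f"{'5210 ION (asleep)':28s} {'':13s} {0.5 / 7.68:4.2f} W/TB")
```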

Power consumption figures are based on our existing installations. Perhaps I should have mentioned that I have around 500 HDDs in action at the moment, and we've been through 3 previous generations of drives-in-drawers (including such abysmal performers as Xyratex F5404 RAID arrays).

Your lower figures are about right for _idling_ arrays, but when they're seeking it's a different story.

I've yet to have any enterprise flash drive die, or even show performance degradation, even after some of them have been in service for 8 years - and I'm quite happy to bet that by the time any of these 8TB Microns die, the replacements will be half the cost or less. Even consumer SSDs have generally been amazingly reliable after the first early iterations, and I'm pulling 10-year-old systems off desks with perfectly functional (but slow by current standards) Samsung 830 drives.

These Micron drives have been around a while, but the trigger point here is that they've increased production whilst dropping the price by ~35% (presumably a result of the Intel divorce freeing up NAND capacity), and the reported RMA rate is _very_ low.


I think there's a bloodbath coming in the storage market this year and I think we'll see the demise of at least one HDD maker as a result
 

MJ Rodman

Member
Feb 13, 2020
30
12
8
So I'm actually looking for something similar, but on the small scale.

Why does NO ONE make 6/8/10-drive enclosures that are small enough to sit under a desk? I have a bunch of 2.5" drives and can't seem to find anything to put them in that isn't a 3.5" shell.
 

IamSpartacus

Well-Known Member
Mar 14, 2016
2,515
650
113
So I'm actually looking for something similar, but on the small scale.

Why does NO ONE make 6/8/10-drive enclosures that are small enough to sit under a desk? I have a bunch of 2.5" drives and can't seem to find anything to put them in that isn't a 3.5" shell.
Throw one of these into a mini-ITX case?

MB516SP-B_2.5" HDD/SSD CAGES_ICY DOCK manufacturer Removable enclosure, Screwless hard drive enclosure, SAS SATA Mobile Rack, DVR Surveillance Recording, Video Audio Editing, SATA portable hard drive enclosure
 

MJ Rodman

Member
Feb 13, 2020
30
12
8
I've never seen these compound mini-SAS cables before. If this enclosure uses those cables, what do they need on the computer interface side? SATA connectivity? One computer SATA port per mini-SAS cable? I would likely get a new controller card for this array; I just want to make sure I get the right thing.
 

ari2asem

Active Member
Dec 26, 2018
745
128
43
The Netherlands, Groningen

edge

Active Member
Apr 22, 2013
203
71
28
Dang, and I was feeling good over getting six 2.5" drives into one 5.25" bay with an Icy Dock. I'm still using a few Cooler Master cases from 2007 which have twelve 5.25" bays. I might get another 5 years out of them - then again, if Icy Dock starts doing U.2 bays at that density it might be ten more.
 

itronin

Well-Known Member
Nov 24, 2018
1,233
793
113
Denver, Colorado
Yes, yes, I know, "I'm nuts"

Or am I?

So, the challenge now is: how about a suitable 3U, 120-drive (or thereabouts) SAS enclosure with a 12Gb/s expander, redundant PSUs, decent thermals (drives must not exceed 70C internally - even if NAND likes being hot, the controllers don't) and relatively low noise? (I try to keep my server room below 85 dBA at 23C ambient.)

Ideally, case depth should not exceed 1000mm, which I feel is an easy target.
I'm not intending to revive the debate I saw here earlier, and what I'm linking is not 3U... but 144 drives in 4U?

You still need a system to manage it, so your actual minimum is 5RU for 144 drives, or 9RU for 288 drives, which should be very doable and is close to what you are looking for without having to manufacture anything <-- unless that's part of the goal.

No idea on the depth of this bad boy, but I'm guessing it's less than 1m.