2U SSD-Optimized Hot-Swap Chassis?


aarcane

Member
Feb 16, 2016
I've seen a lot of 2U semi-hot-swap chassis with 24-ish drives mounted vertically across the front. That's nice and all. But they're usually optimized for those bulky spinning-rust 12.5mm drives, and they're only semi-hot-swap because you have to:
  1. Pop the tray out
  2. Unscrew the old drive
  3. Screw in the new drive
  4. Reinsert the tray
So what I'm looking for in my home server overhaul seems fairly simple: I want a 2U chassis with a full seven low-profile PCI card slots and at least 48 drive slots across the front. These slots should be/have:
  • sized for 7mm drives
  • trayless
  • physical presence detection
  • wired to an expander and/or backplane
  • drive ID and activity LEDs
It also occurs to me, after some back-of-the-envelope math, that adding an extra two identical slots wired for SATA/SAS passthrough instead of to the HBA would be trivial. A 48+2 chassis should be simple, and with more and more homes, small businesses, and even enterprises going full flash, I'm surprised I haven't seen a flood of these chassis, in both OEM and generic form, hit the market yet.

Can anybody recommend such a case, or an assortment of such cases for me?
 

Patrick

Administrator
Staff member
Dec 21, 2010
There are many 2.5" 15mm SAS SSDs. People building large flash arrays will usually go SAS over SATA since there is a small price premium and you get bandwidth, dual port (if needed), and the native SAS infrastructure when using SAS HBAs.
 

BackupProphet

Well-Known Member
Jul 2, 2014
Stavanger, Norway
olavgg.com
For example, with Supermicro you should get the A or TQ chassis variants.
An LSI SAS2 controller gets overloaded with more than about four SSDs, since it can push a maximum of around 350k IOPS at 4K.
An LSI SAS3 controller can do around a million IOPS, so it is the better choice.
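To put rough numbers on that saturation point (a sketch only; the ~90k 4K IOPS per SATA SSD figure is an assumption, and your drives will vary):

```python
# Rough controller-saturation math for the limits quoted above.
# Assumed (hypothetical) per-drive figure: ~90k 4K random-read IOPS per SATA SSD.
PER_DRIVE_IOPS = 90_000

CONTROLLER_LIMITS = {
    "LSI SAS2": 350_000,      # ~350k IOPS at 4K, per the post above
    "LSI SAS3": 1_000_000,    # ~1M IOPS, per the post above
}

for name, limit in CONTROLLER_LIMITS.items():
    drives_to_saturate = limit / PER_DRIVE_IOPS
    print(f"{name}: saturated by roughly {drives_to_saturate:.1f} SSDs")
```

With those assumptions a SAS2 HBA tops out around four drives and a SAS3 HBA around eleven, which is why the SAS3 generation is the sensible floor for an all-flash build.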

EDIT:

Oh, you want trayless... can't help you there :(
 

aero

Active Member
Apr 27, 2016
Not to insult or anything, but do you have much experience with replacing drives? I wouldn't really consider trayless an advantage. How often do you need to swap drives, and when you do, do the extra two minutes make a difference?

Every piece of enterprise-grade storage I've touched uses drive trays, and I think for good reason. They are very solid, the connectors always line up precisely, and there is no lever to risk breaking in the chassis.

Trayless designs usually depend on a lever mechanism to eject the drives, which is usually pretty flimsy.

I have some Drobo boxes where it's very hard to get the drives out because these levers failed.
 

aarcane

Member
Feb 16, 2016
People building large flash arrays will usually go SAS over SATA since there is a small price premium
I've priced both SAS and SATA, and it seems that someone goofed and decided to sell only 512e instead of 4Kn drives in SAS format, and at only about a 50-100% price premium. If 50% on $1,000 is small, then I don't want to see what a large premium looks like. The price gap widens as you approach the 2-4TB drives that I'm wanting to load up on.

How often do you need to swap drives, and when you do, do the extra two minutes make a difference?
With 48 drives, after about 3-5 years I'd expect to start replacing each and every drive in the chassis before it reaches EOL and dies spontaneously. Furthermore, every minute a drive is out of the pool, another drive is doing extra duty to compensate for its absence. That's a minute in which a failure is more likely, and with failure, data loss. The goal should be to absolutely minimize downtime in a drive swap or replacement operation.
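To put a rough number on that exposure window (a sketch with assumed figures, not measurements: 47 surviving drives, a 2% annual failure rate each, and a simple exponential failure model):

```python
# Rough exposure math for the swap window, with assumed (hypothetical) figures.
import math

DRIVES_REMAINING = 47          # assumed: the other 47 drives in a 48-bay pool
AFR = 0.02                     # assumed: 2% annual failure rate per drive
MINUTES_PER_YEAR = 365 * 24 * 60

def p_any_failure(window_minutes: float) -> float:
    """Probability that at least one remaining drive fails during the swap window."""
    rate_per_minute = AFR / MINUTES_PER_YEAR
    p_one_survives = math.exp(-rate_per_minute * window_minutes)
    return 1 - p_one_survives ** DRIVES_REMAINING

for window in (2, 10, 60):
    print(f"{window:>3}-minute swap window: P(another drive fails) ~ {p_any_failure(window):.2e}")
```

The absolute numbers are tiny per swap, but they scale roughly linearly with the window, the drive count, and the number of swaps you expect over the life of the array, which is the argument for keeping the window as short as possible.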

Trayless usually depend on a lever mechanism to eject the drives which is usually pretty flimsy.
You're right: if manufacturers cheap out and the lever only reaches partway across the tray, or is made of cheap plastic, that can suck quite a bit. If only there were a way to attach the trays using only the side screws, at the front end of the drives, with some mechanism that's held in place by static tension, can't easily detach *inside* the slot but *can* easily detach outside the slot without tools, could be easily replaced if it's lost or misplaced, and doesn't add additional height to the drives themselves. That would be swell.

For reference, I presently screw a small twist tie to the side of my SSDs before I slot them in. It can be done before removing the dying drive, uses *one* screw, and the tie itself slots neatly into the gap on either side of the SSD. The springiness of the wire makes it pop out when you open the front cover, and a single pull and push promptly replaces the drive in question, sometimes without even enough time for Linux to register that the drive was missing; a new device node gets registered instead of the old one being reused. Convenient.
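If you want to watch the kernel handle one of those swaps in real time, here is a minimal sketch using the pyudev library (assuming you have it installed); it just prints block-device add/remove events as udev sees them:

```python
# Minimal sketch: print disk add/remove events while hot-swapping a drive.
# Requires the pyudev package (pip install pyudev).
import pyudev

context = pyudev.Context()
monitor = pyudev.Monitor.from_netlink(context)
monitor.filter_by(subsystem='block', device_type='disk')

print("Watching for disk add/remove events (Ctrl+C to stop)...")
for device in iter(monitor.poll, None):
    # device.action is e.g. 'add' or 'remove'; device_node is e.g. /dev/sdx
    print(f"{device.action}: {device.device_node}")
```

Run it while doing a swap and you can see whether the old node is ever reported as removed before the new one shows up.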
 

Marsh

Moderator
May 12, 2013
That's a minute in which a failure is more likely, and with failure, data loss. The goal should be to absolutely minimize downtime in a drive swap or replacement operation.
Buy an extra disk tray, install the new disk into the spare tray,
eject the old disk that you want to replace, and insert the new one.
 

aarcane

Member
Feb 16, 2016
Buy an extra disk tray, install the new disk into the spare tray,
eject the old disk that you want to replace, and insert the new one.
Yes, that solves one problem. Another problem to consider, however, is drive density. Each additional 24-drive chassis doubles the CPU, RAM, motherboard, PSU, etc. needed to run those drives, and increases the overall cost per drive. Modern SSDs draw significantly less power and produce significantly less heat than their 15K rotational ancestors, so there's little reason to *not* pack them denser and reduce overall costs.
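A quick back-of-envelope on that overhead (the $2,500 platform figure is a made-up placeholder for motherboard, CPU, RAM, PSU, and HBA per chassis; substitute your own numbers):

```python
# Back-of-envelope platform overhead per drive at different chassis densities.
PLATFORM_COST = 2_500            # assumed cost per chassis: board, CPU, RAM, PSU, HBA
TOTAL_DRIVES = 48                # the target pool size from this thread

def overhead_per_drive(drives_per_chassis: int) -> float:
    chassis_needed = -(-TOTAL_DRIVES // drives_per_chassis)   # ceiling division
    return chassis_needed * PLATFORM_COST / TOTAL_DRIVES

for bays in (24, 48):
    print(f"{bays}-bay chassis: ~${overhead_per_drive(bays):.2f} platform overhead per drive")
```

Halving the chassis count roughly halves the per-drive platform overhead, on top of saving rack space and power.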
 

aarcane

Member
Feb 16, 2016
There are many 2.5" 15mm SAS SSDs.
I just checked a few OEM pages for top datacenter SSD manufacturers (the Samsung DC and Micron DC series), and one made *only* 7mm SAS SSDs, while the other made primarily 7mm SAS SSDs and had a few legacy 15mm products still in production.
 

Patrick

Administrator
Staff member
Dec 21, 2010
@aarcane If you are buying new drives, the list prices of SAS drives will be higher but usually with bigger discounts on configured systems.

A bit strange but that is how the industry works.

Also, as another thought, SATA is going to be significantly harder to get 18 months from now. In late 2017 or early 2018 we will stop seeing new drives released with SATA. NVMe will become the primary interface, with SAS kept for legacy compatibility. The SAS3 interface is also 12Gbps, which provides more headroom over SATA. You can see the first picture in A New PCIe Switch Option: The Microsemi Switchtec PCIe 3 Switch for the IDC projections.
 

aarcane

Member
Feb 16, 2016
@Patrick Ugh... I've been looking for a nice NVMe HBA for a while. They just don't exist. Nobody wants to put out a RAID card with U.2 connectors along the top or front.

Still, wrt SATA, I hope you're right. It's hung around for *far* too long. It's time for it to go the way of the dodo. This connector segregation between "home" and "enterprise" markets is a crock of BS. It costs manufacturers more to maintain two separate platforms than they save in raw materials on the "lower end" SATA devices, just to be able to artificially inflate prices on the "enterprise" side of things. I really am hopeful that NVMe and U.2 will finally fix this problem. Still... I don't see massive storage-scale SSDs jumping straight to NVMe right away. I don't care about throughput, and IOPS aren't a huge issue. Honestly, for my purposes a single NVMe drive or a pair of SAS/SATA drives in a mirror would provide more than enough IO and throughput. My focus is on capacity, latency, and power. Right now, SAS/SATA mirrors provide that balance sufficiently well.

Speaking of the "enterprise/consumer" divide: since NVMe *will* effectively eliminate it, I foresee manufacturers supporting both SAS and SATA for as long as they can milk them.
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,513
5,804
113
The big issue is that client PCs are all moving to NVMe. Dell/HP servers have SAS. So the market for SATA is much smaller.

And, "still.. I don't see massive storage scale SSDs jumping straight to NVME right away." 4 quarters from now come back to this and you will laugh. High capacity SSDs coming are NVMe.
 

aarcane

Member
Feb 16, 2016
And, "still.. I don't see massive storage scale SSDs jumping straight to NVME right away." 4 quarters from now come back to this and you will laugh. High capacity SSDs coming are NVMe.
Simply high capacity is not enough. We need 24-, 48-, and 50-drive hot-swap-capable chassis and backplanes. We need OEMs and commodity builders to set standards. We need to be able to pop an NVMe SSD into a slot and have it pop up in /dev. We need HBAs and RAID cards and switches and expanders. We need external enclosures. We need DAN and NAN to be production-ready, stable, and widely available off the shelf. Until all those criteria are met, we won't have *storage-scale* NVMe deployments. Yes, we'll see large NVMe SSDs, mainly for caching and edge content delivery. They'll be for the LIVE data set. Hot only. Not the warm, or even cold, data. Until all the above criteria are met, NVMe will still be *performance* focused, not *storage* focused.
 

Patrick

Administrator
Staff member
Dec 21, 2010
Simply high capacity is not enough. We need 24-, 48-, and 50-drive hot-swap-capable chassis and backplanes. We need OEMs and commodity builders to set standards. We need to be able to pop an NVMe SSD into a slot and have it pop up in /dev. We need HBAs and RAID cards and switches and expanders. We need external enclosures. We need DAN and NAN to be production-ready, stable, and widely available off the shelf. Until all those criteria are met, we won't have *storage-scale* NVMe deployments. Yes, we'll see large NVMe SSDs, mainly for caching and edge content delivery. They'll be for the LIVE data set. Hot only. Not the warm, or even cold, data. Until all the above criteria are met, NVMe will still be *performance* focused, not *storage* focused.
On October 1 we will have a cluster in DemoEval from a company using NVMe. Their focus is big data analytics storage clusters, and they are using commodity 1U 10-bay servers and 100GbE.

They priced it out and went with NVMe versus their previous SATA cluster after using this system:

Price was the same at 2TB drive capacities.

We will see much higher adoption when all of the ODMs re-design for next year's platforms. Dual-port is available, but we will see adoption in 2017.

Actually, one of the biggest issues right now is power. 25W NVMe SSDs create problems for infrastructure designed around 4-8W per 2.5" bay. A 2U 48-bay is 48 x 25W = 1.2kW before CPUs, PCIe switches, cooling, PSUs, and so on. If you had a dual E5 system in there with RAM and fast enough networking, you would have a system that could potentially draw 1kW per U.
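The same math in sketch form, with a hypothetical 600W allowance for CPUs, RAM, NICs, PCIe switches, and fans (the per-drive figures are the ones quoted above):

```python
# Power budget sketch for a 2U 48-bay all-flash chassis.
DRIVE_COUNT = 48
WATTS_PER_NVME = 25       # per the post above
WATTS_PER_SATA_SSD = 6    # midpoint of the 4-8W range above
PLATFORM_WATTS = 600      # assumed: dual E5 CPUs, RAM, NICs, PCIe switches, fans

for label, per_drive in (("NVMe", WATTS_PER_NVME), ("SATA SSD", WATTS_PER_SATA_SSD)):
    drive_watts = DRIVE_COUNT * per_drive
    total = drive_watts + PLATFORM_WATTS
    print(f"{label}: {drive_watts} W drives + {PLATFORM_WATTS} W platform = {total} W "
          f"(~{total / 2:.0f} W per U in a 2U box)")
```

That is roughly 1,800W for the NVMe case versus under 900W for SATA SSDs, which is why the existing 2.5" bay infrastructure does not simply carry over.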
 

aarcane

Member
Feb 16, 2016
That's actually pretty epic. I hope NVMe continues to grow like that. I'd love to see more of it in the storage space.