Cheapest SAS/SATA disk enclosure


raylangivens

Member
Nov 22, 2016
The typical backplane in a 4-in-3 unit has lousy air-flow design.
So you did test all of those, from all different manufacturers? Or did you just pull that "typical" from somewhere without frequent exposure to sunlight? Yeah, I'd wager that the latter is the case.

You clearly have never even seen or used an IcyBox 4-in-3 or 5-in-3 unit in action. There are absolutely no airflow problems there.
The same goes for the Zalman drive cages with backplanes.

That fact aside, I'll take "lousy airflow" over risky cabling any time. After all, airflow isn't that important, at least as long as you don't exceed the specs in the datasheet. Running disks "hot" doesn't affect the MTBF; Google debunked that myth a couple of years ago.

Yes, on a small scale, those ultra-low-end DIY solutions may work surprisingly well and even be somewhat reliable. At least if you set them up once and then never touch them again.
But they are in no way superior to a proper prosumer or enterprise-grade solution, except maybe in terms of noise for the latter.

Do you know why I bought those MD1000s?
I had 28 drives individually cabled in two DIY enclosures, connected to the exact same controller that now drives the MD1000s. Properly cooled, of course, and with high-end data cables (SAS-SATA breakout).

That setup ran fine for a couple of weeks, but then random drives started dropping out of the RAID6 arrays very frequently.
I checked the disks (see the sketch below) - they were fine.
I switched out the data cables - no change.
I switched out the data cables again, now using the external ports of the controller - no change.
I switched out the PSU for a beefier one (550W to 800W) - no change.
I added another PSU - no change.
I switched out the power cables from "molex-to-sata-lose-all-your-data" 1->2 splitters to quite expensive 1->5 ones, reducing the number of connections significantly - the issues went down a bit, but nowhere near my definition of "reliable".
By then I had sunk at least the price of one MD1000 (with trays!) into cables and PSUs.
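
For reference, "checking the disks" can look roughly like this: a minimal sketch that polls each drive's SMART health verdict and flags drives that vanish between polls. It assumes Linux, smartmontools installed, and /dev/sd* device naming; the polling interval and parsing are illustrative, not pulled from my actual setup.

[CODE]
#!/usr/bin/env python3
# Minimal sketch: poll SMART health for every /dev/sd* device and flag
# devices that disappear between polls. Assumes Linux + smartmontools;
# device names, interval and parsing are illustrative.
import glob
import subprocess
import time

def smart_verdict(dev):
    """Return smartctl's overall health verdict for one device, or an error note."""
    try:
        out = subprocess.run(["smartctl", "-H", dev],
                             capture_output=True, text=True, timeout=30)
    except (OSError, subprocess.TimeoutExpired) as exc:
        return "smartctl failed: %s" % exc
    for line in out.stdout.splitlines():
        # ATA drives print "...overall-health self-assessment test result: PASSED",
        # SAS/SCSI drives print "SMART Health Status: OK"
        if "overall-health" in line or "Health Status" in line:
            return line.split(":", 1)[1].strip()
    return "no health verdict in output"

if __name__ == "__main__":
    previous = set()
    while True:
        devices = set(glob.glob("/dev/sd?")) | set(glob.glob("/dev/sd??"))
        for gone in sorted(previous - devices):
            print("%s disappeared since the last poll" % gone)
        for dev in sorted(devices):
            print("%s: %s" % (dev, smart_verdict(dev)))
        previous = devices
        time.sleep(600)  # poll every 10 minutes
[/CODE]

You'll generally need to run it as root so smartctl can query the drives, and piping the output to a file gives you a timeline of which devices dropped when.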

The lack of backplanes and the sheer number of cables meant that checking one connection without touching / moving / disturbing at least one other neighbouring connection was nearly impossible. There's no amount of cable management wizardry that could prevent that. A true troubleshooting nightmare.

Want to guess how many times I've had a drive drop out of the exact same RAID6 arrays since I moved them into the two MD1000s?
 

matt52

New Member
Sep 24, 2017
Yes, yes, MD1000 über alles. We get it. And yes, I've owned several IcyBox units over the years. Gross airflow is not the problem; localized airflow is the issue. Google might run their air temps hot, but their airflow rate is damn serious and they engineer the enclosures to flow air where it needs to go. The SM847, for instance, moves a lot of air, sure, but it moves air badly, and the drives get stinking hot and fail if you don't keep the inlet temps down.
 

raylangivens

Member
Nov 22, 2016
Don't put words in my mouth. Makes you look petty.
I never said or implied that MD1000s are the non plus ultra solution for everyone's storage demands.
I just provided reasoning as to why they are, for now and probably the next few years, the best solution for my storage demands.

I've had plenty of IcyBox (xxxSSK) units in my customers' servers over the past 10 years. Never did a disk get "stinking hot" in them, even with 40°C inlet temp. Same goes, again, for the Zalman drive cages.

I'd rather be able to hot-swap risk- and hassle-free than have 3°C lower drive temps. But those are just my priorities; yours may be different, and the next guy's may even be somewhere in between.