Which cases offer the best air circulation for 9 or more internal 3.5" hard drives?


NeverDie

Active Member
Jan 28, 2015
307
27
28
USA
I received the final three WD Green 3TB drives late yesterday. I installed them last night and did the first three steps of the FreeNAS hard drive burn-in overnight. I'm presently running badblocks on all 9 drives in parallel, and about 1 hour into it (about another 6 hours to go), it's becoming obvious that the drives are heating up less under badblocks than they were yesterday under smartctl's longest self-test: smartctl -t long /dev/adaX
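
In case it helps anyone reading along, this is roughly what I've been running (device names are just my layout; adjust to taste):

for d in ada0 ada1 ada2 ada3 ada4 ada5 ada6 ada7 ada8; do
    smartctl -t long /dev/$d      # queue the drive's long SMART self-test
done
badblocks -ws -b 4096 /dev/ada0   # destructive write/read pass; one of these per drive, each in its own session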

What else can I do that would be closer to a worst-case burdening of the drives? Or was "smartctl -t long" already the worst case scenario? I want to test my foundation before building on it, if you know what I mean.
 
Last edited:

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
1,394
511
113
badblocks basically runs a (mostly) sequential write/read over the drive, which means there's very little movement of the actuator. If you want to give them a maximum workload, try formatting a drive and running an fio randrw test across a 2.8TB test file (probably best to limit the run time to a couple of hours though, else the drive will have died of old age before the test finishes ;)) - that should give you a reasonable expectation of the worst the drive will ever run into. I've never run fio on BSD before but I know it's available.
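
Something along these lines is what I had in mind - the path, size and ioengine are just placeholders since I've not done it under BSD:

fio --name=worstcase --filename=/mnt/testdrive/fio.test --size=2700G \
    --rw=randrw --rwmixread=50 --bs=4k --direct=1 \
    --ioengine=posixaio --runtime=7200 --time_based

Random 4k reads and writes spread over the whole platter keep the actuator thrashing constantly, which is about as hard a life as the drive will ever see.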

Given you're using WD greens I assume you've remembered to apply idle3ctl -d on them beforehand?
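
For reference, what I do on my Linux boxes (sdX is a placeholder, and as far as I recall the drive needs a power cycle afterwards for the change to stick):

idle3ctl -g /dev/sdX   # show the current idle3 (head-parking) timer
idle3ctl -d /dev/sdX   # disable it entirely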
 
  • Like
Reactions: NeverDie

NeverDie

Active Member
Jan 28, 2015
307
27
28
USA
Given you're using WD greens I assume you've remembered to apply idle3ctl -d on them beforehand?
Nope. It hadn't come up in my reading, and this is the first time I've ever even heard of it. Now that you've mentioned it, though, I just now checked the idle count on each of the nine drives. All but one were in the double digits; the odd one out was over 600. Yesterday, when I ran the "smartctl -t long /dev/adaX" test, I was puzzled by the significant difference in completion times between the drives, so maybe this has something to do with it.

Another odd thing about the drive with the high cycle count is that the number of hours it claims to have been powered on is 10 hours longer than any of the other drives. Come to think of it, I did have one of them hooked up to a motherboard as a test drive when I was first experimenting with FreeNAS, so that must have been the one. Somehow the extra load cycles were a byproduct of that. Therefore, I'm guessing FreeNAS must have been waking it up quite often (about once every minute it would seem), even though the vast majority of the time it wasn't actually in use.
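
In case anyone wants to check their own drives, the relevant SMART attributes can be pulled with something like this (the ada? glob covers my nine drives):

for d in /dev/ada?; do echo "== $d =="; smartctl -A $d | egrep 'Load_Cycle_Count|Power_On_Hours'; done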
 
Last edited:

NeverDie

Active Member
Jan 28, 2015
307
27
28
USA
After a little re-configuring this is what I've settled on for the Nanoxia Deep Silence 6 enclosure:
(attached photo: nas1.jpg)
The photo shows nine drives with two 140mm fans on the right, and two 140mm fans on the left, arranged in a push-pull configuration.

At first I thought the two fans on the right were intended to blow through the drives, but the drives are packed so close together that I don't think much of that is happening. Also, if that were the design intent, I think Nanoxia would (or should) have included some kind of shroud on the two right-hand fans to better direct the airflow. As it is, I think the airflow mostly hits the wall of drives and then flows around it, which implies it needs to make at least two abrupt 90-degree turns. I do wonder about the wisdom of this airflow design compared to the Twelve Hundred case, where the airflow appears to move in a comparatively straight path, would seem to contact a greater surface area of each drive, and doesn't pick up heat from tightly packed drives above and below.

All 9 drives are currently running badblocks, which is consuming 120 watts (versus 75 at idle). With the fans turned up to high (up to 1100 RPM), the hottest drive registers 32C with the case closed. With the case open and the left-hand fans turned off, it was about 8C higher than with the left fans turned on.
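
For anyone wanting to watch the drive temperatures the same way, SMART reports them too; something along these lines will print the current reading for each drive (column 10 is the raw value in smartctl's attribute table):

for d in /dev/ada?; do printf '%s: ' $d; smartctl -A $d | awk '/Temperature_Celsius/{print $10}'; done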
 
Last edited:
  • Like
Reactions: noleman1010

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,625
2,043
113
Awesome. So technically you could ramp up to 3K RPM fans if needed :D
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
1,394
511
113
Nope. It hadn't come up in my reading, and this is the first time I've ever even heard of it. Now that you've mentioned it, though, I just now checked the idle count on each of the nine drives.
Ah well hopefully you've caught it then - unsure if the LCC issue is as prevalent under BSD/FreeNAS as it is under linux but once the idle timer is disabled I don't think there's much in the way of a functional difference between the WD greens and reds. I've certainly had no problems running greens in my mdadm RAID arrays for the last five years (indeed I still have many 5yr old 2TB greens in active use) but from reading the internet you'd think that putting a green in the same building as a RAID array would bring about armageddon.
 
  • Like
Reactions: NeverDie

NeverDie

Active Member
Jan 28, 2015
307
27
28
USA
Ah well hopefully you've caught it then - unsure if the LCC issue is as prevalent under BSD/FreeNAS as it is under linux but once the idle timer is disabled I don't think there's much in the way of a functional difference between the WD greens and reds. I've certainly had no problems running greens in my mdadm RAID arrays for the last five years (indeed I still have many 5yr old 2TB greens in active use) but from reading the internet you'd think that putting a green in the same building as a RAID array would bring about armageddon.
Yes, I'm very glad you brought it up, because I was oblivious to it. Thank you! Apparently Reds also have a timer value, but it's longer than the 8 second default that the Greens use. This posting makes a case for setting the timer to 300 seconds rather than completely disabling it. Either way, I'm curious why FreeNAS is touching a drive an average of about once every minute for seemingly no reason. Is it just keeping tabs that it hasn't died? Or is it part of its setup ritual? It would be mildly ironic if the very act of checking so often contributes to a drive's premature death.
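
If I really want to see what's touching the drive, I suppose something like gstat on the FreeNAS box would show the per-disk activity as it happens (flags from memory, so worth double-checking):

gstat -a -I 5s   # only list devices showing activity, refreshing every 5 seconds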
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
1,394
511
113
The LCC issue on reds I'm rather doubtful about; I've never had a single red with the idle timer activated/visible, I think what might have happened is that some early revs of the 4TB came with a firmware which got some hands a-waving - I've never seen it myself.

Spindown's never been an issue for me simply because I never enable it; I use few enough drives at home that the power-saving is negligible (~4W per drive for my 6TB's) and personal anecdata tells me it's a net loss for reliability and I've got enough crons and whatnot running on my main arrays that if I configured head parking and/or spindown on a 1hr idle basis I'd only get about 3-4hrs a day of power savings out of them.

ZFS I'm not familiar with (don't like it due to its inflexibility and patiently waiting for btrfs to become feasible) so it's possible it has its housekeeping routines but I think it's just as likely that FreeNAS is doing some maintenance too. Anyway, glad you found the info useful and hopefully your greens will be as reliable as minSDG4$%3rgff#++++NO CARRIER
 
  • Like
Reactions: NeverDie

NeverDie

Active Member
Jan 28, 2015
307
27
28
USA
The LCC issue on reds I'm rather doubtful about; I've never had a single red with the idle timer activated/visible, I think what might have happened is that some early revs of the 4TB came with a firmware which got some hands a-waving - I've never seen it myself.

Spindown's never been an issue for me simply because I never enable it; I use few enough drives at home that the power-saving is negligible (~4W per drive for my 6TB's) and personal anecdata tells me it's a net loss for reliability and I've got enough crons and whatnot running on my main arrays that if I configured head parking and/or spindown on a 1hr idle basis I'd only get about 3-4hrs a day of power savings out of them.

ZFS I'm not familiar with (don't like it due to its inflexibility and patiently waiting for btrfs to become feasible) so it's possible it has its housekeeping routines but I think it's just as likely that FreeNAS is doing some maintenance too. Anyway, glad you found the info useful and hopefully your greens will be as reliable as minSDG4$%3rgff#++++NO CARRIER
Glad you mentioned that, because now I'll simply disable it rather than set the timers to 300.

So, which FS are you currently using? I'm surprised ZFS never evolved to be more flexible. It certainly had a long enough head-start that it should have. After BTRFS gets triple-parity working, I'll be migrating to BTRFS.
 

NeverDie

Active Member
Jan 28, 2015
307
27
28
USA
When it comes to airflow, the fewer cables getting in the way, the better. So, I'm ordering a refurbished fully modular power supply, the Corsair RM450:


to get some useless cables out of the way. To that end, I'm hoping to use a couple of SATA power extenders (StarTech PYO4SATA 1.31 ft 4x SATA Power Splitter Adapter Cable - Newegg.com ):


to cover all 9 drives with just one modular plug-in to the power supply. I'm not sure how well the wire will handle that. Anyone happen to know? If it doesn't test out well, then in a future build I may try to make my own custom cabling for the SATA power connections. Has anyone here done that, and if so, any recommendations as to the best connectors to use? Especially if there are any that would accept heavier-gauge wire, I'd be really interested to know.

With the HX1050 I noticed a lot of heat going into the case, because its fan rarely turned on. So a PSU that runs fanless most of the time maybe isn't a great idea in this case. This may prove to be an issue with the RM450 as well. If worse comes to worst, I may use a couple of pico PSUs to power it all, to keep most of the PSU heat outside of the case, but I'll try the RM450 first. Or, if someone wants to nominate a better power supply with an extremely quiet fan that runs continuously, I'd be interested to hear suggestions.
 
Last edited:

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
1,394
511
113
So, which FS are you currently using? I'm surprised ZFS never evolved to be more flexible. It certainly had a long enough head-start that it should have. After BTRFS gets triple-parity working, I'll be migrating to BTRFS.
Appallingly stone-aged I know, but for the most part I'm all mdadm RAID10/6 with LVM on top of that and ext4 on top of that. btrfs doesn't yet have the RAID and volume management I need (as well as still having some quirky failure modes); checksummed data is nice to have but I wasn't willing to throw the baby out. I'm still running an array that was initially built with 120GB drives back in 2003.
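
For the curious, the layering is nothing more exotic than this sort of thing (device names and sizes made up for the sake of example):

mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]   # RAID6 across six drives
pvcreate /dev/md0                                                 # LVM physical volume on the array
vgcreate vg_data /dev/md0
lvcreate -L 4T -n lv_media vg_data                                # carve out a logical volume
mkfs.ext4 /dev/vg_data/lv_media                                   # ext4 on top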
 

NeverDie

Active Member
Jan 28, 2015
307
27
28
USA
Appallingly stone-aged I know, but for the most part I'm all mdadm RAID10/6 with LVM on top of that and ext4 on top of that. btrfs doesn't yet have the RAID and volume management I need (as well as still having some quirky failure modes); checksummed data is nice to have but I wasn't willing to throw the baby out. I'm still running an array that was initially built with 120GB drives back in 2003.
Did you use anything to safeguard against bitrot? I used redundant copies during that epoch, and now I've got to sort it all out after the fact.
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
1,394
511
113
Did you use anything to safeguard against bitrot?
Sort-of-not-really. All my rsync copies, since they're server-to-server, use the checksum option at both ends of the chain, so the backup scripts notify me if they think a file's hash has changed. Similarly, I've got a cron that keeps MD5 hashes of all the files I consider "important", so I've got some idea whether it's the original or one of the copies that thinks it's changed. So far I've not seen any discrepancies that I could pin down to bit-rot, so my back-of-a-fag-packet diagnosis is that it's not that big a problem for datasets as small as mine.
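
The moving parts are no fancier than this sort of thing (paths obviously made up):

rsync -a --checksum --itemize-changes /srv/data/ backuphost:/srv/data/        # compare by hash rather than size/mtime
md5sum -c /var/lib/hashes/important.md5 | grep -v ': OK$'                     # flag anything that no longer matches the previous run's list
find /srv/data/important -type f -print0 | xargs -0 md5sum > /var/lib/hashes/important.md5   # regenerate the list for next time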

Been itching to try out ext4 metadata checksums for yonks, but the option never really seemed to make it off the starting blocks and it still hasn't been rolled into mainline e2fsprogs.
 

NeverDie

Active Member
Jan 28, 2015
307
27
28
USA
Badblocks finally finished its four passes of burn-in testing, so it was a good breakpoint for me to add two more 140mm Nanoxia fans on the case ceiling. There are now three pumping air through the two chimneys. I also added a 120mm fan to hover over the hot heatsinks, using the Zalman arm-brace cited above. I've buttoned the whole thing up and am now running the final prescribed burn-in test (smartctl -t long /dev/adaX on all nine drives in parallel), which in the past is what produced the highest drive temperatures. I should know within an hour or two what the equilibrium drive temperatures are. An additional refinement, which I may or may not do, would be to block the grill on the rear of the case so as to force more airflow over the hard drives. Because those grills lack dust filters, I should probably do something about them regardless.
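
Once the long tests wrap up, I'll read back the self-test log and the usual trouble attributes on each drive, along these lines:

for d in /dev/ada?; do
    echo "== $d =="
    smartctl -l selftest $d   # self-test history; I want to see "Completed without error"
    smartctl -A $d | egrep 'Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable'
done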

According to Tom's Hardware, the following airflow pattern is correct:


while the following two are incorrect:

and

Hence, aside from the open back grill, I'm starting to converge on the air flow pattern that Tom's Hardware recommends.

So far, one hour into the test, it does seem to be working: maximum drive temperature is 31C, albeit with all fans running at maximum RPM. Total wattage being consumed, including the 8 case fans and power supply, is 98 watts.
 
Last edited:

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,625
2,043
113
Kind of odd that not one drawing has the CPU cooler blowing the hot air out the back, with the rear case fan sucking the hot air out too.
I.e.: #1 drawing, but blowing it out the back of the case with the "flow" of the CPU fan, in addition to a rear case fan sucking it out.

The CPU HSF is conveniently rotated and then it says WRONG; well, no kidding.
 
  • Like
Reactions: NeverDie

NeverDie

Active Member
Jan 28, 2015
307
27
28
USA
Kind of odd that not one drawing has the CPU cooler blowing the hot air out the back, with the rear case fan sucking the hot air out too.
I.e.: #1 drawing, but blowing it out the back of the case with the "flow" of the CPU fan, in addition to a rear case fan sucking it out.

The CPU HSF is conveniently rotated and then it says WRONG; well, no kidding.
Good catch. I was looking at the intake and exhaust port locations and didn't even notice their CPU HSF was directional, probably because it's different than Intel's stock radial HSF that came with my E3.

After my last post I did seal the rear grill with blue-tape, as an experiment, and it has made no difference in drive temperature. However, it should reduce dust intake over time. I wonder what alternatives there are that don't look like such an obvious, ugly kludge.

Now that you've pointed out the correct meaning of the Tom's Hardware diagrams, I may test exhausting to the rear with the chimneys closed. On an Antec P180 case in a previous build, that seemed to be a quieter configuration; the fan noise becomes less audible. I'll first try dialing back the fan speed on the chimney fans, though. Three fans at low RPM (the P180 had only one chimney fan) may prove quieter than one rear-exhaust fan of the same diameter at a higher RPM.
 
Last edited:

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,625
2,043
113
I got a bunch of USB heat probes and want to write something for temp monitoring, just to see what difference minor changes make over the course of a day of use / heat-soak, not only in the area of the PC but in the room in general. I think having the top open, even with blowing out the back, may be "better", but by how much I'm not sure.

I use that configuration I mentioned with a H60 Corsair AIO on the REAR blowing out of the case, with intake fans in the front and bottom on an Antec P182 :) and NOCTUA PWM fans. It runs the OC'd L5639 nicely, then again I never have taxed the system :)

Things I make sure to do when using a # of fans on a "tower" is:
- Cut-Out fan "guards" or shrouds that are "built in" from the case manufacturer if they are larger/obstruct more than the basic stainless guard.
-- Optionally open up 120mm to 140mm if there's room, more air less noise
- Filters. Use them, clean them monthly if you can.
- Fan Guards - thin, stainless steel... worth it if you have tons of drives you're messing with and want to keep your skin on.
- Never sit it on tall carpet. Put down some 3/4 plywood or you could even go with 2 pieces of tile or ceramic, dress it up with an office rectangular chair mat even ;)

For patching holes you can get some very thin Lexan and use a drill and mount some of it, if you go crazy get some 3m 5200 and make a gasket, and 'screw' the screws ;) It will hold in place, just not be as sexy. We use that stuff on Jet Skis to hold bilges in place and we're hitting waves at 30+mph constantly shaking it HARD. As with any DIY... PREP... PREP... PREP... and it should go smooth though.
 
Last edited:

NeverDie

Active Member
Jan 28, 2015
307
27
28
USA
I got a bunch of USB heat probes and want to write something for temp monitoring, just to see what difference minor changes make over the course of a day of use / heat-soak, not only in the area of the PC but in the room in general. I think having the top open, even with blowing out the back, may be "better", but by how much I'm not sure.
Would you be using the usb temperature probe measurements to automatically adjust fan speeds, and if so, by what method? For a system where the hard drives idle most of the time, such an arrangement would offer a lot of practical noise reduction (at least most of the time), because the fans would get noisy only if they absolutely had to.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,625
2,043
113
Would you be using the usb temperature probe measurements to automatically adjust fan speeds, and if so, by what method? For a system where the hard drives idle most of the time, such an arrangement would offer a lot of practical noise reduction (at least most of the time), because the fans would get noisy only if they absolutely had to.
You can do that already - just get a 4-pin PWM fan splitter. Make sure your motherboard lets you control fan speed (heat ranges, percentages, etc.). I think even SuperMicro does to some degree, though not as well as my X99 MSI. Set the "if between x - x range, fan speed 10%" tier so they're silent, and then you can do other tiers (depending on motherboard) or just set your medium then high (or just high), but beware that's a good way to go from cool to hot a lot, and heat cycles aren't the best for any electronics. If it were me and that was my goal, I'd get fans as silent as I could, then monitor them with the LV fan adapters and see if that's all you need... simple, and effective. If not AND heat is an issue, I'd get the 3000rpm PWM Noctua fans (I have them) and do a simple fan control tune in BIOS and call it good once you get within range; you can always tweak for sound / temp later if it's within spec ;) but with 3k RPM you really shouldn't have to worry about overheating either, which is a nice feature!!

Obviously you're limited to the probes/sensors used for PWM options, so I can see another option (like a fan controller) being something to check out.

I wasn't planning to use the results for any automated response but rather graphing and viewing the data visually to get an idea how heat builds over the day, and if it ever cools while I'm away/breaking.

I do have other plans for one of those "mini systems" that I can build, and then monitor a # ( 2-3 likely ) of probes that will work with a relay for the on/off exhaust vent fans for server room based on inside & outside temp differentials. I think my goal will be to do outside temp, server room temp, office temp and then write a couple super basic algorithms to open/close vents/fans based on temps via relays. I figure it may be nice to cool my office when the server room cools as long as I'm not blowing heat into the office instead of out the exhaust vents. I'm not sure how much the new raspberry pi can handle but it sounds like I can do all that with 1 device now, not 100% yet. I also have humidity sensors/relays, and temp sensor controls planned (all digital) for "setting" a temp too, as there will be a 13K BTU cooler in the room to keep it within spec, technically I could put a dehumidifier in there but I see 0 reason with our humidity on the west coast, and monitoring it closely over the last 5 years w/heat & without, etc...
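
Rough sketch of the kind of loop I'm picturing on the Pi, assuming DS18B20 1-wire probes (w1-gpio / w1-therm enabled) and a relay hanging off a GPIO pin; the sensor IDs and pin number below are made up:

#!/bin/sh
SERVER=/sys/bus/w1/devices/28-000001111111/w1_slave    # server-room probe (placeholder ID)
OUTSIDE=/sys/bus/w1/devices/28-000002222222/w1_slave   # outside probe (placeholder ID)
RELAY=17                                               # GPIO pin driving the exhaust-vent relay

echo $RELAY > /sys/class/gpio/export 2>/dev/null
echo out > /sys/class/gpio/gpio$RELAY/direction

read_temp() {             # millidegrees C from a w1_slave file
    awk -F't=' '/t=/{print $2}' "$1"
}

while true; do
    server=$(read_temp $SERVER)
    outside=$(read_temp $OUTSIDE)
    # run the exhaust vents/fans only when it's cooler outside than in the server room
    if [ "$server" -gt "$outside" ]; then
        echo 1 > /sys/class/gpio/gpio$RELAY/value
    else
        echo 0 > /sys/class/gpio/gpio$RELAY/value
    fi
    sleep 60
done

The office probe would just be a third reading and another branch in the if, and humidity would be the same pattern with a different sensor and relay.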

Too bad the server room temp & humidity isn't conducive to cigars... could move my cigar collection from a large fridge into a real "walk in" humidor.... LOL Somehow I don't think the wife will let me use her side of the new 'storage', errr server room :) (Separated by doors of course, ;))
 
Last edited:

NeverDie

Active Member
Jan 28, 2015
307
27
28
USA
You can do that already - just get a 4-pin PWM fan splitter. Make sure your motherboard lets you control fan speed (heat ranges, percentages, etc.). I think even SuperMicro does to some degree, though not as well as my X99 MSI. Set the "if between x - x range, fan speed 10%" tier so they're silent, and then you can do other tiers (depending on motherboard) or just set your medium then high (or just high), but beware that's a good way to go from cool to hot a lot, and heat cycles aren't the best for any electronics. If it were me and that was my goal, I'd get fans as silent as I could, then monitor them with the LV fan adapters and see if that's all you need... simple, and effective. If not AND heat is an issue, I'd get the 3000rpm PWM Noctua fans (I have them) and do a simple fan control tune in BIOS and call it good once you get within range; you can always tweak for sound / temp later if it's within spec ;) but with 3k RPM you really shouldn't have to worry about overheating either, which is a nice feature!!

Obviously you're limited to the probes/sensors used for PWM options, so I can see another option (like a fan controller) being something to check out.

I wasn't planning to use the results for any automated response but rather graphing and viewing the data visually to get an idea how heat builds over the day, and if it ever cools while I'm away/breaking.

I do have other plans for one of those "mini systems" that I can build, and then monitor a # ( 2-3 likely ) of probes that will work with a relay for the on/off exhaust vent fans for server room based on inside & outside temp differentials. I think my goal will be to do outside temp, server room temp, office temp and then write a couple super basic algorithms to open/close vents/fans based on temps via relays. I figure it may be nice to cool my office when the server room cools as long as I'm not blowing heat into the office instead of out the exhaust vents. I'm not sure how much the new raspberry pi can handle but it sounds like I can do all that with 1 device now, not 100% yet. I also have humidity sensors/relays, and temp sensor controls planned (all digital) for "setting" a temp too, as there will be a 13K BTU cooler in the room to keep it within spec, technically I could put a dehumidifier in there but I see 0 reason with our humidity on the west coast, and monitoring it closely over the last 5 years w/heat & without, etc...

Too bad the server room temp & humidity isn't conducive to cigars... could move my cigar collection from a large fridge into a real "walk in" humidor.... LOL Somehow I don't think the wife will let me use her side of the new 'storage', errr server room :) (Separated by doors of course, ;))
My SuperMicro motherboard, an X10SL7, seems not very smart about controlling fan speeds. First and foremost, it cares about the CPU temperature, and it seems as though it will, at best, ramp all the fans up or down based on that one temperature. Maybe I'm wrong, but that's how it seems.

The Nanoxia comes with manual fan control: two sliders, each independently controlling a set of up to four fans. So, I'm presently setting it to handle the worst case, which right now seems to be while executing "smartctl -t long /dev/adaX" on all 9 drives in parallel.