Some thoughts on Enterprise vs. RED vs. consumer drives


jcl333

Active Member
May 28, 2011
253
74
28
Hello all,

I am not exactly looking to re-re-hash an issue that has been discussed before, but I guess I am looking for some STH community experience to make me feel better. I was scolded recently by Supermicro support for considering anything other than enterprise drives in a large 24-bay chassis like the 846 series. When I pressed them on it, the main concern was vibration if you have 24 of these suckers spinning away in there - and no, they didn't think much of the RED drives; it was RE or nothing.

So, I could assume that they are just toeing the party line, but it was enough to give me pause.

For those who may not have seen it, I think a really good read on this issue is here:
http://storagemojo.com/2007/02/20/everything-you-know-about-disks-is-wrong/
Especially things like the responses from NetApp and Intel, and the research done by Google.

I think my take-aways are as follows:
* I am still skeptical and nervous about whether it is really OK to put 24 consumer drives together in one big chassis
* The fact that consumer drives spin more slowly may mitigate the issue somewhat
* I don't know if the build-quality differences between the Supermicro and Norco chassis make any difference
* I am wondering whether stringing together a bunch of <8-drive JBOD enclosures would address it, or whether having them all on the same shelf still counts
* With consumer SATA drives, always use RAID6 or equivalent, and even consider having a hot spare (see the rough sketch just after this list)
* I think the WD RED drives can help, but won't solve the issue beyond a certain number of drives (I think they are intended for up to 5 bays)
* The TLER issue basically means no WD consumer drives unless you are using ZFS/ReFS for data integrity (so it doesn't *count* as RAID)
* Seems like Hitachi (or equivalent) is the way you have to go if you are going to use consumer drives
* I am certainly going to have some point-in-time backup solution, as everyone should
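
To put a rough number on the "RAID6 or equivalent" point above, here is a minimal sketch - assuming the usual spec-sheet figure of 1 unrecoverable read error (URE) per 10^14 bits for consumer drives and a hypothetical 8x 3TB array; this is illustrative only, real-world error rates vary:

import math

# Chance of hitting at least one URE while re-reading the surviving drives
# during a rebuild. Uses the spec-sheet rate of 1 error per 1e14 bits quoted
# for consumer drives - an illustration, not a real-world measurement.
def p_ure_during_rebuild(surviving_drives, drive_tb, ure_per_bits=1e14):
    bits_read = surviving_drives * drive_tb * 1e12 * 8   # total bits re-read
    return 1 - math.exp(-bits_read / ure_per_bits)       # Poisson approximation

# Hypothetical example: 8x 3TB consumer drives, one failed, single-parity rebuild
print(f"RAID5-style rebuild, 7 surviving 3TB drives: "
      f"{p_ure_during_rebuild(7, 3):.0%} chance of at least one URE")
# With double parity (RAID6/raidz2), a single URE during a rebuild is still
# recoverable, which is the reasoning behind the rule of thumb above.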

I am not completely convinced that the only difference between, say, the RE, RED, and consumer drives from WD is just firmware. It seems to me that they are being truthful about the RE drives containing additional sensors, purpose-built firmware, dual-core CPUs, and so on - read the Intel response in the link above.

That being said, it may be true that these differences don't make the drive last longer than consumer drives - just that performance is better and the way it responds to failures is better suited to enterprise use cases. For that matter, I think the RED drives are good because at least they have RAID-friendly firmware in an otherwise consumer drive, and they consume less than half the power of an RE drive, which really adds up.

So, I would be interested in some opinions:
* Patrick, I would be interested in your current-day thinking, especially with all your experience with things like the big WHS

* 24-bay Norco and Supermicro owners - have you given this issue any thought? Whether or not vibration is an issue, how would you know? Has anyone seen any real-world problems with this, either in reduced performance, poor failure behavior, or reduced disk lifespan? I realize that a ton of people on this forum and elsewhere can chime in with long histories of "no problems", even with multiple 24-bay Norcos in the same rack fully loaded with consumer drives.


So, I have some thoughts:
* If I go with a 24-bay chassis, I could buy a few sets of drives - say 8 expensive RE drives, 8 WD RED drives, and 8 Hitachi consumer drives - and try it all out for myself. Although that doesn't test against using them in smaller enclosures.
* I could put my "most important" data on the RE drives, so Supermicro and the drive manufacturer can't have anything to say about it, and only take the "risk", if there is one, on the other drives holding replaceable data like disk ISOs.
* If I simply set up multiple JBOD enclosures with WD RED drives, that should also be within the recommended usage scenarios; it would just be a bit more expensive due to cabling and such. It would also mitigate the vibration/noise issue, though it could cost more power-wise - although I am thinking there would be nothing really stopping you from running power from one power supply to multiple JBOD enclosures.

In any case, I would really like to hear how people deal with this - whether you rationalized it away or addressed it with a real solution.

Thanks

-JCL
 

sotech

Member
Jul 13, 2011
305
1
18
Australia
We typically see 3-5 failures/year with WD 2TB greens in a full 24-bay Norco chassis, measured over a handful of units over a 2-year period. It's a slightly higher rate drive-for-drive vs. the green drives we have put into consumer desktop systems, though with such a small sample size (<100 systems that we actively keep tabs on) you couldn't reasonably read anything significant into that.
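
As a very rough back-of-envelope annualized failure rate - and this assumes, purely hypothetically, that "a handful of units" means something like three fully loaded chassis:

# Back-of-envelope annualized failure rate (AFR) for the figures above.
# The fleet size is an assumption, not a reported number: ~3 full 24-bay chassis.
chassis, bays = 3, 24
fleet = chassis * bays                      # 72 drives under observation
for failures_per_year in (3, 5):
    afr = failures_per_year / fleet
    print(f"{failures_per_year} failures/year over {fleet} drives -> AFR ~ {afr:.1%}")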

There is definitely significant vibration in the chassis when it's full - I keep a little weight on top of the one we have here to stop the thin piece of trim at the top from vibrating, despite it being screwed down at three points. All the drives perform just fine, all storage systems are ZFS-based.

Keep in mind that RE drives may fail on day 1 just like any other - there's no guarantees. You could have hot/cold spares aplenty if you bought greens or reds vs. the cost of the RE drives, which is another factor - being able to plug a drive in to resilver an array quickly may save the array if another failure is imminent. RE drives generally run hotter, too, and consume more power. Having a very long warranty and enterprise-grade vibration reduction etc. etc. has got to be worth something, however...

It would be very interesting to have 3 sets of 8 drives - even just to be able to measure the performance and heat generation as well as looking at the longevity... again, though, hard to draw much from a small sample size.

We are slowly migrating our own drives across to Reds when the Greens die - as much for the longer warranty as anything else. I can easily believe that the amount of vibration would have -some- impact on the drives - it doesn't seem to be significant in our case, however, or at least not yet.

Edit: Also, we haven't had any dramas using Red drives in arrays larger than 5 - I'm not sure whether that's a number they've tested them to or whether it's just marketing to prevent them from cannibalizing too many RE sales. No impact on performance that we've measured.
 

josiahrulez

New Member
Dec 10, 2012
6
0
0
My server has been left on 24/7 for about the last 2-3 years; I have mostly Samsung and Hitachi drives, and a few Seagates.

The 1.5TBs show something like 300-400 days of uptime. I haven't had any Samsungs fail. I own two 1.5TB Seagates (one has failed 3 times - once with bad sectors, twice the drive bricked itself), and I haven't had any issues with the Hitachis or the newer 3TB Seagates (although my friends have had issues with the new Seagates).

If you choose a reliable brand with a reasonable warranty period, you should be OK. With the WD Reds & Greens, it's all a marketing scheme - they're the same drive with different firmware. You should be OK running consumer HDDs.
 

mobilenvidia

Moderator
Sep 25, 2011
1,956
212
63
New Zealand
My 2TB WD Green (original version) lasted 2.5 years in a PVR.
SMART reported it was powered on for 1.2 years.
I think what killed the drive was on/off cycles, as we turn the PVR off before bed.

Luckily the drive still worked well enough to get the data off, but in a RAID array it would need to be replaced.
Buying a replacement drive and rebuilding the array could take some time, and since (as in my case) the original 2TB Green is no longer sold, I would need to replace it with a newer model - I'm not sure how that would affect array performance.

1.2 years x 365 days x 24 hours ≈ 10,500 power-on hours - I would think quite a bit below the drive's rated MTBF.
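
For perspective, MTBF is a fleet statistic rather than a promised lifetime for any single drive; here is a rough sketch assuming a (hypothetical) 1,000,000-hour MTBF rating:

# MTBF is a population statistic, not a per-drive lifetime guarantee.
# The 1,000,000-hour figure below is an assumed, typical rating, not a quoted spec.
mtbf_hours = 1_000_000
hours_per_year = 365 * 24                    # 8,760 hours
implied_afr = hours_per_year / mtbf_hours    # fraction of a fleet expected to fail per year
my_power_on_hours = 1.2 * hours_per_year     # ~10,500 hours, as above
print(f"Implied annualized failure rate: {implied_afr:.1%}")
print(f"This drive's power-on time: {my_power_on_hours:,.0f} hours")
# A single drive dying early doesn't contradict the spec - it just means this one
# was among the unlucky ~1% in a given year.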

Moral of the story: with consumer drives, you take your chances with your array.
Is your data worth the savings you made on the purchase?
 

Mike

Member
May 29, 2012
482
16
18
EU
Are you thinking of getting an 846 with an expander? If so, I had bad luck with IntelliPark and the LSI expander found on most SM chassis.
 

jcl333

Active Member
May 28, 2011
253
74
28
sotech said: "We typically see 3-5 failures/year with WD 2TB greens in a full 24-bay Norco chassis... we haven't had any dramas using Red drives in arrays larger than 5..."

Thanks for the info, a couple of questions:
* Do you have all 24 drives in one massive array, or are they split into a couple of arrays?
* What parity are you using - Z2, probably?
* What did you go with for ZFS? Nexenta? OpenIndiana?

I understand your point that RE drives could fail on day 1 - your only guarantee is the 5-year warranty ;-) And I agree: the cost is so high that you could buy multiple consumer/RED drives for the same price and just raise your parity level, and the total power consumption would be similar because the RE drives draw more than twice the power.
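
As a rough idea of what that power difference adds up to over a 24-bay build - the wattages and electricity price below are my own assumptions, not manufacturer specs:

# Rough annual power-cost difference between RE and Red drives in a 24-bay build.
# Wattages and electricity price are assumptions - plug in your own numbers.
watts_red, watts_re = 4.5, 9.5        # approximate average draw per drive
price_per_kwh = 0.15                  # USD per kWh - adjust for your utility
n_drives = 24
hours_per_year = 365 * 24

def annual_cost(watts_per_drive):
    kwh = watts_per_drive * n_drives * hours_per_year / 1000
    return kwh * price_per_kwh

print(f"24x RE:  ${annual_cost(watts_re):.0f}/year")
print(f"24x Red: ${annual_cost(watts_red):.0f}/year")
print(f"Difference: ${annual_cost(watts_re) - annual_cost(watts_red):.0f}/year")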

But you are saying the long warranty and vibration reduction ARE worth something, you think? I think so too, but we are talking more than double the price in most cases. I was thinking that if I DID go that way, I would spring for the 4TB disks - then at least I would need fewer of them, for about the same power consumption.

I do plan to have at least two arrays, but mostly to experiment with ZFS until I am comfortable with it; that is why I am interested in your implementation. Is your setup for work or home?

So, you like the WD REDs? The RED drives are supposed to have some vibration protection, just not as much as the RE's.

What is the largest array of RED drives you have?

-JCL
 

jcl333

Active Member
May 28, 2011
253
74
28
josiahrulez said: "My server is left on 24/7 (for like the last 2-3 years)... You should be OK running consumer HDDs."

How many drives do you have in the same enclosure / array?
 

jcl333

Active Member
May 28, 2011
253
74
28
mobilenvidia said: "My 2TB WD Green (original version) lasted 2.5 years in a PVR... I think what killed the drive was on/off cycles..."

Yes, I am concerned about this as well. All of my current drives are in my desktop, and I do power-cycle that daily - so a server that stays on 24/7 would be one advantage right there.

I am curious: your sig lists three RAID controllers and one HBA, but you only have 10 drives - how do you have your arrays set up?

-JCL
 

jcl333

Active Member
May 28, 2011
253
74
28
Mike said: "Are you thinking of getting an 846 with an expander? If so, I had bad luck with IntelliPark and the LSI expander found on most SM chassis."

No, I don't think those expanders offer enough flexibility for my use case. I was going to go with either the TQ version (individual SATA connectors for each drive) or the direct-attach version, which has six 8087 connectors, one for each set of 4 drives. Then if I use an expander it will be one of my choice, and I can choose which drives to use with it as well.

But thanks for the warning.

-JCL
 

sotech

Member
Jul 13, 2011
305
1
18
Australia
Four pools of 6 drives each - we don't need more than about 2x gigabit performance and each array is more than capable of that, so there's no need to have more than one vdev per pool for performance reasons... and this way if one vdev does go down through three drives failing at once we don't have the entire array go under. While we have plenty of backups both on and offsite I don't particularly want to have to spend the time to replenish the entire box's worth of data, one zpool is inconvenient enough...

raidz2. Drives are arranged north-south in the chassis, and the three SAS2008 controllers each handle two rows, so that if a controller goes down the vdevs/pools stay online.
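
A little sketch of that layout as I understand it (my assumptions: a 6-row x 4-column backplane, one 6-drive raidz2 vdev per column, and each 8-port SAS2008 wired to two rows):

# Check that losing any one HBA takes out at most 2 drives per raidz2 vdev.
# Layout assumptions (mine): 6 rows x 4 columns, one vdev per column ("north-south"),
# each SAS2008 controller cabled to two adjacent rows (8 drives per controller).
ROWS, COLS = 6, 4
hba_of_row = {row: row // 2 for row in range(ROWS)}   # rows 0-1 -> HBA0, 2-3 -> HBA1, 4-5 -> HBA2

for col in range(COLS):                               # one raidz2 vdev per column
    drives_per_hba = {}
    for row in range(ROWS):
        hba = hba_of_row[row]
        drives_per_hba[hba] = drives_per_hba.get(hba, 0) + 1
    worst_case_loss = max(drives_per_hba.values())    # drives this vdev loses if one HBA dies
    status = "stays online (degraded)" if worst_case_loss <= 2 else "goes offline"
    print(f"vdev {col}: loses at most {worst_case_loss} drives on an HBA failure -> {status}")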

We started off with OpenIndiana but moved to Ubuntu+ZFS as everyone here is much more familiar with Ubuntu... Solaris/OI was giving me a headache troubleshooting odd networking issues and the like and things like setting multiple static IPs in Ubuntu is 60 seconds of work, whereas in OI... well, I never did figure that one out. OI+ZFS+napp-it is a bloody fantastic combination, though, and is well worth a look in even if only for Gea's excellent work... the GUI makes learning and managing ZFS dead simple and is a pretty big drawcard to OI.

Our own setup here is for both work and home - the business premises is attached to the house, so this server in particular stores both bulk business data and all of our movies/music/photos.

I'm pretty happy with replacing our Greens with Reds - they're power efficient, perform better and having more vibration reduction can't be a bad thing. The longer warranty in itself is enough to sway me towards them - it's much nicer paying $25 (cost of RMA from here) for a new drive rather than $100, and while the price of 2TB drives is going to drop over the next 3 years the calculations still look pretty good to me at this point in time.

We've been mixing the Reds in whenever a Green fails - I think the most we have in any one array is 2 or 3 at this stage. I wasn't particularly fussed on building an entire vdev out of them to start with in case they had teething problems not long after release which caused a number to fail, thus taking out my vdev...

 

mobilenvidia

Moderator
Sep 25, 2011
1,956
212
63
New Zealand
I have 4x controllers, currently

LSI9261-8i has 6x 2TB 7k2000's and 2x 60GB SSD's in RAID0 (CacheCade)
IBM M1015 in IR mode has 2x 60GB SSD's (Win8 boot), 320GB Spindle (win7 boot), 750GB spindle for backup

I have numerous other drives I chuck in for testing.

The IBM M5015 and M5016 I test various things with.
I should look at selling these.

I have just bought a 4TB Western Digital RE WD4000FYYZ drive.
It's the SAS version - I splashed out on a replacement for the 2TB WDC Green; probably as good as it gets.
1.2 million hours MTBF and a 5-year warranty - but why has WDC reduced the MTBF for this drive?
I'm looking forward to taking this puppy for a spin.
I haven't dealt with SCSI/SAS for some years.
I'll also revive my HTPC build, where the drive will end up; I'll also need to get another M1015 or suchlike to run it.
 

cactus

Moderator
Jan 25, 2011
830
75
28
CA
I have experience with a lot of drives, including ES.2s and RE3s. Bottom line: no drive brand is immune to failures. Enterprise drives have their place, but I don't see the need when you aren't dealing with hardware RAID/expanders or a client/boss. That said, I just ordered six 3TB SAS Ultrastars for a build I am doing. If it were my own system, I would go with a higher quantity of consumer drives and expect to replace them in two years.
 

josiahrulez

New Member
Dec 10, 2012
6
0
0
jcl333 said: "How many drives do you have in the same enclosure / array?"

I have my drives in a Norco 4224: 12x Samsung 2TB, 6x Seagate 3TB, and 6x Hitachi 3TB.

Before I upgraded to the 3TB drives, I had these in my Norco (now running in a second server): 7x Samsung 1.5TB and 2x Seagate 1.5TB (one has died 3 times, and the other is riddled with bad sectors).

My server's currently in bits, but when I reassemble it I can give you some SMART data on the HDDs (I'm pretty sure they're all healthy except the old 1.5TB Seagates).
 

mobilenvidia

Moderator
Sep 25, 2011
1,956
212
63
New Zealand
Comparing WDC drive ranges:

SAS (RE) drives = ultimate in data retention/security as the drive has ECC and adjustable sector size, highest throughput, TLER, anti-vibration technology, highest-quality parts. $404 @ Amazon = $101/TB

RED drives = low noise, low power, high load/unload cycle rating. $154 @ Amazon = $51/TB

BLACK drives = performance-oriented desktop drives. $325 @ Amazon = $81.25/TB

GREEN drives = cheapest. $139 @ Amazon = $46/TB


SAS drives = 5-year warranty, 600,000 load/unload cycles, <1 in 10^16 non-recoverable read errors per bits read, 182MB/s (4TB)
RED drives = 3-year warranty, 600,000 load/unload cycles, <1 in 10^14 non-recoverable read errors per bits read, 145MB/s (3TB)
BLACK drives = 5-year warranty, 300,000 load/unload cycles, <1 in 10^14 non-recoverable read errors per bits read, 154MB/s (4TB)
GREEN drives = 2-year warranty, 300,000 load/unload cycles, <1 in 10^14 non-recoverable read errors per bits read, 123MB/s (3TB)
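
Using the prices and warranty figures quoted above, one quick way to compare them is cost per TB per warranty year (a back-of-envelope sketch; prices and capacities are the ones listed in this post):

# Cost per TB and per TB per warranty year, using the prices/capacities listed above.
drives = {
    # name: (price_usd, capacity_tb, warranty_years)
    "SAS/RE": (404, 4, 5),
    "Red":    (154, 3, 3),
    "Black":  (325, 4, 5),
    "Green":  (139, 3, 2),
}

for name, (price, tb, warranty) in drives.items():
    per_tb = price / tb
    per_tb_year = per_tb / warranty
    print(f"{name:7s} ${per_tb:6.2f}/TB   ${per_tb_year:6.2f}/TB per warranty year")

By that (admittedly narrow) metric, the Green's short warranty actually makes it the most expensive per covered year, which fits the "you get what you pay for" argument below.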

Looking at the warranty periods is probably a good indication of how long WDC thinks the drives will last in their respective environments.
WDC can't afford to replace too many drives or it will go bust (it probably will anyway, as the spindle duopoly are slow to adjust).

Get the drive that best meets your needs and budget - you get what you pay for in the end. I'll not be putting my precious data on Green drives again.
BUT all drives will die at some point, that point could come at any time, and some drives just try to lessen the chance of it coming sooner.

Looking forward to getting my SAS baby.
 

odditory

Moderator
Dec 23, 2010
383
66
28
mobilenvidia said: "Looking at the warranty periods is probably a good indication of how long WDC thinks the drives will last... WDC can't afford to replace too many drives or it will go bust..."

That's mostly a myth :) - I mean the idea I often hear that the warranty period placed on a drive reflects how much faith the vendor does or does not have in it, and by extrapolation how "reliable" the manufacturer expects the drive to be.

Warranty periods have been trending down for a while now, and the issue has more to do with finances than anything else - i.e. how sales are booked and how much has to be kept "in escrow" to cover future warranty claims.

I put almost no faith in manufacturer claims about MTBF and error rates, specifically because it's the manufacturer that's coming up with them. There is no industry standard for what constitutes a failure, what constitutes MTBF, or how to arrive at an MTBF figure by extrapolating short-term testing. Without an independent authority or oversight body testing these devices equally, the numbers are little more than marketing propaganda masquerading as meaningful and trustworthy "lab data".

That's not to say the numbers are totally devoid of merit, only that you should take them with a grain of salt. Take the example of vehicle crash-test data and safety ratings: I think most reasonable people would consider it laughable if auto manufacturers were free to come up with their own, rather than an independent oversight arm like the NHTSA. And yet that's pretty much what HDD manufacturers are free to do.

And it's no coincidence that they all seem to arrive at these big, neat, round numbers and make a blanket claim about an entire line of drives, even though there are drastic differences within the line when it comes to rotational speed, areal density, sector size, firmware, plant of manufacture, etc. The proof is in the big rounding: they don't truly know. Which leaves only guessing and extrapolation - by engineers and product managers, then filtered through the marketing department.
 

mobilenvidia

Moderator
Sep 25, 2011
1,956
212
63
New Zealand
The world revolves around marketing.

In the end we need to make the best of the information we have at hand.
An independent study would take years to establish failure rates, and the drives would be obsolete by the time it was completed :D
 

mrkrad

Well-Known Member
Oct 13, 2012
1,244
52
48
This is why you stick to 3.5" SAS - they are basically 2.5" SAS drives with extra beef to keep the naughty stuff at bay. Seriously. 2.5" 15K SAS is super temperamental; the only thing worse is nearline.

I didn't realize until I hooked the LeftHand up to a P420 that I was getting the performance of 4-6 SSDs in RAID 10 from 8x 15K SAS DP 3.5" 450GB drives, minus the latency (but plus the reliability of drives that will go for 7-10 years solid).