Is Western Digital Having Issues at 4TB?


Jeggs101

Well-Known Member
Dec 29, 2010
1,529
241
63
Was doing some research into 3TB or 4TB drives for the Dell C6100

Seagate has a $190 4TB model out:
Amazon.com: Seagate Desktop HDD 4 TB SATA 6Gb/s NCQ 64MB Cache 3.5-Inch Internal Bare Drive ST4000DM000: Computers & Accessories

Hitachi has 4TB coolspin drives for a few dollars more:
Amazon.com: HGST Deskstar 3.5-Inch 4TB CoolSpin SATA III 6Gbps Internal Hard Drive Kit 32 MB Cache 3.5 Internal Bare or OEM Drives (0S03359) [Amazon Frustration-Free Packaging]: Computers & Accessories
Amazon.com: HGST Deskstar 3.5 Inch 4TB CoolSpin SATA III 6Gbps Internal Hard Drive Kit (0S03364) [Amazon Frustration-Free Packaging]: Computers & Accessories

Western Digital? 4TB Black or RE drives. The Black 4TB is the less expensive of the two, but it's still getting close to 2x the price of the Seagate:
Amazon.com: WD Black 4 TB Desktop Hard Drive: 3.5 Inch, 7200 RPM, SATA III, 64 MB Cache, 5 Year Warranty - WD4001FAEX: Computers & Accessories

Checked Newegg, and Amazon is a bit less expensive.

Also raising suspicion: we haven't seen 4TB Red drives yet. Makes me wonder if WD is having issues producing 4TB drives and that's why we haven't seen more of them.

Looking more like 4TB Seagate vs. 3TB Red. The 3TB may draw less power per drive, but the 4TB adds 1/3 more storage to the C6100, and using fewer drives for the same capacity means lower power consumption overall (quick drive-count math below).
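
Quick back-of-the-envelope on the drive-count side, with an assumed ~6W average per drive (an illustrative round number, not a measured spec):

# Rough sketch: drives needed for a target raw capacity, and relative power.
# The ~6W average per drive is an assumed round number for illustration only.
import math

TARGET_TB = 12
WATTS_PER_DRIVE = 6  # assumed, not a measured spec

for size_tb in (3, 4):
    n = math.ceil(TARGET_TB / size_tb)
    print(f"{size_tb}TB drives: {n} needed for {TARGET_TB}TB raw, ~{n * WATTS_PER_DRIVE}W total")
# 3TB drives: 4 needed for 12TB raw, ~24W total
# 4TB drives: 3 needed for 12TB raw, ~18W total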
 

Chuckleb

Moderator
Mar 5, 2013
1,017
331
83
Minnesota
I would skip the Seagate desktop line and do either WD Red or Seagate cloud drives. I have had some problems with the Seagate desktops at 3TB, so I have stopped using them and switched to Red drives. Though you may be dictated more by capacity needs than be able to wait for the 4TBs...



mrkrad

Well-Known Member
Oct 13, 2012
1,244
52
48
I'd seriously suggest RE4 drives; they are solid, and if you can spare ~10% more, get the SAS version. Trust me here. SAS FTW.

Seagate battles back with the Constellation ES.3 in both SATA and SAS, with FIPS 140-2 options for both (enhanced over plain SED, I guess).

I'm sorry, but look at the unrecoverable read error rate: consumer drives are rated at 1 error in 10^14 bits (roughly one error per ~12.5 TB read) versus 1 in 10^15 (roughly one per ~125 TB) for the server-oriented drives. Reading a 4TB consumer drive end to end, you stand a very real chance of hitting an unrecoverable error.
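
To make that concrete, a rough back-of-the-envelope on what those rated bit error rates imply for one full read of a 4TB drive (assuming the rated figures hold):

# Rough sketch: expected unrecoverable read errors (UREs) for one full
# end-to-end read of a drive, given the rated bit error rate.
def expected_ures(capacity_tb, bits_per_error):
    bits_read = capacity_tb * 1e12 * 8  # decimal TB -> bits
    return bits_read / bits_per_error

for label, rate in [("consumer, 1 in 10^14", 1e14), ("enterprise, 1 in 10^15", 1e15)]:
    print(f"4TB full read ({label}): ~{expected_ures(4, rate):.2f} expected UREs")
# 4TB full read (consumer, 1 in 10^14): ~0.32 expected UREs
# 4TB full read (enterprise, 1 in 10^15): ~0.03 expected UREs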

This is why the HP P420 supports RAID-1 ADM: three drives mirror as one, so you can lose two drives and keep on rocking. They know 8TB drives will be out sooner rather than later, and with enterprise SAS drives limited to 600GB at 15K and 1.2TB at 10K, you are placing huge bets on nearline RAID.

Three-drive RAID-1 comes from Itanium RAS. You also gain from reading off three drives, which at 7200 RPM x 3 could be some pretty great interleaved speed, and the write-back cache smooths out the three writes. In theory, though, I wonder whether one successful write would be good enough to acknowledge.

At the end of the day, depending on your needs, it's reliability that matters.

My opinion: use SSD acceleration like P420 SmartCache (included with SAAP 2.0), CacheCade, or MaxCache. It will help immensely with the rebuild process.

Especially with RAID-5/6: the amount of stress placed on a degraded drive set is tremendous. I had jobs that took 6 hours turn into 36-hour jobs after a failure (on a P400, which oddly bypasses the cache in XOR mode, though it can now be persuaded to cache with SAAP).

So look at it this way: your drives might have been running pretty cool at medium load, but now a drive goes poof and the remaining drives are balls-to-the-wall at 100%, working the controller like mad. Then you put a new drive in, and it has even more work to do, plus a decision about how hard to prioritize rebuild time.

4TB? Do you want it to take 1, 2, 3, or 4 days to rebuild? During this stressful time there is more work (heat/movement) and even less margin for error. It is the perfect storm for another drive to throw an error and take the whole RAID down. (Rough rebuild-time math below.)
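
Rough rebuild-time math, assuming the rebuild can sustain a given average rate with nothing else hitting the array (degraded arrays under real load are usually slower than this):

# Rough sketch: best-case single-drive rebuild time at an assumed sustained
# rate, with no competing I/O (degraded arrays under load are usually slower).
def rebuild_hours(capacity_tb, mb_per_sec):
    return capacity_tb * 1e6 / mb_per_sec / 3600

for rate in (100, 50, 25):  # assumed sustained MB/s during rebuild
    print(f"4TB at {rate} MB/s: ~{rebuild_hours(4, rate):.0f} hours")
# 4TB at 100 MB/s: ~11 hours
# 4TB at 50 MB/s: ~22 hours
# 4TB at 25 MB/s: ~44 hours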

Some newer controllers, which I might not like otherwise, have cool features I've been discovering where they consider more than just drive good/bad. If a failing drive still has good sectors and the "good" drives have a bad sector, they don't drop the entire RAID; they use what they have and keep rebuilding. This is very non-traditional. It is also why the LSI controllers can save a RAID-0 with a spare, where most controllers will error out and declare the RAID-0 dead, good night. The LSI seems able to move the questionable sectors off to a spare, in priority order, before it all goes down. Pretty slick!

The other guy (HP) went for the simpler and costlier approach of 3-drive RAID-1: pretty solid smarts, but not cheap.

SAS increases the ECC, doesn't have silly LBA limits, and is multi-targeted and dual-ported (6Gbps x 2, i.e. two connections per drive). It adds IOEDC on top of improved IOECC (more bits of recovery), and if the drive can't recover, it can tell the controller it's screwed. SATA is not fully IOEDC and usually not fully IOECC.

ZFS, LSI data protection, and encryption are ways to guard against bit rot and multi-bit errors when you don't have IOEDC-style protection from the drive.

And yes, I've had a SATA drive push bad data right through a traditional LSI RAID controller (1068/1078). Why? Because if the drive says the data is good, why question it? Oops. Ten-drive NTFS SATA RAID-5/6, total annihilation due to corruption. Fun.

I understand saving money, and ReFS, ZFS, and LSI protection or encryption can give similar results.

I myself just rock JBOD at home, because all this RAID crap is a PITA for serving the home, but it's a necessity for work machines.
 

Chuckleb

Moderator
Mar 5, 2013
1,017
331
83
Minnesota
mrkrad: How old are your oldest RE and ES drives? After about 3 years, we are starting to replace the 2TB ES SAS drives at a rate of 1-2 per week out of our ES pool (~250 drives) due to failure. On the RE SATA arrays, one of my peers in another group had to swap out a whole bunch of those as well. I had 5 of my 8x RE3s die off during a RAID init at home. It seems that the WD Green drive generation had a bit more failure than normal.

We basically run SAS on the performance systems and on new bulk storage, are using the WD Red drives with a stack of ES/RE drives on hand in case we start to have port communications problems. Sure are hoping these are good enough...

Lastly, does CacheCade help with rebuild times? I hadn't thought of that piece; we don't have CacheCade on most of our cards. Do rebuilds take 4 days for your setups? I think the longest rebuild we have had is on the order of 20 hours on 12-disk RAID-6 arrays. I'd have to check again but could pull logs to see.
 

mrkrad

Well-Known Member
Oct 13, 2012
1,244
52
48
RE3s were dark times, so yeah, I've got a stack of 500GB drives that died pretty horribly in a short time. They lasted 3 or so years, then one died, so we pulled them all. On one side of the chassis (vibration??) the drives were all near critical, with lots of P400 remapped blocks (the controller remaps blocks above and beyond what the drive itself does) and drives reaching critical SMART. The left side was perfect, the middle so-so, the right side near dead. Go figure.

The same box filled with 146GB 15K SAS 3.5" drives was all perfect, with at most 1 remap, lol.

RE4s and Constellation ES.2s were far better drives, post the Maxtor crap. So yeah, I know what you are talking about, man. Very much so.

I've had zero RE4 2TB failures, and none on Hitachi 2TB either. RAID-5 causes more work for the drives, so I tend to avoid it. Think about the head movement from the read-then-write pattern: the same kind of amplification that kills SSDs faster also adds real physical wear here.

By far the easiest way to kill Constellation/RE drives is to use power savings. They (the 7200 RPM models) are designed for server use: constant thermals (few cooldowns) and few head loads/unloads.
 

TheBay

New Member
Feb 25, 2013
220
1
0
UK
I don't trust >2TB on any drive. I just think they are trying to squeeze too much out of the platters in the terabyte race, and physics is coming into play.
 

johkeeng

New Member
Dec 16, 2013
1
0
0
@mrkrad: Which of these 4TB SAS drives would you choose (i.e. is NRE rate more important than MTBF)?
WD at NRE 1 in 10^16, MTBF 1.4M hours
HGST at NRE 1 in 10^15, MTBF 2.0M hours
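
For context, a rough back-of-the-envelope on what those two specs imply, assuming the MTBF figures are 1.4M and 2.0M hours and using the common hours-per-year / MTBF approximation for annualized failure rate:

# Rough sketch: expected UREs per full 4TB read and approximate annualized
# failure rate (AFR ~ hours_per_year / MTBF), assuming MTBF is in hours.
HOURS_PER_YEAR = 24 * 365

def ures_per_full_read(capacity_tb, bits_per_error):
    return capacity_tb * 1e12 * 8 / bits_per_error

def approx_afr(mtbf_hours):
    return HOURS_PER_YEAR / mtbf_hours

for name, nre, mtbf_hours in [("WD", 1e16, 1.4e6), ("HGST", 1e15, 2.0e6)]:
    print(f"{name}: ~{ures_per_full_read(4, nre):.3f} UREs per full read, "
          f"~{approx_afr(mtbf_hours) * 100:.2f}% approx AFR")
# WD:   ~0.003 UREs per full read, ~0.63% approx AFR
# HGST: ~0.032 UREs per full read, ~0.44% approx AFR

Roughly speaking, the NRE rate matters most during a degraded rebuild (it is what kills the rebuild), while MTBF mostly governs how often you end up rebuilding in the first place.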