BestBuy - WD - Easystore 14TB External USB 3.0 Hard Drive - $190


josh

Active Member
Oct 21, 2013
615
190
43
Anyone shucked these yet? If they're anything like the new 12TBs, the drives are lower quality
 

EasyRhino

Well-Known Member
Aug 6, 2019
499
370
63
you mean like air instead of helium?

personally, if WD is going to limit a drive to '5400 class' performance, I'd like them to actually limit the RPM to 5400 to save on heat and electricity.

anyway, the reddit datahoarder community has lost its mind, so we'll probably start to see shucking feedback soon.
 

madbrain

Active Member
Jan 5, 2019
212
44
28
I just ordered 6 of those from Best Buy, with the intent of replacing 6 WD100EMAZ (10TB) drives. Will pick them up later today.
If these really spin at 7200rpm, then I may not be able to tolerate the noise. Sigh. I thought they were actually 5400.
Does anyone know of comparable high-capacity shuckable drives that are actually 5400 (or 5900) rpm?
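One way to settle the rpm question after shucking (or even through some USB enclosures) is to ask the drive itself — smartctl reports the nominal spindle speed. A sketch; `/dev/sdX` is a placeholder device name:

```shell
# Print the drive's reported nominal spindle speed; /dev/sdX is a placeholder.
# Many USB-SATA bridges need "-d sat" added for SMART passthrough.
smartctl -i /dev/sdX | grep -i 'rotation rate'
```

Note that some of these white-label drives are suspected of reporting "5400 rpm class" while behaving acoustically like 7200, so the field is a data point rather than proof.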
 

thedman07

New Member
Sep 14, 2020
24
23
3
FYI, limited to 1 per customer.
I got 8 across 3 orders this morning. I just saw that they've started limiting them to 1 per order. The most recent shucker said it was a WD140EMFZ, which is supposed to be 5400rpm with a 512MB cache.
 

madbrain

Active Member
Jan 5, 2019
212
44
28
If you use PayPal checkout rather than a Best Buy login, the limit doesn't seem to be enforced.
I think if you just use another email, it won't be either. I just ordered 3 more. I'll have to drive a few hours to pick up the extras though, as the local store is out.
 

madbrain

Active Member
Jan 5, 2019
212
44
28
Well, they did cancel one of my 3 orders. I picked up a total of 8 from 3 different stores.

I think the max might now be one per store. Which is moot since there is no longer one to be had within 250 miles, and BB won't allow any more orders to be placed even for shipping.

I just started running h2testw on all 8 drives, still in their USB enclosures. Looks like this will take over 48 hours to do one full write pass and verify pass. I will only shuck them after they pass verification. Then I'll know what kind of drives are inside ... Surprise. They don't seem exceedingly loud so far, at least not during sequential tests.
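The 48-hour figure is about what the arithmetic predicts, assuming an average of roughly 160MB/s over the whole surface (outer tracks run faster than inner ones, hence the steady slowdown):

```shell
# Rough sanity check on the h2testw duration: 14TB at an assumed ~160MB/s
# average, times two passes (one write, then one verify).
awk 'BEGIN {
  bytes = 14e12; rate = 160e6                  # 14TB, ~160MB/s assumed
  per_pass = bytes / rate / 3600               # hours for one full pass
  printf "%.0f hours per pass, ~%.0f hours total\n", per_pass, 2 * per_pass
}'
```

That lands just under 25 hours per pass, so "over 48 hours" for write plus verify is in the right ballpark.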

After that, I'll have to figure out how to migrate the data from the old array which is 6 x 10TB ZFS raidz2, onto 8 x 14TB ZFS raidz2.
The thing is, I don't have 14 SATA hotswap bays to plug them all in at once. I do have enough SATA ports to connect all the drives.
I can fit 9 drives the way the case is currently setup, 10 if I move one hotswap bay from another case.
Perhaps I can fit 4 drives outside the case on the floor temporarily ... But my cats might interfere with the idea.
Or I can remove 2 parity drives from the existing array, and create the new array with 2 parity drives missing. Then I can get away with transferring all the data with just 10 drives hooked up. Then one big resilver when adding the "missing" parity disks. Fun weekend project ahead to be sure.
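The "2 parity drives missing" trick is usually done with sparse files standing in for the absent disks. A sketch, assuming a new pool named `newtank` and placeholder device names:

```shell
# Sketch: create an 8-wide raidz2 with 2 members "missing", using sparse
# files as stand-ins. Pool name and /dev/sd* paths are placeholders.
truncate -s 14T /tmp/fake0 /tmp/fake1
zpool create newtank raidz2 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg \
    /tmp/fake0 /tmp/fake1
zpool offline newtank /tmp/fake0      # degrade the vdev before any data lands
zpool offline newtank /tmp/fake1
rm /tmp/fake0 /tmp/fake1
# ...migrate the data (e.g. zfs send | zfs recv), destroy the old pool,
# then resilver the real disks into the degraded slots:
# zpool replace newtank /tmp/fake0 /dev/sdh
# zpool replace newtank /tmp/fake1 /dev/sdi
```

The obvious caveat: the new pool has zero redundancy until that final resilver finishes, so the old array needs to stay intact as the safety net for the whole transfer.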
 

madbrain

Active Member
Jan 5, 2019
212
44
28
I thought I had enough ports on one machine, but somehow it was a problem. My home theater PC has a Prime X470 Pro motherboard. I hooked up 6 drives to the rear ports, and the last 2 to a USB hub connected to the internal motherboard USB port, but those 2 kept disconnecting. I updated drivers and the BIOS, but it was no help. Rather than open the machine to fix it, I hooked up the last 2 drives to a desktop in my home office, and they have been chugging along. Currently h2testw is at 9hr46min, with an estimated 20 hours left to go at the current 124MB/s. The write speed started around 200MB/s but has been dropping steadily over time. Looks like this will take well over 48 hours for a full write and read pass.
 

madbrain

Active Member
Jan 5, 2019
212
44
28
Looks like the 6 drives on dedicated ports on the HTPC are at 187 MB/s and only showing 9 hours left to go after about 10 hours. So the write pass may complete in less than 24 hours after all.

The 2 that dropped to 125MB/s were on a USB hub. There was a backup going on to a 3rd drive on that same hub, but when the backup ended, the throughput on the Easystores did not pick back up. I interrupted h2testw on those drives and moved one to a dedicated port. I'm now doing the verification pass for the data already written, which is predicted to take only 6 hours at the current 203 and 209MB/s respectively. Then I'll resume the h2testw write/read pass in another subdirectory to verify the rest of the surface. Guess I may still have all 8 drives verified by Sunday.
 

josh

Active Member
Oct 21, 2013
615
190
43
After that, I'll have to figure out how to migrate the data from the old array which is 6 x 10TB ZFS raidz2, onto 8 x 14TB ZFS raidz2.
An 8-drive Z2 will lose you some overhead, I believe. I think 4, 6, or 10 are the accepted numbers.
 

madbrain

Active Member
Jan 5, 2019
212
44
28
An 8-drive Z2 will lose you some overhead, I believe. I think 4, 6, or 10 are the accepted numbers.
What kind of overhead do you mean? Are you saying an 8-drive RAIDZ2 will be slower than a 6-drive RAIDZ2?
 

josh

Active Member
Oct 21, 2013
615
190
43
An 8-drive Z2 will lose you some overhead, I believe. I think 4, 6, or 10 are the accepted numbers.
Something about the stripe width and parity, although I'm not sure if it's still relevant these days.
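The stripe-width effect can be put in rough numbers. Assuming ashift=12 (4K sectors) and the default 128K recordsize, each record is 32 data sectors, raidz2 adds 2 parity sectors per stripe, and allocations are padded to a multiple of parity+1 = 3 sectors:

```shell
# Approximate raidz2 space efficiency per 128K record at ashift=12.
for n in 6 8 10; do
  awk -v n="$n" 'BEGIN {
    data    = 32                                # 128K record / 4K sectors
    stripes = int((data + n - 3) / (n - 2))     # ceil(data / data disks)
    total   = data + 2 * stripes                # plus 2 parity per stripe
    total   = int((total + 2) / 3) * 3          # pad to multiple of 3
    printf "%2d-wide raidz2: %.1f%% usable (ideal %.1f%%)\n",
           n, 100 * data / total, 100 * (n - 2) / n
  }'
done
```

Under those assumptions, 6-wide hits its ideal 66.7% exactly, while 8-wide gives up about 4 points (71.1% vs 75%) to partial stripes and padding, which is the kind of loss the "accepted numbers" folklore is about.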
 

TXAG26

Active Member
Aug 2, 2016
397
120
43
With compression enabled (LZ4), it's not really an issue these days. There is no speed penalty from compression; on the contrary, it actually allows data to be written to and read from the spinning drives FASTER than in its uncompressed state with ZFS.
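Enabling it is a one-liner; `tank` is a placeholder pool name, and `compressratio` shows what you're actually getting on your data:

```shell
# lz4 is cheap enough that it's commonly enabled pool-wide;
# "tank" is a placeholder pool/dataset name.
zfs set compression=lz4 tank
zfs get compression,compressratio tank
```

Child datasets inherit the setting, and it only affects newly written blocks, so it's safe to flip on an existing pool.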
 

Aestr

Well-Known Member
Oct 22, 2014
967
386
63
Seattle
It's not about compression or parity specifically, but about how certain drive counts let ZFS use disk space most efficiently. In the end it usually doesn't seem worth worrying about, but apparently it is measurable.
 

TXAG26

Active Member
Aug 2, 2016
397
120
43
Yes, splitting hairs, certain disk counts behave better and use space a little more efficiently. But with compression enabled, the optimal block-size allocation for the files added to the array gets thrown out the window anyway, due to the reduced file sizes. There are online calculators that will parse the nuances of disk count versus usable space, but most configurations land within 5-10% of each other. More importantly, if you have an 8-bay hot-swap setup or some other hardware limitation, that should be given more consideration, along with the total space you need, IMHO.