Best Buy WD 10TB w/ Flash Drive $169.99, now $159.99


jcl333

Active Member
May 28, 2011
253
74
28
So, I ordered 5 of these online for in-store pickup, since that was the limit.
If I physically go to another store that has some, is that how you get past it?

-JCL
 

svtkobra7

Active Member
Jan 2, 2017
362
87
28
So, I ordered 5 of these online for in-store pickup, since that was the limit.
If I physically go to another store that has some, is that how you get past it?

-JCL
Yes, I'd either (a) go directly to the store w/o ordering online (risk = stock depletion), or (b) use a spouse's BB account to purchase online (or similar) for physical pickup (noted risk = mitigated). Regarding the former, I mention that risk because ATL is sold out except for stores 1 hour 15 minutes away ... not sure what the case is in your locale.

I've heard that store managers have cancelled orders in the past when they've seen pickups from multiple stores tied to the same account (where the unique identifier would be your rewards #, I assume).

Make sure you pay with your AMEX to get +1 year extended warranty. I hear multiple flash drive arrays like I'm building can wear out over time. ;)

Good luck.
 

mimino

Active Member
Nov 2, 2018
189
70
28
What do you people do with all this storage, 4K Blu-ray rips? I really don't see where else I could possibly use 50 TB of space...
 

svtkobra7

Active Member
Jan 2, 2017
362
87
28
What do you people do with all this storage, 4K Blu-ray rips? I really don't see where else I could possibly use 50 TB of space...
Bear in mind that once you account for parity and free space, that 50 TB drops quite quickly. Using ZFS as an example (and 5 drives is a bit of an odd # to look at ... if I were thinking about deploying 5 drives, I'd either run 1 less as 2 mirrored pairs, or 1 more as 6 drives in raidz2):

  • raidz1 = single parity = 39 TB ZFS usable (S/E = 77%) / 31 TB practical usable (20% free space allowance)
  • raidz2 = double parity = 29 TB ZFS usable (S/E = 57%) / 23 TB practical usable (20% free space allowance)
  • Usually your storage efficiency for raidz1 (typically not advised) and raidz2 is higher, since most pools have more than 5 drives.
But more to the point, much capacity = many "Linux ISOs" (you may have to google that to get the joke). ;)
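To put rough numbers on the parity hit, here's a quick back-of-the-envelope sketch. It only counts parity drives and the 20% free-space allowance and ignores ZFS metadata/padding/partition overhead, which is why a proper calculator (the source of the 39/29 TB figures above) lands a bit lower:

```python
# Back-of-the-envelope ZFS capacity estimate for 5 x 10 TB drives.
# Only counts parity drives and the free-space allowance; ignores ZFS
# metadata/padding overhead, so it lands a bit above the 39/29 TB
# calculator figures quoted above.

DRIVE_TB = 10
DRIVES = 5
FREE_SPACE = 0.20               # keep ~20% of the pool free

def usable_tb(drives, parity):
    zfs_usable = (drives - parity) * DRIVE_TB    # parity drives' capacity is "lost"
    practical = zfs_usable * (1 - FREE_SPACE)    # after the free-space allowance
    return zfs_usable, practical

for name, parity in (("raidz1", 1), ("raidz2", 2)):
    zfs, practical = usable_tb(DRIVES, parity)
    eff = zfs / (DRIVES * DRIVE_TB)
    print(f"{name}: ~{zfs:.0f} TB ZFS usable (S/E = {eff:.0%}), ~{practical:.0f} TB practical")
# raidz1: ~40 TB ZFS usable (S/E = 80%), ~32 TB practical
# raidz2: ~30 TB ZFS usable (S/E = 60%), ~24 TB practical
```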
 
Last edited:

Craash

Active Member
Apr 7, 2017
160
27
28
I have a unique outlook. I actually have two machines, with about 50TB each, that mirror all content on a nightly basis. Each machine's array is in a RAID 0 stripe - I know, the HORROR. But let me explain. By having two full copies of my data in a fast stripe, I don't worry about a failure - drive based or machine based. If a single drive fails in one of the arrays, I map my service-providing VMs to the other array, replace the failed drive, recreate the array, and start the sync over again (10GbE network). This turns out to be faster than replacing a failed RAID6 or ZFS drive, and I don't have the degraded performance while it rebuilds - which will take some time with drives this size. I wrote a bash script that changes the mappings on the 'Nix boxes, so it's pretty seamless.

I will say that most of this is also moved offsite via sneakernet, and my most CRITICAL stuff, like the Royals World Series and our home videos, is also compressed with recovery records, encrypted, and stored in the cloud. Needless to say, this requires a LOT of space.
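Conceptually the remapping script boils down to something like this rough sketch (not the actual bash script; the mount points are made up, and it assumes the service VMs all read their data through one symlink):

```python
#!/usr/bin/env python3
"""Rough sketch of the failover idea, NOT the actual bash script.
Assumes both arrays are already mounted (paths are made up) and that the
service VMs read their data through a single symlink, so failing over is
just re-pointing that link."""
import os
import sys

PRIMARY = "/mnt/array-a"      # assumed mount point of the primary RAID 0 pool
SECONDARY = "/mnt/array-b"    # assumed mount point of the nightly mirror
SERVICE_LINK = "/srv/data"    # the symlink the VMs/services actually use

def healthy(path):
    """Crude health check: it's a mount point and we can list it."""
    try:
        return os.path.ismount(path) and bool(os.listdir(path))
    except OSError:
        return False

def repoint(target):
    """Atomically swap the service symlink over to `target`."""
    tmp = SERVICE_LINK + ".tmp"
    if os.path.lexists(tmp):
        os.remove(tmp)
    os.symlink(target, tmp)
    os.replace(tmp, SERVICE_LINK)   # rename over the old link in one step
    print(f"services now mapped to {target}")

if __name__ == "__main__":
    if healthy(PRIMARY):
        repoint(PRIMARY)
    elif healthy(SECONDARY):
        repoint(SECONDARY)
    else:
        sys.exit("neither array is healthy -- time for the offsite copies")
```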
 

Craash

Active Member
Apr 7, 2017
160
27
28
Bear in mind that once you account for parity and free space, that 50 TB drops quite quickly. Using ZFS as an example (and 5 drives is a bit of an odd # to look at ... if I were thinking about deploying 5 drives, I'd either run 1 less as 2 mirrored pairs, or 1 more as 6 drives in raidz2):

  • raidz1 = single parity = 39 TB ZFS usable (S/E = 77%) / 31 TB practical usable (20% free space allowance)
  • raidz2 = double parity = 29 TB ZFS usable (S/E = 57%) / 23 TB practical usable (20% free space allowance)
  • Usually your storage efficiency for raidz1 (typically not advised) and raidz2 is higher, since most pools have more than 5 drives.
But more to the point, much capacity = many "Linux ISOs" (you may have to google that to get the joke). ;)
Of course, with drives this size and a URE (uncorrectable read error) rate of 1 in 10^14, RAID 5 or RAIDZ1 or any other single-parity offering is not much better than no redundancy. RAID10 is the best bet, and in that scenario my setup is just as efficient, also accounts for hardware failure beyond just the drives, and adds the enhanced performance.

Is it something I would use in an enterprise setting - no.

Is it for everyone - no.
 

svtkobra7

Active Member
Jan 2, 2017
362
87
28
Of course, with drives this size and a URE (uncorrectable read error) rate of 1 in 10^14, RAID 5 or RAIDZ1 or any other single-parity offering is not much better than no redundancy.
  • For the record, I don't disagree with your strategy, but I would be a bit stressed during those ~11 hours (theoretical minimum resync time: 50 TB at 1,250 MB/s, i.e. a saturated 10GbE link).
  • "Directionally" my approach isn't too much different (and looping in your current point) in that I run my 12 10TB Easystores in RaidZ 3x4x10.0 TB, but we differ in that I do have parity on the second ZFS instance.
  • I do think "not much better" is a bit of an overstatement though - throwing some numbers at it - mean time to data loss for RAID 0 = 5.71 years and RAID 5 = 1,484 years (rough reconstruction of that model sketched after this list). Some of those numbers look off, no? ;) And o/c rebuild speed is dependent on a number of factors: system load, pool fill, etc., so I pulled a # out of my arse as this could be debated all day long.
https://www.servethehome.com/raid-calculator/raid-reliability-calculator-simple-mttdl-model/
  • In my case, I'm 100% backed up to the cloud (encrypted of course), but I don't want to even think about pulling 68.2 TiB down at 100 Mbps (damn condo HOA ISP contract) as that would take more than 2 months.
  • My point is that adhering to a true 3-2-1 backup strategy means you could lose both pools and still restore 100% from offsite, or I could at least, and you to the extent you care (who are the Royals? jk). I replicate hourly to the second instance and every 4 hours to the cloud, so only a few "Linux ISOs" would be lost. :) They aren't really Linux ISOs (as I'm sure you know).
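For anyone curious where figures like 5.71 / 1,484 years come from, that calculator is built on the textbook "simple MTTDL" model; the sketch below is my reconstruction of it. The MTTF/MTTR values are guesses on my part (not documented calculator defaults), though they land very close to the numbers above:

```python
# The textbook "simple MTTDL" model (same family as the linked STH
# calculator). MTTF and MTTR below are assumed values, not the
# calculator's documented defaults.

HOURS_PER_YEAR = 8766            # 365.25 days
MTTF = 250_000                   # assumed mean time to failure per drive, hours
MTTR = 240                       # assumed time to replace + rebuild, hours (~10 days)
N = 5                            # drives in the array

mttdl_raid0 = MTTF / N                          # any single failure = data loss
mttdl_raid5 = MTTF**2 / (N * (N - 1) * MTTR)    # loss needs a 2nd failure during the rebuild window

print(f"RAID 0: {mttdl_raid0 / HOURS_PER_YEAR:,.2f} years")   # ~5.70 years
print(f"RAID 5: {mttdl_raid5 / HOURS_PER_YEAR:,.0f} years")   # ~1,485 years
```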
Is it something I would use in an enterprise setting - no.
  • Captain Obvious in da house! :cool:
Is it for everyone - no.
  • Also agreed, and I absolutely see the rationale. While you would be crucified on a certain forum dedicated to a certain OS running ZFS, I say to each their own. I strongly contemplated using a pool with a lesser fault tolerance than raidz2 on instance 2, but in the end recalled my epic bad luck and decided I best not.
 

Craash

Active Member
Apr 7, 2017
160
27
28
I love these conversations. :) The only issue that I see with RAID6 and RAIDZ2 is, again, UREs. That has to be a huge concern for drives this large, and the only true way around it is RAID10. ZFS, although better than most other file systems, does not negate the danger at all. Keep in mind, it is NOT about the failure of drives, it's about uncorrectable read errors on the rebuild after the failure.

Normal SATA drives (like these ES10TBs) are rated URE 1 in 10^14. With 10^14-class drives, you hit a URE roughly every ~12TB of reads. In other words, that means you'd expect a RAID 5 resilver to fail if your usable array size is 6TB or larger, which is pretty small by today's standards. Although dual parity improves this, it is going the way of the dodo (along with single parity) as drive sizes increase. RAID10 does handle this much better, but at a cost of 50% of raw space and long rebuild times. I am still happier with my setup - for my needs. Plus, it brings additional hardware fault tolerance. In the case of a failed drive (or power supply, or anything), I initiate my script and am back up in under two minutes. Then I can concentrate on my failed array.
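To make the ~12TB rule of thumb concrete, here is the naive math behind it: treat the spec as a flat 10^-14 chance per bit read (a simplification, as the reply below points out) and ask how likely at least one URE is during a rebuild that has to read a given amount of data:

```python
import math

URE_RATE = 1e-14   # naive reading of the spec: probability of a URE per bit read

def p_at_least_one_ure(tb_read):
    """P(>=1 URE) while reading `tb_read` TB, assuming independent
    per-bit errors at URE_RATE (a big simplification)."""
    bits = tb_read * 1e12 * 8
    # 1 - (1 - p)**bits, computed in a numerically stable way
    return -math.expm1(bits * math.log1p(-URE_RATE))

for tb in (6, 12.5, 40):
    print(f"{tb:>5} TB read -> {p_at_least_one_ure(tb):.0%} chance of at least one URE")
# 6 TB    -> ~38%  (the usable size quoted above for an expected RAID 5 resilver failure)
# 12.5 TB -> ~63%  (one URE per 10^14 bits on average)
# 40 TB   -> ~96%  (e.g. reading four surviving 10 TB drives in full)
```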

I'd be MUCH more nervous re-creating a 50TB (well, minus 2 drives) array with RAID6 or RAIDZ2. Just a few UREs and the rebuild will likely fail. Not to mention, how many DAYS do you think a 50TB array would take to bring back to optimal? Much longer than 12-15 hours. More likely 5 days or more.

I did FreeNAS for many years. And it worked well for me. I just got tired of the slowness of rebuilds and the updates that broke things (like when they broke all usernames/passwords in the database). Not to mention the half-assed support of SMB/CIFS. I will say that my new Server 2019 box runs circles around my FreeNAS with MUCH less memory. And CyberJockey is an arse.

Disclaimer: This works fine for a homelab, and all my critical data is backed up offsite. I wouldn't put this configuration in an enterprise setting. But the speed and ease of configuration changes is wonderful.
 

svtkobra7

Active Member
Jan 2, 2017
362
87
28
I love these conversations. :) The only issue that I see with RAID6 and RAIDZ2 is, again, UREs. That has to be a huge concern for drives this large, and the only true way around it is RAID10. ZFS, although better than most other file systems, does not negate the danger at all. Keep in mind, it is NOT about the failure of drives, it's about uncorrectable read errors on the rebuild after the failure.
  • Likewise on the convo!
Normal SATA drives (like these ES10TBs) are rated URE 1 in 10^14. With 10^14-class drives, you hit a URE roughly every ~12TB of reads.
  • Allow me to be an anal arse and add some precision here.
    1. It's 12.5 TB to be precise ;) OK, joking on that ... although it is.
    2. First to the spec =>
      1. It is "<1 in 10^14", but the drive doesn't throw its hands up and say - "Whoa, we just read 12.5 trillion bytes = URE time"
      2. As a disclaimer, I don't have an IT background, but I would posit that manufacturers probably rate non-recoverable errors in a conservative fashion.
      3. "<1 in 10^14" could be darn close to "<1 in 10^15", or maybe even achieve it, but claiming that extra order of magnitude would have material implications, so I'm inclined to believe significant caution is exercised here.
      4. Ever heard of confirmation bias (the above = probably a good example). ;)
    3. Second, piggybacking on the last point (just to present a spectrum), you don't encounter a URE at "roughly" every 12.5 TB. The metric is a statistical average.
      1. Your drive could be on the wrong side of the bell curve and hit one @ 10^13, or 1.25 TB (scary, eh).
      2. Or your drive could be on the right side of the bell curve and hit one @ 10^15, or 125 TB (exceeds my raw pool capacity).
    4. Third, I believe a URE can land on any bit, 0 or 1, whether or not that sector holds data, with a conventional RAID rebuild reading both.
      1. So that single bad read in 12.5 trillion bytes has to land on a sector with data, otherwise it is of no "real consequence," right?
      2. Provided that is correct, quite different implications for an array @ 10% fill v. 100% fill.
  • I concede, of course, that your point is well founded; I just like to play the devil's advocate from time to time.
In other words, that means you'd expect a RAID 5 resilver to fail if your usable array size is 6TB or larger, which is pretty small by today's standards. Although dual parity improves this, it is going the way of the dodo (along with single parity) as drive sizes increase. RAID10 does handle this much better, but at a cost of 50% of raw space and long rebuild times. I am still happier with my setup - for my needs. Plus, it brings additional hardware fault tolerance. In the case of a failed drive (or power supply, or anything), I initiate my script and am back up in under two minutes. Then I can concentrate on my failed array.
  • Does the script still work when your awesome Easystore 32 GB Flash Drive Raid Array catches fire and takes out the server room? With great power comes great responsibility and you have to be really careful with such fast RAID arrays. Please ensure your thermal solution is up to par. OK - my bad humor is fading.
  • And just to give you some design ideas, here is another version.
    • The first looked to just be striped ...
    • ... here it looks like we have a veritable unicorn ... striped and mirrored!


I'd be MUCH more nervous re-creating a 50TB (well, minus 2 drives) array with RAID6 or RAIDZ2. Just a few UREs and the rebuild will likely fail. Not to mention, how many DAYS do you think a 50TB array would take to bring back to optimal? Much longer than 12-15 hours. More likely 5 days or more.
  • The precise answer to how long = it depends. But let's exaggerate your point ... to 120 TB / 12 HDDs ...
    • Do you have four 3-disk vdevs like myself? (rhetorical, I know you don't)
    • Or a single 12-disk vdev?
    • The spectrum is quite large there; however, your point is noted.
  • I'm curious, but too lazy to do the math (and there are too many variables): what is faster (rough numbers sketched after this list)
    • Replicating 50 TB across the wire to bring the failed array back up?
    • Or a rebuild of a 3-disk raidz vdev?
    • More importantly - who is going to be able to shuck these bad boys faster - I was shuckin with the best of 'em by #12 last time. Not a single broken retaining clip either. ;)
  • Another point, while you must replicate to get both back up, I have two options:
    • Maybe I feel like doing just the same, at the expense of a significant chunk of BOTH systems' resources.
    • Or maybe, I'm in a rebuild type mood.
  • And just to clarify, a "failed" rebuild doesn't necessarily mean you just lost your array.
    • It should mean an abort on the resilver, and yes, that means starting from scratch, but all is not lost.
  • My ultimate point => pros and cons ... pros and cons ... let me reiterate, I think your architecture is certainly creative (and I admire that) + awesome and while I considered no parity on FreeNAS-02 (hostname, lack of creativity here you see), I wouldn't have ever thought to forgo it on my primary.
    • I thought RaidZ 3x4x10.0 TB was a bit "unorthodox" ...
    • ... and you are putting bad ideas in my head for the new array. ;)
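Since I teased "what is faster" above, here is the lazy version of that math. Every throughput number is an assumption (wire-speed 10GbE for the replication, pure guesses for effective resilver rates), and real resilvers swing wildly with pool fill, fragmentation, and load:

```python
SECONDS_PER_HOUR = 3600

def hours_to_move(tb, mb_per_s):
    """Hours to move `tb` terabytes at a sustained `mb_per_s` MB/s."""
    return tb * 1e6 / mb_per_s / SECONDS_PER_HOUR

# (a) re-replicating the whole dataset across the wire at 10GbE wire speed
print(f"replicate 50 TB @ 1,250 MB/s : {hours_to_move(50, 1250):5.1f} h")   # ~11.1 h

# (b) resilvering one 10 TB drive in a raidz vdev at assumed effective rates
for rate in (150, 80, 40):   # MB/s written to the new disk -- pure guesses
    print(f"resilver 10 TB @ {rate:>3} MB/s  : {hours_to_move(10, rate):5.1f} h")
# ~18.5 h / ~34.7 h / ~69.4 h -- anywhere from under a day to several
# days once real-world overhead piles on
```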
I did FreeNAS for many years. And it worked well for me. I just got tired of the slowness of rebuilds and the updates that broke things (like when they broke all usernames/passwords in the database). Not to mention the half-assed support of SMB/CIFS. I will say that my new Server 2019 box runs circles around my FreeNAS with MUCH less memory. And CyberJockey is an arse.
  • OK I literally ROTFL on the last bit. Amen.
  • Curious what you are running. I'll show you mine if you show me yours LOL ;)
    • And how could you leave out Corral? hehe
    • I'm over it too, but part of the reason I went to 2 hosts instead of 1 was stability, which I've finally achieved.
    • I know I need to move, but have been putting it off (and I find that as I grow older I'm becoming more risk averse).
      • I'll spare you the story of how the "nondestructive" action of adding a SLOG borked a pool ... TWICE ...
Disclaimer: This works fine for a homelab, and all my critical data is backed up offsite. I wouldn't put this configuration in an enterprise setting. But the speed and ease of configuration changes is wonderful.
  • As I said, you won't get a dissenting opinion here.
Appreciate the convo - have a good one!
 

jcl333

Active Member
May 28, 2011
253
74
28
I successfully snagged 7 of these babies; I actually paid for them on-site and used ship-to-store. Also went to a state with no sales tax ;-)

My use case: I am building a Storage Spaces server on Server 2019, and I want to use a mirror-accelerated parity configuration with dual-drive failure tolerance, and you need 7 devices to do that. I will pair this with two mirrored SSDs that have high endurance and power-loss protection; I'm actually still looking at where I'm going to source those.

Not sure how much space this will ultimately yield, because I plan to have multiple volumes: the one using dual-drive parity will be archive, like photos and videos that never change, then another using a mirror with 2-drive failure tolerance, which only requires 5 drives.

Going to back this all up to an older QNAP 10-bay RAID6 that I am upgrading to 10Gbit Ethernet and a faster CPU/RAM. It currently has 9x 3TB REDs, which gives me more than enough space to back up the active data that will be on this array. And finally, the really important stuff will also be in the cloud, but I have not finished researching that part yet. Trying to keep up with you guys! Good stuff though, and really interesting.

-JCL
 

svtkobra7

Active Member
Jan 2, 2017
362
87
28
Very nice ... Very slick on the no sales tax ... ;)

I'm going to have to wait until Wednesday for all 12 of my flash drives to arrive before construction of the Flash Drive Raid Array begins :(

It is a tad anticlimactic when you get them in your hands, as it literally takes a week to run the 4 badblocks patterns, book-ended by SMART tests ... (I'd advise a proper burn-in even though they are brand new)
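For reference, scripted, that burn-in week looks roughly like this. The /dev/sdX names are placeholders, badblocks -w is destructive (only run it on freshly shucked, empty drives), and the long SMART self-tests that book-end it run asynchronously, so I check those separately with smartctl -a:

```python
#!/usr/bin/env python3
"""Rough burn-in sketch: SMART attribute snapshot, the 4-pattern badblocks
write test, then another SMART snapshot. Run as root; /dev/sdX names are
placeholders, and badblocks -w DESTROYS all data on the drive."""
import subprocess
from concurrent.futures import ThreadPoolExecutor

DEVICES = ["/dev/sdb", "/dev/sdc"]   # placeholders -- point these at your drives

def burn_in(dev):
    tag = dev.replace("/", "_")
    # SMART attributes before the test (reallocated / pending sector counts, etc.)
    with open(f"smart_before{tag}.txt", "w") as f:
        subprocess.run(["smartctl", "-A", dev], stdout=f, check=False)
    # All four badblocks write patterns (0xaa, 0x55, 0xff, 0x00);
    # -b 4096 keeps the block count in range on 10 TB drives
    subprocess.run(["badblocks", "-b", "4096", "-wsv",
                    "-o", f"badblocks{tag}.log", dev], check=True)
    # SMART attributes after -- compare against the "before" snapshot
    with open(f"smart_after{tag}.txt", "w") as f:
        subprocess.run(["smartctl", "-A", dev], stdout=f, check=False)

# Each drive is independent, so run them concurrently (one thread per drive)
with ThreadPoolExecutor(max_workers=len(DEVICES)) as pool:
    list(pool.map(burn_in, DEVICES))   # list() so any exception surfaces
```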

In the event you need a reference point, Serve The Home's own @BLinux has an excellent video on how to shuck here:
 

jcl333

Active Member
May 28, 2011
253
74
28
Very nice ... Very slick on the no sales tax ... ;)

I'm going to have to wait until Wednesday for all 12 of my flash drives to arrive before construction of the Flash Drive Raid Array begins :(

It is a tad anticlimactic when you get them in your hands, as it literally takes a week to run the 4 badblocks patterns, book-ended by SMART tests ... (I'd advise a proper burn-in even though they are brand new)

In the event you need a reference point, Serve The Home's own @BLinux has an excellent video on how to shuck here:
Nice. On the bad-blocks tests, can you do them on more than one drive simultaneously on the same machine? Would I need a USB 3.0 hub / multiple host ports, or could you even plug these into a USB 2.0 port if you are only running the bad block tests? Actually going to need a power strip that can handle that many wall warts, now that I think about it....

Do you do something separate for burn-in or are you counting these tests as the burn-in?

I would expect at least one of them to be bad, you think?

I have some time because I am still building the server these are going to go in. So even if it takes a month to do this it will be OK.

-JCL