My Best Buy online order was delayed. I wonder if they're gonna cancel it.
> So, I ordered 5 of this online for in-store pickup, since that was the limit.

Yes, I'd either (a) go directly to the store w/o ordering online (risk = stock depletion), or (b) use a spouse's BB account to purchase online (or similar) for physical pickup (which mitigates that risk). Regarding the former, I cite that risk because ATL is sold out except for stores 1 hour 15 minutes away ... not sure what the case is in your locale.
If I physically go to another store that has some, is that how you get past it?
-JCL
> I just got notice that mine shipped.

Nice!
> Nice!

I only got 4. But now wish I got 5.
How many? I had trouble placing orders @ qty = 5 (despite that being the purported limit).
> I only got 4. But now wish I got 5.

4 worked for me / 5 did not ... buy another one!
> What do you people do with all this storage, 4k blue ray rips? I really don't see where else I could possibly use 50T of space...

Bear in mind that once you account for parity and free space, that 50 TB drops quite quickly. Using ZFS as an example (and 5 drives is a bit of an odd # to look at ... if I were thinking about deploying 5 drives, I'd either run 1 less as 2 mirrored pairs, or 1 more as 6 drives in raidz2):
Of course, with drives this size and a URE (uncorrectable read error) rate of 1 in 10^14 bits, RAID 5, RAIDZ1, or any other single-parity offering is not much better than no redundancy. RAID10 is the best bet, and in that scenario my setup is as efficient and also takes into account hardware failure beyond just the drives, plus the enhanced performance.
But more to the point, much capacity = many "Linux ISOs" (you may have to google that to get the joke).
- raidz1 = single parity = 39 TB ZFS usable (S/E = 77%) / 31 TB practical usable (20% free space allowance)
- raidz2 = double parity = 29 TB ZFS usable (S/E = 57%) / 23 TB practical usable (20% free space allowance)
- Usually your storage efficiency for raidz1 (typically not advised) and raidz2 is higher than this, because most pools have more than 5 drives.
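If you want to play with the arithmetic behind those numbers, here's a rough sketch in Python. The 3% ZFS metadata overhead and 20% free-space allowance are my own fudge factors, not exact ZFS figures:

```python
def raidz_usable_tb(drives, drive_tb, parity, free_frac=0.20, zfs_overhead=0.03):
    """Rough usable capacity (TB) for a raidz vdev.

    parity: 1 for raidz1, 2 for raidz2. free_frac is the practical
    free-space allowance; zfs_overhead approximates metadata/padding
    losses. Both fudge factors are assumptions, not exact ZFS figures.
    Returns (ZFS usable, practical usable after free-space allowance).
    """
    raw = (drives - parity) * drive_tb          # raw capacity minus parity drives
    zfs_usable = raw * (1 - zfs_overhead)       # minus metadata/padding overhead
    return zfs_usable, zfs_usable * (1 - free_frac)

# 5 x 10 TB drives, matching the list above
z1, z1_practical = raidz_usable_tb(5, 10, parity=1)   # ~39 TB / ~31 TB
z2, z2_practical = raidz_usable_tb(5, 10, parity=2)   # ~29 TB / ~23 TB
```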
Is it something I would use in an enterprise setting - no.
Is it for everyone - no.
I love these conversations. The only issue that I see with RAID6 and RAIDZ2 is, again, UREs. That has to be a huge concern for drives this large, and the only true way around it is RAID10. ZFS, although better than most other file systems, does not negate the danger at all. Keep in mind, it is NOT about the failure of drives, it's about uncorrectable read errors on the rebuild after the failure.
Normal SATA drives (like these ES 10TBs) are rated at 1 URE per 10^14 bits. At that rate you statistically hit a URE roughly every ~12.5 TB of reads (10^14 bits / 8 ≈ 1.25 × 10^13 bytes).
In other words, that means you expect a RAID 5 resilver to fail if your usable array size is 6 TB or larger, pretty small by today's standards. Although dual parity improves this, it is going the way of the dodo (and single parity) as drive sizes increase. RAID10 does handle this much better, but at a cost of 50% of raw space, and long rebuild times. I am still happier with my setup - for my needs. Plus, it brings additional hardware fault tolerance. In case of a failed drive (or power supply, or anything), I initiate my script, and am back up in under two minutes. Then I can concentrate on my failed array.
I'd be MUCH more nervous re-creating a 50 TB (well, minus 2 drives) array with RAID6 or RAIDZ2. Just a few UREs and the rebuild will likely fail. Not to mention, how many DAYS do you think a 50 TB array would take to bring back to optimal? Much longer than 12-15 hours. More likely 5 days or more.
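The back-of-the-envelope math behind that worry can be sketched like this, treating UREs as independent per-bit events at the rated 10^14 rate (a simplification real drives don't strictly follow, which is why rebuilds often succeed anyway):

```python
def rebuild_success_prob(read_tb, ure_rate=1e-14):
    """P(no URE) while reading read_tb terabytes during a rebuild.

    Treats each bit as an independent trial at the drive's rated URE
    rate -- a simplification; real-world error behavior is clumpier
    and usually better than the spec-sheet number.
    """
    bits = read_tb * 1e12 * 8       # terabytes -> bits
    return (1 - ure_rate) ** bits

# Single-parity rebuild of a 5 x 10 TB raidz1 reads ~40 TB:
p = rebuild_success_prob(40)        # ~0.04: only a ~4% chance of zero UREs
```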
I did FreeNAS for many years. And it worked well for me. I just got tired of the slowness of rebuilds and the updates that broke things (like when they broke all usernames/passwords in the database). Not to mention the half-ass support of SMB/CIFS. I will say that my new Server 2019 runs circles around my FreeNAS with MUCH less memory. And CyberJockey is an arse.
Disclaimer: This works fine for a homelab, and all my critical data is backed up offsite. I wouldn't put this configuration in an enterprise setting. But the speed and ease of configuration changes is wonderful.
> Very nice ... Very slick on the no sales tax ...

Nice. On the badblocks tests, can you do them on more than one drive simultaneously on the same machine? Would I need a USB 3.0 hub / multiple host ports, or could you even plug these into a USB 2.0 port if you are only running the badblocks tests? Actually, going to need a power strip that can handle that many wall warts now that I think about it....
I'm going to have to wait until Wednesday for all 12 of my flash drives to arrive before construction of the Flash Drive Raid Array begins.
It is a tad anticlimactic when you get them in your hands, as it literally takes a week to run the 4 badblocks patterns, book-ended by SMART tests ... (I'd advise a proper burn-in even though they are brand new)
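For reference, the burn-in I'm describing looks roughly like this (/dev/sdX is a placeholder for your drive; the badblocks write test is DESTRUCTIVE, so triple-check the device with lsblk first):

```shell
# Destructive burn-in for a new drive -- /dev/sdX is an example device.
smartctl -t short /dev/sdX        # quick SMART self-test before starting
badblocks -b 4096 -wsv /dev/sdX   # write/read of 4 patterns (0xaa, 0x55, 0xff, 0x00)
smartctl -t long /dev/sdX         # full-surface SMART test afterwards
smartctl -a /dev/sdX              # check reallocated / pending sector counts
```

You can run a badblocks instance per drive in parallel; each one is I/O-bound on its own disk, so a week-long pass on one drive is still a week for all of them if they run concurrently.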
In the event you need a reference point, Serve The Home's own @BLinux has an excellent video on how to shuck here: