Cheap Hitachi 2TB Drives (Data Center Recycle) $30! Free Shipping!

Discussion in 'Great Deals' started by Sleyk, May 17, 2016.

  1. britinpdx

    britinpdx Active Member

    Joined:
    Feb 8, 2013
    Messages:
    348
    Likes Received:
    151
    #421
  2. SavageWS6

    SavageWS6 Member

    Joined:
    Feb 2, 2016
    Messages:
    39
    Likes Received:
    7
    So I just got two 7K3000 2TB drives in today. One drive isn't looking too good.

    Code:
    Hitachi 2TB SMART data & testing results
    
    Both resealed in anti-static bags w/ moisture packet
    
    No physical damage
    
    Using Hard Disk Sentinel
    
    
    Drive #1
    
    Born - Dec-2011
    
    Health - 100%
    
    Power on time - 1165 days, 5 hrs (27965)
    
    Total start/stop count: 31
    
    
    Drive #2
    
    Born - May-2012
    
    Health - 8% :(
    
    Power on time - 786 days, 14hrs (18878)
    
    Total start/stop count: 23
    
    956 Reallocated Sectors Count
    
    6563 Reallocation Event Count
    
    
    Both passed the short self-test. Running extended tests right now on the Dec 2011 drive. Pretty sure I'll just issue a return or something on this one drive; I'll see how it goes overnight. Was REALLY hoping there were no sector issues.

    For comparison, the Samsung HD502UJ 500GB drive in my desktop that I use to offload downloads from my SSD has 28585 hours and a power-on count of 2329. No issues and the drive is still at 100%. Manufacture date is 01/2009.

    Ran the extended test on both drives. The bad drive #2 actually finished before the good drive #1. Ran a quick sequential test and they are performing top notch on my SAS/SATA backplane connected to the LSI 2308 on my Supermicro board.
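
    If you want to cross-check outside of Hard Disk Sentinel, the same numbers can be pulled with smartctl on a Linux box; a rough sketch, with /dev/sdb standing in for whichever drive you point it at:

    Code:
    smartctl -t long /dev/sdb      # start the extended (long) self-test; it runs on the drive itself
    smartctl -l selftest /dev/sdb  # check whether the self-test completed without error
    smartctl -A /dev/sdb | grep -E 'Reallocated_Sector_Ct|Reallocated_Event_Count|Power_On_Hours'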
     
    #422
    Last edited: Aug 16, 2016
  3. Sleyk

    Sleyk Active Member

    Joined:
    Mar 25, 2016
    Messages:
    785
    Likes Received:
    212
    Yeah, it's a funny thing with some drives, my friends.

    I got a 6% health Seagate 2TB that still transfers and runs well. I suppose since it is so low on health, one day it might just stop and give up its magnetic spinning ghost. I plan to bury it peacefully. No funeral proceedings either. It will be remembered for the good things it did in its digital life. How it faithfully transferred my porn and hentai collections to their new WD and Hitachi homes. You know, the noble things. That's how it would want to be remembered... :oops:
     
    #423
  4. HorizonXP

    HorizonXP Member

    Joined:
    May 23, 2016
    Messages:
    73
    Likes Received:
    2
    So I bought 6 3TB drives, and I'm only able to test 4 of them right now. Ran badblocks on them overnight, and they came back clean. Running a SMART long test on them now. Most of them range from 20k to 25k hours, and they look great. Dated June 2012, and it looks like the seller ran SMART short tests on them.
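
    For reference, the burn-in is roughly this per drive (the device name is just a placeholder, and the -w write pass is destructive, so only on drives with nothing on them):

    Code:
    badblocks -b 4096 -wsv /dev/sdc   # full destructive write + read-back pattern test
    smartctl -t long /dev/sdc         # then the SMART long self-test
    smartctl -l selftest /dev/sdc     # and check the result once it finishes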

    Curiously, one drive has power-on time of 12700 hours, no SMART tests run on it, and has this in the SMART error logs:
    Code:
    SMART Error Log Version: 1
    ATA Error Count: 3
        CR = Command Register [HEX]
        FR = Features Register [HEX]
        SC = Sector Count Register [HEX]
        SN = Sector Number Register [HEX]
        CL = Cylinder Low Register [HEX]
        CH = Cylinder High Register [HEX]
        DH = Device/Head Register [HEX]
        DC = Device Command Register [HEX]
        ER = Error register [HEX]
        ST = Status register [HEX]
    Powered_Up_Time is measured from power on, and printed as
    DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,
    SS=sec, and sss=millisec. It "wraps" after 49.710 days.
    
    Error 3 occurred at disk power-on lifetime: 12662 hours (527 days + 14 hours)
      When the command that caused the error occurred, the device was active or idle.
    
      After command completion occurred, registers were:
      ER ST SC SN CL CH DH
      -- -- -- -- -- -- --
      40 51 06 fa 5e 6d 06  Error: UNC at LBA = 0x066d5efa = 107831034
    
      Commands leading to the command that caused the error were:
      CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
      -- -- -- -- -- -- -- --  ----------------  --------------------
      60 08 a8 f8 5e 6d 40 00   2d+10:16:57.917  READ FPDMA QUEUED
      61 38 a0 a8 10 be 40 00   2d+10:16:57.917  WRITE FPDMA QUEUED
      61 08 98 88 10 be 40 00   2d+10:16:57.917  WRITE FPDMA QUEUED
      61 10 90 b0 bf b1 40 00   2d+10:16:57.917  WRITE FPDMA QUEUED
      61 00 88 50 1e e3 40 00   2d+10:16:57.917  WRITE FPDMA QUEUED
    
    Error 2 occurred at disk power-on lifetime: 12662 hours (527 days + 14 hours)
      When the command that caused the error occurred, the device was active or idle.
    
      After command completion occurred, registers were:
      ER ST SC SN CL CH DH
      -- -- -- -- -- -- --
      40 51 06 fa 5e 6d 06  Error: WP at LBA = 0x066d5efa = 107831034
    
      Commands leading to the command that caused the error were:
      CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
      -- -- -- -- -- -- -- --  ----------------  --------------------
      61 00 a8 c0 f2 f0 40 00   2d+10:16:55.738  WRITE FPDMA QUEUED
      61 00 a0 c0 ee f0 40 00   2d+10:16:55.737  WRITE FPDMA QUEUED
      61 00 98 c0 ea f0 40 00   2d+10:16:55.737  WRITE FPDMA QUEUED
      61 00 90 c0 e6 f0 40 00   2d+10:16:55.737  WRITE FPDMA QUEUED
      61 00 88 c0 e2 f0 40 00   2d+10:16:55.737  WRITE FPDMA QUEUED
    
    Error 1 occurred at disk power-on lifetime: 12662 hours (527 days + 14 hours)
      When the command that caused the error occurred, the device was active or idle.
    
      After command completion occurred, registers were:
      ER ST SC SN CL CH DH
      -- -- -- -- -- -- --
      40 51 a6 fa 5e 6d 06  Error: UNC at LBA = 0x066d5efa = 107831034
    
      Commands leading to the command that caused the error were:
      CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
      -- -- -- -- -- -- -- --  ----------------  --------------------
      60 00 00 a0 5e 6d 40 00   2d+10:16:51.318  READ FPDMA QUEUED
      60 00 00 a0 5d 6d 40 00   2d+10:16:51.317  READ FPDMA QUEUED
      60 00 00 a0 5c 6d 40 00   2d+10:16:51.317  READ FPDMA QUEUED
      60 00 00 a0 5b 6d 40 00   2d+10:16:51.316  READ FPDMA QUEUED
      60 00 00 a0 5a 6d 40 00   2d+10:16:51.315  READ FPDMA QUEUED
    Any idea what those errors might mean? Like I said, badblocks came back with no bad sectors, and I'm running the SMART long tests right now.
     
    #424
  5. SavageWS6

    SavageWS6 Member

    Joined:
    Feb 2, 2016
    Messages:
    39
    Likes Received:
    7
    Ya, so far the Extended Self-Test and Surface Test are reporting the drive is in great condition. It already did the whole write phase and is now 90% through the read phase with still no issues; plus the bad drive is faster than the good drive... lol. I may actually just keep the drive.

    Got any other test system/controller you can use? From what I've seen across various websites, even badblocks can report false positives (not saying your test did, but I use multiple tests). Also, the READ and WRITE FPDMA QUEUED entries look like they pertain to your controller; I see the same thing come up on the Arch Linux, Ubuntu, and Debian forums if I keep digging.
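
    A quick way to separate cable/controller trouble from actual media trouble is to watch the relevant SMART counters; something like this, with the device name just an example:

    Code:
    # CRC errors point at cable/backplane/controller; pending or reallocated sectors point at the platters
    smartctl -A /dev/sdc | grep -E 'UDMA_CRC_Error_Count|Current_Pending_Sector|Reallocated_Sector_Ct|Offline_Uncorrectable'
    smartctl -l error /dev/sdc   # the error log you pasted; see whether new entries keep showing up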
     
    #425
  6. NobleX13

    NobleX13 Member

    Joined:
    Oct 2, 2014
    Messages:
    44
    Likes Received:
    9
    #426
    Last edited: Aug 17, 2016
  7. SavageWS6

    SavageWS6 Member

    Joined:
    Feb 2, 2016
    Messages:
    39
    Likes Received:
    7
    Sometimes I wonder... my 8% "dead" drive is my top performer, and passes with flying colours on every test I throw at it.
     
    #427
    Last edited: Aug 17, 2016
  8. Flintstone

    Flintstone Member

    Joined:
    Jun 11, 2016
    Messages:
    126
    Likes Received:
    21
    Over the years, the odd drive from my pool of 30 or so 2TB Samsung drives has come under suspicion of being faulty. Almost always the problem has been cables, either loose or defective ones (I have trashed a lot of cheap fan-out cables). Just one of these 30 or so now quite old drives (purchased at a lower $/GB than even today) has had a physical, devastating crash, and I got a new one under warranty. I am going to do a badblocks run on all of them soon, so I expect some skeletons, but they have been great!
     
    #428
  9. Sleyk

    Sleyk Active Member

    Joined:
    Mar 25, 2016
    Messages:
    785
    Likes Received:
    212
    That's true. Cables are sometimes my hidden culprits. I buy SATA cables cheap though. You know, several hundred of them for a dollar...
     
    #429
  10. Sleyk

    Sleyk Active Member

    Joined:
    Mar 25, 2016
    Messages:
    785
    Likes Received:
    212
    #430
    NobleX13 likes this.
  11. NobleX13

    NobleX13 Member

    Joined:
    Oct 2, 2014
    Messages:
    44
    Likes Received:
    9
    Thanks for the heads-up. I just got a 12-bay DAS module; too bad this one has a limit of 5.
     
    #431
  12. HorizonXP

    HorizonXP Member

    Joined:
    May 23, 2016
    Messages:
    73
    Likes Received:
    2
    Just completed my badblocks and SMART long test run on the 6 2TB HDDs that I bought a couple of weeks ago. They all passed with 0 bad blocks. Very happy with these drives considering that they're working 100%, and that they were so cheap!

    And the 6 3TB drives have been working flawlessly thus far. Been setting these up in ZFS mirror vdevs. Not very space efficient, but should make it really easy to upgrade capacity in the future when I come across an even better deal.
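
    The upgrade part is the main reason I went with mirrors: you can grow one vdev at a time by swapping both disks in a pair for bigger ones. A rough sketch, with made-up pool and device names:

    Code:
    zpool set autoexpand=on tank    # let the pool grow once both halves of a mirror are larger
    zpool replace tank sdc sdi      # resilver one side of a mirror onto a bigger drive
    zpool replace tank sdd sdj      # then the other side; the extra space shows up when both are done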
     
    #432
    Last edited: Sep 1, 2016
  13. Sleyk

    Sleyk Active Member

    Joined:
    Mar 25, 2016
    Messages:
    785
    Likes Received:
    212
    Sure thing, NobleX! I think Newegg lets you place another order for the same item after 48 hours. I found that out when I came across a sweet deal on a 3 x 5.25" to 5 x 3.5" drive enclosure. Thing is, the deals usually go so fast that by the time you come back in 48 hours, the items are sold out. Hopefully these drives on sale hold out just a little longer, though.
     
    #433
  14. Sleyk

    Sleyk Active Member

    Joined:
    Mar 25, 2016
    Messages:
    785
    Likes Received:
    212
    I have been struggling to decide between mirrored vdevs and raidz1. I know the performance is there with mirrors, but darn it, I can't get over the loss of capacity. It bothers me. I'm greedy. :(
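
    For six of the 2TB drives, the math is what gets me: three mirror pairs come out to roughly 6TB usable, while one 6-disk raidz1 comes out to roughly 10TB usable before ZFS overhead. Something like this, with the pool name and devices made up:

    Code:
    # mirrors: ~6TB usable, best random I/O, easy to grow a pair at a time
    zpool create tank mirror sda sdb mirror sdc sdd mirror sde sdf
    # raidz1: ~10TB usable (one drive's worth of parity), slower resilvers
    zpool create tank raidz1 sda sdb sdc sdd sde sdf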
     
    #434
  15. dragonme

    dragonme Active Member

    Joined:
    Apr 12, 2016
    Messages:
    282
    Likes Received:
    28
    Everyone's use cases are different, so I will start with that, and everyone's redundancy needs are different too, I'll admit that. But some food for thought...

    If you are not running a corporate database, does your system really need to keep running through a storage fault or during the rebuild?

    Do you need up-to-the-minute backups, or does every couple of days, as major changes are made, work?

    I use ZFS and have been since about 2008 on the Mac. Here is what I do. Your mileage may require a different method, but I find that it optimizes:
    1. current storage kept to no more than needed, with 100% of the primary pool usable (no parity overhead)
    2. expansion at reasonable cost
    3. backup that meets my tolerance for loss

    What I do is add drives in pairs into a large stripe with checksums but NO PARITY, i.e. no raidz and no mirrors. That still lets me check the data for bitrot, since the filesystem catches checksum errors on every read anyway, and a 2-drive stripe more than maxes out gigabit Ethernet.
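
    A minimal sketch of that kind of pool, with device names made up: a plain stripe grown two drives at a time, plus a periodic scrub so every block's checksum gets verified (with no redundancy the scrub can detect bitrot but not repair it, which is where the backup pool comes in):

    Code:
    zpool create primary sda sdb   # plain stripe, no raidz and no mirrors, 100% of the space usable
    zpool add primary sdc sdd      # grow the stripe two drives at a time when needed
    zpool scrub primary            # re-read everything and verify checksums; errors show in 'zpool status'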

    My backup is an 8-drive raidz. That gives me parity and, in my opinion, enough survivability to replace anything that gets corrupted on the primary pool and to heal the backup if needed. Worst case, a drive fails in the primary pool: destroy the pool, add a new drive, copy from backup. I never really intend to rebuild a raidz, and I DON'T need to keep it up live while fixing it. The only reason to do it this way is that it's faster and less thrashing to just rebuild from scratch, with the added benefit of cleaning house if the pool is fragmented, since ZFS can't fix fragmentation in place.

    So I add 2 drives at a time to enlarge my primary pool when needed. That lets me buy the drives cheaper and only when needed (cheaper because drive prices drop precipitously over time). If the pool started with 2TB drives, I keep adding 2TB drives until I max out at 6.

    Once I fill 6 drives, instead of adding 2 more I get 2 more drives of the old size and 2 of the largest new capacity, which by then is usually enough to hold everything on all 6 of the old-size drives and a lot more.

    I stripe the new big ones and copy the files over from the 6 old ones, destroy the old pool of 6, raidz the 8 old-size drives, and add that raidz to the existing backup pool. Now it's a striped raidz. Rinse, repeat.

    I'm doing this as we speak. My old pool of 2TB drives is beginning to max out; I have about 11TB in storage. Bought 2 new 8TB drives, striped them, send/received the pool over, destroyed the old pool, rebuilt it as an 8-drive 8x2TB raidz, and added it to my existing backup pool. Now the primary has only 2 drives spinning 24/7, and I have a 16-drive, 2 x (8x2TB) raidz backup array that is only powered up when making weekly backups of the primary pool. Any data moved to the primary pool is also kept on its source until the backups are made.
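
    In command form that migration is roughly the following, with pool, device, and snapshot names made up for illustration:

    Code:
    zpool create primary2 sdx sdy                        # stripe the two new 8TB drives
    zfs snapshot -r primary@move
    zfs send -R primary@move | zfs receive -F primary2   # copy the whole pool onto the new stripe
    zpool destroy primary                                # the old 2TB drives are now free
    zpool add backup raidz1 sda sdb sdc sdd sde sdf sdg sdh   # 8x2TB raidz striped into the backup pool (may need -f if vdev widths differ)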

    The cost of the drives is amortized over 8 years.

    Data is always in 2 places.

    Once drives in the backup pool reach a certain age or get replaced with a larger size, they come offline to become cold-storage pools.

    I can't imagine how long it's going to take to max out from 2x8TB to 6x8TB, but I am guessing at least another 6-8 years...
     
    #435
    Sleyk, nthu9280 and wsuff like this.
  16. Sleyk

    Sleyk Active Member

    Joined:
    Mar 25, 2016
    Messages:
    785
    Likes Received:
    212
    That's awesome, Dragonme! Yeah, I feel like raidz1 can meet my needs just as well as mirrors. Transfer speeds are pretty decent on raidz1. The way ZFS works, writes land in RAM first anyway and then get flushed out to the drives, right? Seems sufficient for me for now, I guess.

    I really do wish I could get the mirrored performance without sacrificing the capacity, but as you said, it depends on your use case. I do have a 10Gb network setup at home, but even then, I can certainly wait. I use SSDs to transfer from my media/download PC to my server, and I usually max out my gigabit connection. When I switch to 10 gigabit, I see 450MB/s transfers using my Samsung Evo 120GB download/transfer drive. I actually went as far as building an exact duplicate of my server with an equal number of drives and pools between the two, and I was thinking of setting up a wake-on-LAN/BIOS-event type thing on the 2nd server, plus a cron job on the first server to rsync the data over (or send/receive, like how you do it like a boss, mate! :cool:) and back up that way.
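
    Something like this is what I had in mind for the cron side: wake the second box, push the data over, then let it go back to sleep. The hostnames, MAC address, and dataset names are all placeholders, and wakeonlan is just one of several tools that can send the magic packet:

    Code:
    #!/bin/sh
    # weekly push from server 1 to server 2, run from cron, e.g.  0 3 * * 0  /root/push-backup.sh
    wakeonlan aa:bb:cc:dd:ee:ff        # wake-on-LAN has to be enabled in the second server's BIOS
    sleep 120                          # give it time to boot
    rsync -aH --delete /tank/media/ backupbox:/tank/media/
    # or the send/receive way:
    # zfs snapshot -r tank@weekly && zfs send -R tank@weekly | ssh backupbox zfs receive -F backup/tank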
     
    #436
  17. dragonme

    dragonme Active Member

    Joined:
    Apr 12, 2016
    Messages:
    282
    Likes Received:
    28
    raidz1 is not a speed demon; it will be limited to the speed of the slowest drive in the group.

    That is why I just run my online pool as a big stripe (reads and writes get faster as more vdevs are added) and only raidz my backup pool.

    Also, the only way to increase the size of a raidz is to add another raidz vdev in stripe, or destroy and rebuild, which is a pain in the ass. I can add any number of drives to a stripe at any time.

    And you have been around, but for others here: raidz is NO substitute for a backup. You must still have that.
     
    #437
  18. ttabbal

    ttabbal Active Member

    Joined:
    Mar 10, 2016
    Messages:
    723
    Likes Received:
    193
    So long as you keep good backups and don't mind a little down time while you copy things around and rebuild, that's not a bad way to go. As the online pool has zero redundancy, one drive going takes the whole thing out. The way you're doing it with ZFS send/recv, the amount of data you could possibly lose is limited to the differences from the last snapshot/send. That might not be much, and it might not matter anyway so long as the data exists elsewhere as well. More than one way to skin the cat. :)
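
    And incremental sends keep that window small without re-copying everything; roughly, with snapshot and pool names just as examples:

    Code:
    zfs snapshot -r tank@2016-09-08
    # only blocks changed since the previous snapshot go over the wire
    zfs send -R -i tank@2016-09-01 tank@2016-09-08 | zfs receive -F backup/tank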
     
    #438
  19. Klee

    Klee Well-Known Member

    Joined:
    Jun 2, 2016
    Messages:
    1,239
    Likes Received:
    374
    Thanks Sleyk, I just ordered 5.
     
    #439
  20. UluLaz

    UluLaz New Member

    Joined:
    Sep 10, 2016
    Messages:
    9
    Likes Received:
    0
Similar Threads: Cheap Hitachi
Forum Title Date
Great Deals Super Cheap Hitachi UltraStar C10k600 450GB 2.5" TCG Encryption Hard Drives Apr 16, 2016
Great Deals UK Cheap Fusion-io £150 3.2TB and £400 6.4TB PCI-e SSDs Aug 4, 2019
Great Deals cheap Supermicro 836 chassis $135+S&H Jul 1, 2019
Great Deals HGST 6TB SAS Hard Drive $90 or cheaper ->$50;) - Sold Out - Jun 21, 2019
Great Deals Plenty cheap Intel SSD boot drives May 22, 2019
