DEAD: HGST SSD1600MM - HUSMM1640ASS201 - 400GB US $47.95 OBO

awedio

Active Member
Feb 24, 2012
463
81
28
5 mins later...success!!
Code:
[root@localhost ~]# lsblk
NAME            MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda               8:0    0 111.8G  0 disk
├─sda1            8:1    0   200M  0 part /boot/efi
├─sda2            8:2    0     1G  0 part /boot
└─sda3            8:3    0 110.6G  0 part
  ├─fedora-root 253:0    0 106.6G  0 lvm  /
  └─fedora-swap 253:1    0     4G  0 lvm  [SWAP]
sdb               8:16   1  29.1G  0 disk
├─sdb1            8:17   1     3G  0 part
└─sdb2            8:18   1   9.8M  0 part
sdc               8:32   0 372.6G  0 disk
sdd               8:48   0 372.6G  0 disk
sde               8:64   0 372.6G  0 disk
sdf               8:80   0 372.6G  0 disk
sdg               8:96   0 372.6G  0 disk
sdh               8:112  0 372.6G  0 disk
sdi               8:128  0 372.6G  0 disk
sdj               8:144  0 372.6G  0 disk
 

itronin

Active Member
Nov 24, 2018
371
223
43
Denver, Colorado
And they're back... same seller.

But at $59.00 each, I'm not so sure this is a great deal... perhaps simply a good one? Pricing is roughly 60% of other sellers'.

66 drives available according to my curiosity.

I did receive a 10% eBay Bucks offer today, so that makes the effective price $54.00, assuming I need to purchase something during the next redemption period.

I'll update the original post.
 
  • Like
Reactions: awedio

josh

Active Member
Oct 21, 2013
305
86
28
And they're back... same seller.

But at $59.00 each, I'm not so sure this is a great deal... perhaps simply a good one? Pricing is roughly 60% of other sellers'.

66 drives available according to my curiosity.

I did receive a 10% eBay Bucks offer today, so that makes the effective price $54.00, assuming I need to purchase something during the next redemption period.

I'll update the original post.
He had 300+ in inventory from the start. He just pulled the leftovers and started a new listing at a higher price. I was going to buy more as spares but I decided not to reward his greed.
 

awedio

Active Member
Feb 24, 2012
463
81
28
Update:
Using a SM SYS-1028U-TNRT+, the drives show up. This is a SAS3 backplane w expander
 

SPCRich

Active Member
Mar 16, 2017
154
47
28
39
man these things are awesome.. Bought 4, they all showed zero usage...is it possible (likely) the seller wiped SMART data?
 

itronin

Active Member
Nov 24, 2018
371
223
43
Denver, Colorado
It's possible. I think it's more likely new old stock or field service spares that have been auctioned off. In my 21-drive order, the loose drive was manufactured March 2017; the other 20 drives came in a manufacturer-sealed box and included silica gel packs in each drive's sealed anti-static bag. To me that says new, new old stock, or manufacturer refurb. I could find no physical signs of wear or installation on the drives: no scratches, no thread wear, etc.

I too think the drives are awesome. Whether $53, $55, or even $60, I'm very pleased with this deal. I'm hopeful these drives are like Timex watches.
 

Ryan B

New Member
Jul 9, 2016
4
4
3
33
Update:
Using a SM SYS-1028U-TNRT+, the drives show up. This is a SAS3 backplane w expander
Are you able to share any disk benchmarks using 12 gbps SAS? I would love to see CrystalDiskMark results for a single drive if possible.
 

SPCRich

Active Member
Mar 16, 2017
154
47
28
39
Are you able to share any disk benchmarks using 12 gbps SAS? I would love to see CrystalDiskMark results for a single drive if possible.
I ran a quick and dirty test with 4 in RAID 10 on my new Quanta D51B-1U (from another hot deal here). It has a 12Gbps backplane hooked up to a 12G LSI card. Running
Code:
dd if=/dev/zero of=/test.data bs=1M count=80000
to generate a file larger than memory, I got ~1.4GB/s. Reading it back was a little worse,
Code:
dd if=/test.data of=/dev/null bs=1M
yielded ~680MB/s.

Not super scientific, and the machine WAS under very light usage, but that might give you a data point. Again: 4 drives in RAID10, WRITE THROUGH / NO READ AHEAD caching.
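For anyone repeating this, plain dd numbers can be inflated by the page cache. A slightly more honest version (paths here are just examples, and the cache drop needs root) would be something like:

```shell
# Write test: conv=fdatasync makes dd flush everything to disk before
# reporting a rate, so the page cache doesn't inflate the number.
dd if=/dev/zero of=/test.data bs=1M count=80000 conv=fdatasync

# Read test: drop the page cache first (needs root) so the reads
# actually hit the drives instead of RAM.
sync && echo 3 > /proc/sys/vm/drop_caches
dd if=/test.data of=/dev/null bs=1M
rm /test.data
```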
 

josh

Active Member
Oct 21, 2013
305
86
28
I ran a quick and dirty test with 4 in RAID 10 on my new Quanta D51B-1U (from another hot deal here). It has a 12Gbps backplane hooked up to a 12G LSI card. Running
Code:
dd if=/dev/zero of=/test.data bs=1M count=80000
to generate a file larger than memory, I got ~1.4GB/s. Reading it back was a little worse,
Code:
dd if=/test.data of=/dev/null bs=1M
yielded ~680MB/s.

Not super scientific, and the machine WAS under very light usage, but that might give you a data point. Again: 4 drives in RAID10, WRITE THROUGH / NO READ AHEAD caching.
Any reason for RAID10 on SSDs? Isn't RAID5 the preferred mode?
 

SPCRich

Active Member
Mar 16, 2017
154
47
28
39
Any reason for RAID10 on SSDs? Isn't RAID5 the preferred mode?
Having worked for a storage company for a few years: it depends. If you care about capacity, RAID 5 is better. With SSD URE rates (1 in 10^17 bits, or something like that), your chance of an error during rebuild is lower. However, you still have to pound all the remaining SSDs to rebuild the missing drive, and both at my previous company (cloud storage) and my current company (network/cloud security), I have seen more than my fair share of SSD-based RAID 5 arrays suffer dual or even triple drive failures. Now, I'll admit the current company was doing 24-drive RAID 5 with no hot spares, so that's even worse, but I wouldn't risk it. RAID 10 is faster (again, negligible with these SSDs) but only rebuilds from one drive, not all of them. And it rebuilds faster due to no parity calculations.

*Forgive grammar and spelling; writing on a cell phone
 
  • Like
Reactions: Samir

josh

Active Member
Oct 21, 2013
305
86
28
Having worked for a storage company for a few years: it depends. If you care about capacity, RAID 5 is better. With SSD URE rates (1 in 10^17 bits, or something like that), your chance of an error during rebuild is lower. However, you still have to pound all the remaining SSDs to rebuild the missing drive, and both at my previous company (cloud storage) and my current company (network/cloud security), I have seen more than my fair share of SSD-based RAID 5 arrays suffer dual or even triple drive failures. Now, I'll admit the current company was doing 24-drive RAID 5 with no hot spares, so that's even worse, but I wouldn't risk it. RAID 10 is faster (again, negligible with these SSDs) but only rebuilds from one drive, not all of them. And it rebuilds faster due to no parity calculations.

*Forgive grammar and spelling; writing on a cell phone
But were they using these HGST models, which supposedly have much higher endurance ratings? I'd love to hear about how those companies recovered from 24-drive RAID5 failures. RAID10 seems like a bigger risk if you're storing data across multiple RAID10 sets, no? I guess your drive count is too small for RAID6.
 
  • Like
Reactions: Samir

SPCRich

Active Member
Mar 16, 2017
154
47
28
39
But were they using these HGST models, which supposedly have much higher endurance ratings? I'd love to hear about how those companies recovered from 24-drive RAID5 failures. RAID10 seems like a bigger risk if you're storing data across multiple RAID10 sets, no? I guess your drive count is too small for RAID6.
"Recovery" consisted of replacing failed SSDs and then a multi-day MySQL replication task to repopulate the database server... sometimes with another drive dying during the copy. They were not HGST drives; we were/are using Samsung and/or Intel drives. 24-drive RAID 5 is a terrible idea; RAID 6 would've been preferred, but management wanted "as big as we could get it". In some cases they had built 24-drive RAID 0 arrays for MySQL servers (before I got there). My first 6 months I had a database server blow up almost weekly from 1-3 drives dying in rapid succession. Also, VMware recommends RAID 10 for datastores, and as I mentioned, you don't hit a rebuild penalty on your other disks. RAID 10 is at least as bad as RAID 5 in that if you lose both drives of a mirror, you're toast, but if you lose one from each mirror, it's almost like RAID 6 with the ability to lose two. I'll take those odds; I have some spares lying around, as long as I catch the first failure soon enough.
 

SPCRich

Active Member
Mar 16, 2017
154
47
28
39
Anyone else run into this issue? I put the controller into JBOD mode so I could access these, formatted them, then changed the controller back to JBOD=off using storcli, but when I put the drives in another server with the same controller, I get this:

Code:
------------------------------------------------------------------------------
EID:Slt DID State DG       Size Intf Med SED PI SeSz Model            Sp Type
------------------------------------------------------------------------------
252:0    15 Onln   0 372.093 GB SAS  SSD Y   N  512B HUSMM1640ASS20E  U  -
252:1    14 Onln   0 372.093 GB SAS  SSD Y   N  512B HUSMM1640ASS20E  U  -
252:2    13 Onln   0 372.093 GB SAS  SSD Y   N  512B HUSMM1640ASS20E  U  -
252:3    12 Onln   0 372.093 GB SAS  SSD Y   N  512B HUSMM1640ASS20E  U  -
252:4    16 Onln   - 372.611 GB SAS  SSD Y   N  512B HUSMM1640ASS20E  U  JBOD
252:5    17 Onln   - 372.611 GB SAS  SSD Y   N  512B HUSMM1640ASS20E  U  JBOD
252:6    18 Onln   - 372.611 GB SAS  SSD Y   N  512B HUSMM1640ASS20E  U  JBOD
252:7    19 Onln   - 372.611 GB SAS  SSD Y   N  512B HUSMM1640ASS20E  U  JBOD
------------------------------------------------------------------------------
The 4 that say JBOD are new; the previous 4 already existed. I can't figure out how to clear the JBOD type.

nvm, I had to change the controller personality from JBOD back to RAID.
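For anyone hitting the same thing, the storcli commands I'd expect to need look roughly like this (controller number and exact syntax may vary by storcli version and controller firmware):

```shell
# Show the current personality and JBOD setting on controller 0
storcli /c0 show personality
storcli /c0 show jbod

# Turn off JBOD behavior and switch the personality back to RAID;
# a reboot is typically required for a personality change to take effect
storcli /c0 set jbod=off
storcli /c0 set personality=RAID
```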
 
Last edited:
  • Like
Reactions: Samir

SPCRich

Active Member
Mar 16, 2017
154
47
28
39
I just learned a valuable lesson: DO NOT secure erase these. If you do, the drive no longer shows up in sg_scan, and the controller says it's 0KB in size. I can't figure out how to get 2 of mine to come back. I was doing a background init, but it was taking a while, so I thought I'd do a secure erase to speed it up. Now I have 2 bricked drives.
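One thing worth trying before writing them off: a SAS drive interrupted mid-format often reports 0 capacity until a FORMAT UNIT is re-issued and allowed to run to completion. A rough sg3_utils sketch (the /dev/sg2 path is just a placeholder for the affected drive):

```shell
# Check what the drive reports (0 blocks usually means an incomplete format)
sg_readcap /dev/sg2

# The sense data may report "format in progress" or a similar condition
sg_requests /dev/sg2

# Re-issue a FORMAT UNIT and let it finish, which can take a long time
sg_format --format --size=512 /dev/sg2
```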
 
  • Like
Reactions: Samir

josh

Active Member
Oct 21, 2013
305
86
28
Hey guys, I acquired more of these drives from another seller, but the writes seem a little high: between 80 and 300TB, with power-on hours around 1.4k. Should I get more or wait for fresher drives?
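If you want to double-check wear yourself: SAS drives like these don't expose SATA-style SMART attributes, so smartctl reads the SCSI log pages instead (device path below is just an example):

```shell
# Wear on SAS SSDs shows up as "Percentage used endurance indicator",
# alongside accumulated power-on time and read/write totals
smartctl -a /dev/sdc | grep -i -e 'endurance' -e 'accumulated'

# Full dump, including error counter log pages
smartctl -x /dev/sdc
```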
 
  • Like
Reactions: Samir