Hitachi 3TB Ultrastar SAS HDD 7.2K HUS724030ALS64 (10x for $350 OBO, 20x for $650 OBO) free shipping


Sable

Active Horse
Oct 19, 2016
379
108
43
29
How would you ship a few hundred? In boxes of 20?
Where in the EU do the drives go?

I made an offer for 10 but haven't gotten a response yet. I'll let you know ... maybe at your price, maybe with drive reports.

Sent from my ONE E1003 using Tapatalk
I have a couple of pallets with servers coming in 2 weeks. I want to fill the servers with the HDDs and maybe put some aside if there is some extra space.
 

czl

New Member
May 14, 2016
25
4
3
Start a large transfer and see if any disks stick out while watching `iostat -xm`, for example in the await column. This gives far more detail than `zpool iostat` alone. Using this approach I fixed a similar problem by removing the one bad apple that was spoiling the performance of the entire array.
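A minimal sketch of that kind of check (run it while the large transfer is going, and the device names it reports are whatever your pool members are):

# -x = extended per-device statistics, -m = throughput in MB/s,
# refresh every 5 seconds:
iostat -xm 5
# Compare the await (or r_await/w_await) and %util columns across the pool
# members - one drive with a much higher await than its peers is the likely
# bad apple.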

I have 18 of these drives in a couple of systems. I don't seem to get very good write performance on them. Anyone else noticed this? For example, 8x HDD in ZFS raidz2, doing a 40GB sequential write to them sustains around 400MB/sec. Sequential read seems to be "okay", at around 800MB/sec.

In contrast, I have two other systems with 8x HDD (HGST 4TB SATA 7200RPM from 2014), similar raidz2, and I get 1GB/sec sequential reads, and about 800MB/sec sequential writes.

All systems have an LSI SAS2008-based controller with a Supermicro 825TQ backplane.
 

Xamayon

New Member
Jan 7, 2016
25
14
3
I wonder if these are old Backblaze drives.
Probably not; I believe they've said they destroy drives when they're done with them. That, and they seem to use drives until they die anyway.

The first 20 of the set I ordered arrived. I don't have a system with an HBA and free slots ready, so I won't be able to do SMART checks easily. I'll probably just put them in an array and see what happens.
 

BLinux

cat lover server enthusiast
Jul 7, 2016
2,669
1,081
113
artofserver.com
Start a large transfer and see if any disks stick out while watching `iostat -xm`, for example in the await column. This gives far more detail than `zpool iostat` alone. Using this approach I fixed a similar problem by removing the one bad apple that was spoiling the performance of the entire array.
Thanks for the suggestion. I did observe iostat -m earlier and didn't notice anything standing out or unusual for any particular drive. However, elsewhere in this thread someone finally pointed out to me that the drives in the OP are not 7K4000 drives, but 7K3000 drives from the previous generation. My other systems with 4TB HGST drives are the 7K4000 variety, so I wasn't really making an apples-to-apples comparison. I think 400MB/s sequential write across 8x 3TB 7K3000 HDDs in raidz2 is perhaps the expected performance level for that generation.

Last night I ran sg_format across all eight 7K3000 3TB drives in order to get them to work in FreeNAS. After that fiasco, I'm still getting the same 400MB/s sequential write, so it is behaving consistently. It is interesting to know that there's such a huge write-performance difference between the 7K3000 and the 7K4000! At least I feel like I got something more than just an extra 1TB of storage when I paid $75 each for the 4TB 7K4000 drives...
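For anyone wanting to reproduce this kind of sequential test, a rough sketch using GNU dd; the pool name "tank" and the file path are placeholders, not from the posts above:

# Make sure compression is off on the target dataset, or the zeros will
# compress away and the numbers are meaningless:
zfs set compression=off tank
# ~40GB sequential write, syncing at the end so the result isn't inflated:
dd if=/dev/zero of=/tank/seqtest.bin bs=1M count=40960 conv=fdatasync status=progress
# Sequential read of the same file (note: ZFS may serve part of it from the
# ARC, so use a file larger than RAM for a cleaner read number):
dd if=/tank/seqtest.bin of=/dev/null bs=1M status=progress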
 

Xamayon

New Member
Jan 7, 2016
25
14
3
Wasn't able to pull SMART data; the HBA I tried doesn't seem to pass it through properly. Performance-wise they seem pretty good so far:

[attachment: 3tb sas cdm.JPG]
 

BLinux

cat lover server enthusiast
Jul 7, 2016
2,669
1,081
113
artofserver.com
Wasn't able to pull SMART data; the HBA I tried doesn't seem to pass it through properly. Performance-wise they seem pretty good so far:

View attachment 6933
Those SEQ results seem within the ballpark of what I was seeing, so it does seem these drives are somehow only about half as fast on sequential writes as on sequential reads. Nonetheless, at this price they are great for bulk storage where slow writes don't matter.
 

Xamayon

New Member
Jan 7, 2016
25
14
3
When I get a chance I'll try them with the larger sectors on a T10-capable controller. Just speculation, but using them as normal 512-byte disks may degrade write performance due to sequential writes not actually being sequential.
 

BLinux

cat lover server enthusiast
Jul 7, 2016
2,669
1,081
113
artofserver.com
When I get a chance I'll try them with the larger sectors on a T10-capable controller. Just speculation, but using them as normal 512-byte disks may degrade write performance due to sequential writes not actually being sequential.
I didn't understand most of what you said above. Can you please educate me? How are you going to make the sectors larger? What is a "T10-capable controller"? And how are sequential writes not really sequential? <insert mind blown sound here> :D
 

Xamayon

New Member
Jan 7, 2016
25
14
3
I didn't understand most of what you said above. Can you please educate me? How are you going to make the sectors larger? What is a "T10-capable controller"? And how are sequential writes not really sequential? <insert mind blown sound here> :D
These disks support multiple sector sizes (IIRC 512, 520, and 528 bytes) without altering their reported capacity. There are a few ways they could do this, but one of the worst possible ways I can think of would be to always lay the sectors out as 528 bytes and just skip the latter part of each sector when writing in 512-byte mode. >_>;
Doing that would make "sequential" writes non-sequential... I would certainly hope they didn't do it that way, but testing with the sector size set higher on a controller which supports the extended sector sizes (the T10 stuff) might shed some light.

It could also just be that they are optimized for high queue depths, as that was where the higher performance showed up, but the huge drop in the single-queue-depth run is still a bit weird. Who knows, I'm just thinking out loud.
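A quick way to check what a given drive is currently formatted to (logical block size and whether protection information is enabled) is sg_readcap from sg3_utils; the /dev/sg2 device name below is just a placeholder:

# Report current capacity, logical block size and protection settings:
sg_readcap --long /dev/sg2
# In the output, "Logical block length=512 bytes" shows the sector size,
# and a "Protection: prot_en=1, p_type=1" line would indicate T10-PI Type 2.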
 

BLinux

cat lover server enthusiast
Jul 7, 2016
2,669
1,081
113
artofserver.com
These disks support multiple sector sizes (IIRC 512, 520, and 528 bytes) without altering their reported capacity. There are a few ways they could do this, but one of the worst possible ways I can think of would be to always lay the sectors out as 528 bytes and just skip the latter part of each sector when writing in 512-byte mode. >_>;
Doing that would make "sequential" writes non-sequential... I would certainly hope they didn't do it that way, but testing with the sector size set higher on a controller which supports the extended sector sizes (the T10 stuff) might shed some light.

It could also just be that they are optimized for high queue depths, as that was where the higher performance showed up, but the huge drop in the single-queue-depth run is still a bit weird. Who knows, I'm just thinking out loud.
I get what you're saying now! Thanks for explaining it...
 

Xamayon

New Member
Jan 7, 2016
25
14
3
I was able to pull SMART data off one of them. Not much there; is there a better command to use for SAS drives? The load/unload count is a bit high, but that might indicate they weren't being thrashed 24/7 for the past 5 years. They might have just seen periodic usage for backups/archiving or some such:

smartctl 6.2 2013-07-26 r3841 [x86_64-linux-3.19.0-25-generic] (local build)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Vendor: HITACHI
Product: HUS723030ALS640
Revision: A120
User Capacity: 3,000,592,982,016 bytes [3.00 TB]
Logical block size: 512 bytes
Rotation Rate: 7200 rpm
Form Factor: 3.5 inches
Logical Unit id: 0x5000cca01ab0bd84
Serial number: YHK46XXX
Device type: disk
Transport protocol: SAS
Local Time is: Fri Nov 10 11:31:23 2017 EST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
Temperature Warning: Enabled

=== START OF READ SMART DATA SECTION ===
SMART Health Status: OK

Current Drive Temperature: 21 C
Drive Trip Temperature: 85 C

Manufactured in week 21 of year 2012
Specified cycle count over device lifetime: 50000
Accumulated start-stop cycles: 15
Specified load-unload count over device lifetime: 300000
Accumulated load-unload cycles: 57610
Elements in grown defect list: 0

Vendor (Seagate) cache information
Blocks sent to initiator = 8238602911744

Error counter log:
           Errors Corrected by           Total   Correction     Gigabytes    Total
               ECC          rereads/     errors   algorithm     processed    uncorrected
           fast | delayed   rewrites  corrected  invocations  [10^9 bytes]   errors
read:          0     1654          0       1654        1663         7.762         0
write:         0     4586          0       4586         543        50.422         0
verify:        0        0          0          0       97896         0.000         0

Non-medium error count: 0

SMART Self-test log
Num  Test               Status                     segment  LifeTime  LBA_first_err [SK ASC ASQ]
     Description                                   number   (hours)
# 1  Background short   Aborted (by user command)        1        27              - [-   -    -]
# 2  Background short   Completed                        -        27              - [-   -    -]
Long (extended) Self Test duration: 27182 seconds [453.0 minutes]
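As a possible answer to the question above (not something confirmed in this thread): smartctl's more verbose mode tends to show extra SCSI/SAS log pages; /dev/sdX below is a placeholder:

# Print everything smartctl knows about the device; on SAS drives this
# typically adds the background scan results and SAS phy error counters:
smartctl -x /dev/sdX
# Start a long self-test and read the self-test log back later:
smartctl -t long /dev/sdX
smartctl -l selftest /dev/sdX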
 

Xamayon

New Member
Jan 7, 2016
25
14
3
Also, I was able to reformat these disks into T10-PI Type 2 mode, which provides end-to-end data protection capabilities when used with a modern SAS controller such as the MegaRAID 9265-8i. More info on T10-PI: https://www.hgst.com/sites/default/files/resources/End-to-end_Data_Protection.pdf

The command to enable T10-PI is (takes 5-6 hours; /dev/sgX is the drive's SCSI generic device):
sg_format -v --long --format --fmtpinfo=3 --pfu=0 /dev/sgX

Turning T10-PI back off to return to normal 512-byte sectors:
sg_format -v --long --format --fmtpinfo=0 --pfu=0 /dev/sgX
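Since each format takes 5-6 hours, one way to handle a whole batch is to start them all in parallel; a rough sketch, assuming the drives show up as /dev/sg2 through /dev/sg9 (placeholders):

# Kick off a T10-PI Type 2 format on each drive in the background,
# logging each one separately, then wait for all of them to finish:
for dev in /dev/sg2 /dev/sg3 /dev/sg4 /dev/sg5 /dev/sg6 /dev/sg7 /dev/sg8 /dev/sg9; do
    sg_format -v --long --format --fmtpinfo=3 --pfu=0 "$dev" > "format_${dev##*/}.log" 2>&1 &
done
wait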
 

BLinux

cat lover server enthusiast
Jul 7, 2016
2,669
1,081
113
artofserver.com
Also, I was able to reformat these disks into T10-PI Type 2 mode, which provides end-to-end data protection capabilities when used with a modern SAS controller such as the MegaRAID 9265-8i. More info on T10-PI: https://www.hgst.com/sites/default/files/resources/End-to-end_Data_Protection.pdf

The command to enable T10-PI is (takes 5-6 hours; /dev/sgX is the drive's SCSI generic device):
sg_format -v --long --format --fmtpinfo=3 --pfu=0 /dev/sgX

Turning T10-PI back off to return to normal 512-byte sectors:
sg_format -v --long --format --fmtpinfo=0 --pfu=0 /dev/sgX
Did any of the reformatting change the sequential write performance?