[Update: Seller Complaints Accumulating] HGST Ultrastar He10 - 10TB @ $129.95

Notice: Page may contain affiliate links for which we may earn a small commission through services like Amazon Affiliates or Skimlinks.

Samir

Post Liker and Deal Hunter Extraordinaire!
Jul 21, 2017
3,568
1,674
113
49
HSV and SFO
Well, I don't disagree; I'm just doing home stuff here. Again, second-hand SAS drives used to be available a lot cheaper per terabyte than SATA drives, and that was a huge incentive to get a SAS controller. Lately, looking at the fleabay markets, that advantage is pretty much gone here in Europe. I rest my case; I have no arguments.

edit: there's an argument that running SATA drives over PCIe lanes is no less 'professional'; in fact, it's less complicated and removes failure points like bad or flaky SAS controllers, expensive and possibly faulty cables, plus the extra conversion between standards. SAS used to be faster and more reliable; now it's no faster at all, and the physical drives are mostly all the same, just with different PCBs (that's essentially what nearline SAS is).

But I digress; your arguments are just as valid.

edit2: another issue is the ridiculous SAS cables, which require a SATA power connector on top of the SAS connector. In many cases where the chassis isn't designed for that, it makes for a flimsy connection that protrudes a lot further out from the drive; in some cases, like mine, I had trouble getting the side cover of my tower case to fit because the connector was so bulky.

edit3: seems like I did have arguments after all :cool:
I think your arguments are invalid or enterprise would be pushing sata vs sas in configurations > 4 drives.
 

twin_savage

Member
Jan 26, 2018
86
44
18
34
I think your arguments are invalid or enterprise would be pushing sata vs sas in configurations > 4 drives.
SAS still has benefits over SATA: its signaling voltages are higher, more than double SATA's, which greatly increases its ability to cope with marginal-quality cables and EMI. Between the higher voltages and the dual-port design, this is also why SAS drives use more power than SATA drives.

SAS drives also enjoy an expanded command set, which has real-world benefits when data read issues are encountered.

This is less of a hard-and-fast rule nowadays, but SAS drives used to have components binned to a higher quality than what went into SATA drives (platters, heads, voice-coil amps).



edit: my quoting didn't come out right. I was agreeing with you Samir.
 

Samir

Post Liker and Deal Hunter Extraordinaire!
Jul 21, 2017
3,568
1,674
113
49
HSV and SFO
SAS still has benefits over SATA: its signaling voltages are higher, more than double SATA's, which greatly increases its ability to cope with marginal-quality cables and EMI. Between the higher voltages and the dual-port design, this is also why SAS drives use more power than SATA drives.

SAS drives also enjoy an expanded command set, which has real-world benefits when data read issues are encountered.

This is less of a hard-and-fast rule nowadays, but SAS drives used to have components binned to a higher quality than what went into SATA drives (platters, heads, voice-coil amps).



edit: my quoting didn't come out right. I was agreeing with you Samir.
Yep, and that goes back to the origins of SAS as Serial Attached SCSI. SCSI had variants such as differential SCSI, which used better signaling to deal with the larger number of drives on a single channel compared to its consumer rival, IDE/PATA. And if we look at the origins of SATA, it evolved from the bottom: IDE depended on the host CPU for data transfers and was a pretty 'dumb' interface, while SCSI didn't use the host CPU, had its own command set, and could do things such as queue multiple commands between initiators and targets on the bus--things which only came to SATA as it and the SAS interface finally started sharing many features, right down to some parts of the hardware.

But as you mentioned, SAS is still 'the real deal' when it comes to drives if you want the best in storage. It's why it's the standard in enterprise, and why even Backblaze, in their quest to build cheap storage, was at one point using SAS controllers and expanders instead of multiple SATA controllers.

I was always pro SCSI back in the day, so being pro SAS is just par for the course for me. :D
 

eduncan911

The New James Dean
Jul 27, 2015
648
507
93
eduncan911.com
So wow, this thread got super long. It's been running since earlier this year. This is about the guy on eBay who sells the 10TB SAS drives. I always meant to purchase from him, but I just never got around to it. Then he switched to paid return shipping some months back, and I just deleted him from my watch list, as I try to avoid sellers who make you pay for return shipping. I see why they did it, but I still avoid them.

So what is the consensus now? The seller is sending out poor health drives?
Consensus is:

- Buy at your own risk, as the seller has numerous complaints about customer service and post-purchase support: returns, wrong orders, missing parts, etc. They will basically ghost you until you open an eBay claim.

- All drives are 4 to 5 years old with ~50,000 hours (powered on 24/7 for roughly five years) and heavy reads and writes during that whole time.

- Most drives seem to be fine, but don't invest any real money with this vendor, as continued use of hardware this old carries inherent risk.
 
  • Like
Reactions: Samir and Sleyk

mikevipe

New Member
Aug 15, 2020
3
7
3
Hello,

I was searching for some SMART errors (FIRMWARE IMPENDING FAILURE SEEK ERROR RATE TOO HIGH) and found this thread. The seller deals2day-364 is the same seller we have been purchasing our 8TB and 10TB SAS drives from. Altogether we have purchased over 260 drives and have had only about 9 failures over the last year; that's less than a 3.5% failure rate. 6 of those failures happened within the first 20-40 days, and even the few that were past the 30-day window the seller graciously accepted and exchanged. The ones that failed after 60+ days we just ate and threw away, as we initially ordered expecting a 5-10% failure rate on such old drives. The drives are a mixture of 2015, 2016, and 2017 units purchased over the past year, with between 35k and 55k hours.

My thought is that since the majority of failures happen out of the box, they could be due to shipping and handling. We have all seen the way UPS and FedEx kick these boxes for more than a few field goals.

When buying this kind of hardware, we expected higher failure rates. Our main nodes run 18TB and 20TB Seagate EXOS SAS drives, which makes these a pretty good deal comparatively for our daily and/or weekly cold-storage backups of critical infrastructure, giving us two off-site nodes with around 500TB each of usable RAID-protected storage.
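As a sanity check, the quoted failure rates pencil out; here is a quick sketch using the figures taken from the post above:

```python
# Failure-rate arithmetic using the figures quoted above.
drives_purchased = 260
total_failures = 9
early_failures = 6  # failed within the first 20-40 days

overall_rate = total_failures / drives_purchased
early_rate = early_failures / drives_purchased

print(f"overall: {overall_rate:.1%}")  # 3.5%
print(f"early:   {early_rate:.1%}")    # 2.3%
```

So roughly two-thirds of the failures landed in the DOA window, which is consistent with the shipping-and-handling theory below.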
 
  • Like
Reactions: itronin

Magnet

Active Member
Jan 25, 2018
213
160
43
North Florida
What are the non-homelab use cases for these used drives? I see folks buying tons of them and assume they're using them in a business scenario. I've never bought used drives for an enterprise scenario, but I guess it all depends on use case, budget, and how you configure/store/back up your important data.
 
  • Like
Reactions: Samir

mikevipe

New Member
Aug 15, 2020
3
7
3
What are the non-homelab use cases for these used drives? I see folks buying tons of them and assume they're using them in a business scenario. I've never bought used drives for an enterprise scenario, but I guess it all depends on use case, budget, and how you configure/store/back up your important data.
We use them in our off-site backup setups in an enterprise scenario. Our sites have moved to full solar + Tesla battery backup, so power is not an issue the way it is in the datacenter. All of our primary servers run brand-new EXOS SAS drives; using the cheaper used drives allows us to maintain two separate off-site backups, with one acting as a mirror/failover, for less than the cost of using brand-new EXOS drives for a single off-site backup. We have found about a 1-1.5% first-year failure rate with the EXOS drives, so 3.5% for the used drives at less than a third of the cost is an acceptable tradeoff for us. With multiple hot spares and the ability to replace failed drives within minutes to an hour, it has been well worth the savings.
 

Fritz

Well-Known Member
Apr 6, 2015
3,528
1,488
113
70
As far as data is concerned, there is no difference between SATA and SAS drives. I prefer SAS drives because they're faster and because they don't fail as often. Did I mention they're also cheaper when bought used?
 

kevindd992002

Member
Oct 4, 2021
122
6
18
I think I've managed to get myself into a bit of a runaround situation.
So, the one long test that has been endlessly running as far as the report is concerned isn't actually running. It's a remnant log entry: because I started the test but didn't let it finish, it stays in the log, but it's not actually running and really isn't important. I started a new -t short and watched it show a new status where it ran through and then showed complete in the #1 spot, and then I started a -t long, which is now showing progress.
I think the long test's 65535 seconds [1092.2 minutes] will stay there as a reference at all times, and that was what was tripping me up. I was thinking it was a progress bar and should be counting down.
However, running the test again now shows the output above with x.xx% remaining. It's a reverse counter, so it goes from 100% down to 0% rather than from 0% to 100%.

Now, as far as the BMS side of things, your Reddit repost makes sense. Since it's running in the background, I'm not worried about it.
Nothing like trying to figure something out and mistaking one thing for another.

Now we just need to figure out why we can't poll the vendor-specific information, because I would very much like to know the helium levels too, as that would be a good indicator of when to replace a drive.

Code:
Self-test execution status:             100% of test remaining
SMART Self-test log
Num  Test              Status                 segment  LifeTime  LBA_first_err [SK ASC ASQ]
     Description                              number   (hours)
# 1  Background long   Self test in progress ...   -     NOW                 - [-   -    -]
# 2  Background short  Completed                   -   28470                 - [-   -    -]
# 3  Background long   Aborted (by user command)   -   28417                 - [-   -    -]
# 4  Background long   Self test in progress ...   -     NOW                 - [-   -    -]
# 5  Background short  Completed                   -   28250                 - [-   -    -]

Long (extended) Self-test duration: 65535 seconds [1092.2 minutes]
Code:
Self-test execution status:             99% of test remaining
SMART Self-test log
Num  Test              Status                 segment  LifeTime  LBA_first_err [SK ASC ASQ]
     Description                              number   (hours)
# 1  Background long   Self test in progress ...   -     NOW                 - [-   -    -]
# 2  Background short  Completed                   -   28470                 - [-   -    -]
# 3  Background long   Aborted (by user command)   -   28417                 - [-   -    -]
# 4  Background long   Self test in progress ...   -     NOW                 - [-   -    -]
# 5  Background short  Completed                   -   28250                 - [-   -    -]

Long (extended) Self-test duration: 65535 seconds [1092.2 minutes]
I'm having a similar situation with an HGST drive that I just bought. Initially, I was trying to do a background long smartctl test, but it was erroring out saying it's an unsupported SCSI command. So I tried running a foreground long test (with the -C switch), but I couldn't tell whether it had any progress indicator. After a few minutes, I thought it was stuck and I closed the terminal window. I reconnected and ran another smartctl -a command, but that no longer wanted to spit out any information. I rebooted the machine to be able to run smartctl again. I did this twice and now get this:

Code:
root@epsilon:~# smartctl -x /dev/sdb
smartctl 7.3 2022-02-28 r5338 [x86_64-linux-6.1.0-21-amd64] (local build)
Copyright (C) 2002-22, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Vendor:               HITACHI
Product:              HUH72808CLAR8000
Revision:             M7K0
Compliance:           SPC-4
User Capacity:        8,001,563,222,016 bytes [8.00 TB]
Logical block size:   4096 bytes
LU is fully provisioned
Rotation Rate:        7200 rpm
Form Factor:          3.5 inches
Logical Unit id:      0x5000cca26110f8a4
Serial number:        VJG9AALX
Device type:          disk
Transport protocol:   SAS (SPL-4)
Local Time is:        Sat May 18 20:48:28 2024 PST
SMART support is:     Available - device has SMART capability.
SMART support is:     Enabled
Temperature Warning:  Disabled or Not Supported
Read Cache is:        Enabled
Writeback Cache is:   Disabled

=== START OF READ SMART DATA SECTION ===
SMART Health Status: OK

Current Drive Temperature:     45 C
Drive Trip Temperature:        60 C

Manufactured in week 03 of year 2017
Specified cycle count over device lifetime:  50000
Accumulated start-stop cycles:  219
Specified load-unload count over device lifetime:  600000
Accumulated load-unload cycles:  1594
Elements in grown defect list: 0

Vendor (Seagate Cache) information
  Blocks sent to initiator = 10392712251441152

Error counter log:
           Errors Corrected by           Total   Correction     Gigabytes    Total
               ECC          rereads/    errors   algorithm      processed    uncorrected
           fast | delayed   rewrites  corrected  invocations   [10^9 bytes]  errors
read:          0        2         0         2    4927247      67882.600           0
write:         0        1         0         1    3647684      54584.982           0
verify:        0        1         0         1      99455          0.952           0

Non-medium error count:        0

SMART Self-test log
Num  Test              Status                 segment  LifeTime  LBA_first_err [SK ASC ASQ]
     Description                              number   (hours)
# 1  Background short  Completed                   -   53640                 - [-   -    -]
# 2  Foreground long   Self test in progress ...   -     NOW                 - [-   -    -]
# 3  Background short  Completed                   -   53598                 - [-   -    -]
# 4  Foreground long   Self test in progress ...   -     NOW                 - [-   -    -]
# 5  Background short  Completed                   -   53596                 - [-   -    -]
# 6  Background short  Completed                   -   52134                 - [-   -    -]
# 7  Background short  Completed                   -   50647                 - [-   -    -]
# 8  Background short  Completed                   -   49206                 - [-   -    -]
# 9  Background short  Completed                   -   47622                 - [-   -    -]
#10  Background short  Completed                   -   46182                 - [-   -    -]
#11  Background short  Completed                   -   44623                 - [-   -    -]
#12  Background short  Completed                   -   43110                 - [-   -    -]
#13  Background short  Completed                   -   41646                 - [-   -    -]
#14  Background short  Completed                   -   40158                 - [-   -    -]
#15  Background short  Completed                   -   38694                 - [-   -    -]
#16  Background short  Completed                   -   37206                 - [-   -    -]
#17  Background short  Completed                   -   35766                 - [-   -    -]
#18  Background short  Completed                   -   34326                 - [-   -    -]
#19  Background short  Completed                   -   32886                 - [-   -    -]
#20  Background short  Completed                   -   31446                 - [-   -    -]

Long (extended) Self-test duration: 6 seconds [0.1 minutes]

Background scan results log
  Status: waiting until BMS interval timer expires
    Accumulated power on time, hours:minutes 53641:30 [3218490 minutes]
    Number of background scans performed: 268,  scan progress: 0.00%
    Number of background medium scans performed: 268


Protocol Specific port log page for SAS SSP
relative target port id = 1
  generation code = 5
  number of phys = 1
  phy identifier = 0
    attached device type: SAS or SATA device
    attached reason: unknown
    reason: unknown
    negotiated logical link rate: phy enabled; 6 Gbps
    attached initiator port: ssp=1 stp=1 smp=1
    attached target port: ssp=0 stp=0 smp=0
    SAS address = 0x5000cca26110f8a5
    attached SAS address = 0x5b8ca3a0efa70b01
    attached phy identifier = 5
    Invalid DWORD count = 8
    Running disparity error count = 0
    Loss of DWORD synchronization count = 2
    Phy reset problem count = 0
relative target port id = 2
  generation code = 5
  number of phys = 1
  phy identifier = 1
    attached device type: no device attached
    attached reason: unknown
    reason: power on
    negotiated logical link rate: phy enabled; unknown
    attached initiator port: ssp=0 stp=0 smp=0
    attached target port: ssp=0 stp=0 smp=0
    SAS address = 0x5000cca26110f8a6
    attached SAS address = 0x0
    attached phy identifier = 0
    Invalid DWORD count = 0
    Running disparity error count = 0
    Loss of DWORD synchronization count = 0
    Phy reset problem count = 0
Those two foreground tests never completed even after several days, and frankly I don't think they're running at all. Is this a cosmetic firmware bug too? A short smartctl test completes just fine. Was I wrong to exit the terminal while the test was running in the 'foreground'? Should I just wait for it (or use tmux/screen, just like with badblocks)?
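For anyone else tripped up by the same display: the SCSI/SAS status line reports work *remaining*, not progress, so it counts down from 100% to 0%. (The "Long (extended) Self-test duration: 65535 seconds" figure is likely just the maximum value of the 16-bit duration field, i.e. a ceiling rather than a real estimate.) A minimal parsing sketch makes the semantics explicit; the helper name is hypothetical, not part of smartmontools:

```python
import re

def percent_complete(smartctl_output: str):
    """Return self-test completion percent, or None if no status line is found."""
    m = re.search(r"Self-test execution status:\s+(\d+)% of test remaining",
                  smartctl_output)
    if m is None:
        return None
    # The drive reports work REMAINING; completion is the complement.
    return 100 - int(m.group(1))

sample = "Self-test execution status:             99% of test remaining"
print(percent_complete(sample))  # prints 1: the test has only just started
```

So "99% of test remaining" means the test is 1% done, not stuck at the end.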
 
  • Like
Reactions: Samir

Samir

Post Liker and Deal Hunter Extraordinaire!
Jul 21, 2017
3,568
1,674
113
49
HSV and SFO
SAS drives often do not have the full ATA command set present. You are not alone. I run badblocks, check for defect growth, and call it a day.
You do realize that the SATA command set has a lot of SCSI (SAS) commands in it, right? If anything, SAS should have a better command set available than SATA.
 

klui

༺༻
Feb 3, 2019
919
527
93
My understanding is that SAS and SATA are different protocols. While the SAS connector is designed so that it can accept SATA drives, SATA compatibility is optional for the controller, according to the Wikipedia page. A benefit of SAS is that its feature set is a superset of SATA's.

As I don't have a copy of the spec, I'm not sure whether SATA tunneling (STP) is actually optional, or whether every popular SAS vendor like Broadcom and Microsemi just includes the compatibility layer, making SATA support a de facto given. From a user perspective, it definitely seems that way.