EXPIRED Used HGST 3.82TB U.2 SSD. Posted $135 (updated). Accepts lower.

Oct 20, 2021
I can vouch for the company, these drives were likely just removed datacenter drives and will probably be in great health. These guys are located in MN and often have great deals on good tech. I've bought a handful of things from them. I would buy some of these, but I need to stay my hand at buying stuff just because it is a good deal!
 
  • Like
Reactions: chrgrose

Prophes0r

Active Member
Sep 23, 2023
East Coast, USA
Okay...plan time.

Option A
Option B
Option C
Option D
  • $65 - PCI-E x16 to 4x U.2 SFF-8639 Expansion Card with SATA power port
  • Pros
    • x16 is an excuse to buy 2 more drives.
    • Should get here in 10 days, cutting my return period for testing those drives pretty close
    • Sketchy SATA power cables aside, that's 5A more power budget. 75W + ~50W is much more comfortable.
    • Don't have to find a mounting solution
  • Cons
    • Most expensive Option
    • Will always take up an x16 slot. MIGHT be able to use it with 2 drives in an x8 slot with 25W + ~50W. Might not work.
    • Have to figure out a cooling solution.
 

ca3y6

Well-Known Member
Apr 3, 2021
For mounting the drives, I just 3D printed two drive holders that I screw the drives to; they basically separate the drives from each other to let air flow between them. I'm not convinced it really matters which direction the airflow goes: those drives typically have an all-metal case and will be sitting right between the motherboard and the main chassis fans, so they'll get a lot more airflow than they would in a 2.5" bay. I'm more concerned about the impact of obstructing airflow to the motherboard. Also a bit concerned about the force of those thick SFF-8643 cables on the U.2 connector if the case gets too cramped.

holder.png
 

Prophes0r

Active Member
Sep 23, 2023
East Coast, USA
For mounting the drives, I just 3D printed two drive holders that I screw the drives to; they basically separate the drives from each other to let air flow between them. I'm not convinced it really matters which direction the airflow goes: those drives typically have an all-metal case and will be sitting right between the motherboard and the main chassis fans, so they'll get a lot more airflow than they would in a 2.5" bay. I'm more concerned about the impact of obstructing airflow to the motherboard. Also a bit concerned about the force of those thick SFF-8643 cables on the U.2 connector if the case gets too cramped.

I...object to 3D printing what is essentially a flat plate with holes drilled in it.

You can grab some aluminum flashing pieces for $1 from [insert hardware store] and drill some holes with the right spacing.


I'm leaning towards the powered 4x U.2 adapter card myself...
I'm going to wait till tomorrow though. It's too late to be making $$ decisions.
 

Cruzader

Well-Known Member
Jan 1, 2021
I...object to 3D printing what is essentially a flat plate with holes drilled in it.

You can grab some aluminum flashing pieces for $1 from [insert hardware store] and drill some holes with the right spacing.
Drawing and printing it would take me less time than going out to buy the aluminium.
 
  • Like
Reactions: EasyRhino

BackupProphet

Well-Known Member
Jul 2, 2014
Stavanger, Norway
intellistream.ai
I would think metadata is more about IOPS than sequential read/write speed. I am looking at my ZFS special vdev, which is in a backup ZFS pool, and the NVMe drives are hardly doing any writes.
This! Special metadata devices are about balancing I/O operations between HDDs and SSDs. You do not need much sequential read/write; you will be fine with SATA SSDs. But latency is still important, so NVMe is better.
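For anyone setting this up, adding a special metadata vdev is a one-liner; a minimal sketch, assuming a pool named `tank` and placeholder device paths (adjust both for your system):

```shell
# The special vdev is pool-critical: if it fails, the whole pool is
# lost, so always mirror it.
zpool add tank special mirror /dev/nvme0n1 /dev/nvme1n1

# Optionally steer small data blocks (not just metadata) onto the SSDs:
zfs set special_small_blocks=64K tank
```

With `special_small_blocks` set, any block at or below that size lands on the special vdev, which is where the latency of NVMe pays off.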
 

Prophes0r

Active Member
Sep 23, 2023
East Coast, USA
I ordered 2x more.

The first 2 just got here.
I'll post the % used on my drives when I have time today.

I ended up getting that 4x U.2 card with the external power connector, but it won't be here till November.
To test these drives, I'll need to use the ONLY U.2 to M.2 adapter I have, which is currently connecting the P905 Optane drive in this desktop. Bleh.
 

EasyRhino

Well-Known Member
Aug 6, 2019
man, I wanted this because it sounds fun and a good deal, but TBH I have a 3.2TB SAS SSD that is already more than I need.
 
  • Like
Reactions: nexox

Prophes0r

Active Member
Sep 23, 2023
East Coast, USA
I want a cheap plx bifurcation card
STH Forum post on some cards.

Note: A PLX switch and bifurcation are opposite solutions.
Bifurcation splits a slot's existing lanes into multiple smaller links.
A switch creates new links from a slot's single larger link.
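You can see the difference on a running Linux box; a rough sketch (the `01:00.0` bus address is a placeholder, look yours up in the tree output first):

```shell
# Show the PCIe topology as a tree. With bifurcation, each drive hangs
# directly off a CPU/chipset root port. With a PLX/switch card, an
# extra PCI bridge level appears, with the drives behind its
# downstream ports.
lspci -tv

# Check the negotiated link width and speed for one NVMe device
# (replace 01:00.0 with the address from the tree above):
sudo lspci -vv -s 01:00.0 | grep -E 'LnkCap|LnkSta'
```

Comparing `LnkCap` (what the device supports) with `LnkSta` (what it negotiated) is a quick way to confirm each drive actually got its x4 link.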

If you want a card for U.2, and want to split an x8 slot into 4x 4 lane links, the CEACENT ANU24PE08 is relatively cheap at $70 USD.

If you want cheaper (and lower performance), this card will split a PCIe 3.0 x2 slot into 4x 1-lane links for ~$40 USD. M.2 connector though, so a different use case. Make sure to pick the "16GTS PCI-E X4" one. You don't want the PCIe 2.0 ones, and the other PCIe 3.0 ones are expensive enough that you have better options.

If you want REALLY cheap, this card splits a single PCIe 2.0 x1 slot into 2x 1-lane links on M.2 for ~$18 USD. Not worth it except in very specific cases. Maybe great for a 2-drive boot mirror if you have a spare x1 slot you aren't using, on a board with limited other connectivity?
 

pacmancat

New Member
Jan 18, 2022
The card version is also available at $110/offer.
Month-old listing with 2 price drops and just 1 sold; put a $60/ea offer on some.
For the PCIe card version, vendor accepted 2 @ $75 each. We'll see what data I'll get out of them on receipt.
I followed ccie4526's lead and offered $75/ea for a few of the HHHL PCIe versions. They arrived today, and show 7 years of power-on hours and ~130TB of writes. Considering that they're read-intensive but still rated for ~5.5PB of lifetime writes, I'd consider that a hell of a decent deal.

They're idling at around 65°C in a desktop chassis with less-than-great airflow, with bursty writes spiking them up close to 80°C. Considering that they idle at 9.5W, read at 19.5W, and write at 20-25W, that doesn't seem awful. The datasheet shows they throttle at 90°C and shut down at 95°C, so I'll probably rig up an extra fan to keep them from roasting themselves if I don't end up moving them to a racked machine.
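For keeping an eye on them in the meantime, nvme-cli can poll the composite temperature; a minimal sketch, assuming the drive enumerates as /dev/nvme0 (check `nvme list` for yours):

```shell
# Requires the nvme-cli package. Polls the drive's composite
# temperature (plus warning/critical counters if you drop the grep)
# every 10 seconds:
watch -n 10 "sudo nvme smart-log /dev/nvme0 | grep -i '^temperature'"
```

The same smart-log output also includes `warning_temp_time` and `critical_comp_time`, which tell you whether the drive has already spent time above its thresholds.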

There were about 20 left in the listing when I ordered mine, and I just checked now--sold out. Hopefully it was the STH Effect™ and y'all got in on it. Honestly, I probably would have grabbed a few more at that price, just because.
 

ca3y6

Well-Known Member
Apr 3, 2021
The problem with HHHL is that they take a full PCIe slot, and those precious PCIe lanes would be better used with bifurcation to multiple U.2 drives. So personally, as much as I like a good deal, I couldn't justify buying one of those.
 

TRACKER

Active Member
Jan 14, 2019
I followed ccie4526's lead and offered $75/ea for a few of the HHHL PCIe versions. They arrived today, and show 7 years of power-on hours and ~130TB of writes. Considering that they're read-intensive but still rated for ~5.5PB of lifetime writes, I'd consider that a hell of a decent deal.

They're idling at around 65°C in a desktop chassis with less-than-great airflow, with bursty writes spiking them up close to 80°C. Considering that they idle at 9.5W, read at 19.5W, and write at 20-25W, that doesn't seem awful. The datasheet shows they throttle at 90°C and shut down at 95°C, so I'll probably rig up an extra fan to keep them from roasting themselves if I don't end up moving them to a racked machine.

There were about 20 left in the listing when I ordered mine, and I just checked now--sold out. Hopefully it was the STH Effect™ and y'all got in on it. Honestly, I probably would have grabbed a few more at that price, just because.
That's hot! You need to think about some kind of cooling for the drive(s).
 

Prophes0r

Active Member
Sep 23, 2023
East Coast, USA
By the way.

The 3x working ones were DEFINITELY used for "read intensive" workloads.
Code:
Available Spare:                    100%
Available Spare Threshold:          10%
Percentage Used:                    1%
Data Units Read:                    109,454,739,956 [56.0 PB]
Data Units Written:                 577,844,157 [295 TB]
Host Read Commands:                 88,021,380,192
Host Write Commands:                653,025,118
...
Power On Hours:                     42,956
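For anyone wondering how smartctl turns those raw counters into PB/TB: the NVMe spec defines one Data Unit as 1,000 sectors of 512 bytes, i.e. 512,000 bytes, so the arithmetic checks out:

```shell
# One NVMe Data Unit = 1,000 * 512 bytes = 512,000 bytes.
bytes_read=$((109454739956 * 512000))
bytes_written=$((577844157 * 512000))

echo "$bytes_read bytes read"        # 56040826857472000 -> ~56.0 PB (decimal)
echo "$bytes_written bytes written"  # 295856208384000 -> ~295.9 TB (decimal)
```

Note these are decimal (powers of 1000) petabytes/terabytes, which is what smartctl reports in the bracketed figures.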
 
  • Like
Reactions: nexox

ccie4526

Active Member
Jan 25, 2021
By the way.

The 3x working ones were DEFINITELY used for "read intensive" workloads.
Code:
Available Spare:                    100%
Available Spare Threshold:          10%
Percentage Used:                    1%
Data Units Read:                    109,454,739,956 [56.0 PB]
Data Units Written:                 577,844,157 [295 TB]
Host Read Commands:                 88,021,380,192
Host Write Commands:                653,025,118
...
Power On Hours:                     42,956
Mine is running a little warm but yeah, definitely read-intensive workload.
CrystalDiskInfo_20241026073811.png