EXPIRED Used HGST 3.82TB U.2 SSD. Posted $135 (updated). Accepts lower offers.


Prophes0r

Member
Sep 23, 2023
73
64
18
East Coast, USA
HGST HUSPR3238ADP301 3.82TB / 3820GB 2.5" PCIe U.2 SSD Solid State Drive Grade A | eBay

I offered $115 each for 2x. Accepted.

I'm considering going back for 2x more...

Datasheet
PCIe 3.0 x4.
5.5 PBW (0.8 Drive Writes/day)

I'm starting to see a bunch of these ~4TB U.2 drives on eBay stating 70-85% remaining use, which is PLENTY for a Lab.
This one's price started lower than what I was going to offer on some of the others.

As long as it has 50% remaining use I'll be more than happy.
This is less than the cost of a new 2TB M.2 drive with DRAM, which comes rated for around 500TB written.
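If you want to sanity-check the endurance math, here's the back-of-the-envelope version (assuming the 0.8 DWPD figure is rated over the usual 5-year warranty window, and a hypothetical drive at 50% life remaining):

```python
# Back-of-the-envelope endurance math for the HUSPR3238ADP301.
# Assumption: the 0.8 DWPD figure is rated over a 5-year warranty period.
CAPACITY_TB = 3.82          # drive capacity in TB
RATED_PBW = 5.5             # rated endurance, petabytes written
WARRANTY_YEARS = 5          # assumed warranty window behind the DWPD rating

rated_tbw = RATED_PBW * 1000
dwpd = rated_tbw / (CAPACITY_TB * 365 * WARRANTY_YEARS)
print(f"Implied endurance: {dwpd:.2f} drive writes/day")   # ~0.79 DWPD

# A hypothetical example at 50% life remaining still has ~2750 TBW left,
# versus ~500 TBW for a typical new 2TB consumer M.2 drive.
remaining_tbw = rated_tbw * 0.50
print(f"Remaining at 50% wear: {remaining_tbw:.0f} TBW "
      f"(~{remaining_tbw / 500:.1f}x a 500 TBW consumer drive)")
```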

NOTE: Can someone explain how to get the little boxed eBay preview in a post here? I couldn't figure it out.
 
Last edited:

bwahaha

Active Member
Jun 9, 2023
117
73
28

dunno. I hit the preview button first, before posting this reply. Maybe that has something to do with it?
Made a kind of lowball offer, seeing as I don't have anything to put them in.
 
  • Like
Reactions: Samir

josh

Active Member
Oct 21, 2013
621
196
43
Would love to buy a bunch, but any recommendations on a high-density cloud server option for U.2 drives, like the C6320 series? Can I just buy a C6400 off eBay and swap the backplane for the NVMe one?
 
Last edited:

ca3y6

Member
Apr 3, 2021
86
44
18
Good find. I have automated searches on eBay, but not for that odd disk size (3.82TB vs the usual 3.84TB).
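If anyone wants to widen their own saved searches, here's a rough sketch that polls both capacity spellings via eBay's Browse API; it assumes you already have a developer OAuth application token, and the endpoint/field names should be treated as a starting point rather than a drop-in script:

```python
# Rough sketch: poll eBay's Browse API for both capacity labels of these drives.
# Assumes an OAuth application token in the EBAY_TOKEN environment variable.
import os
import requests

SEARCH_URL = "https://api.ebay.com/buy/browse/v1/item_summary/search"
QUERIES = ["HGST 3.82TB U.2", "HGST 3.84TB U.2"]   # catch both capacity spellings

def search(query: str, limit: int = 20):
    resp = requests.get(
        SEARCH_URL,
        params={"q": query, "limit": limit},
        headers={"Authorization": f"Bearer {os.environ['EBAY_TOKEN']}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("itemSummaries", [])

for q in QUERIES:
    for item in search(q):
        price = item.get("price", {})
        print(f"{price.get('value')} {price.get('currency')}  {item['title']}")
        print(f"  {item.get('itemWebUrl')}")
```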
 
  • Like
Reactions: nexox

gb00s

Well-Known Member
Jul 25, 2018
1,253
667
113
Poland
There are people using boards with up to 4-6 PCIe x16 slots plus PCIe cards that can take 4x NVMe drives each with bifurcation. That gives you 16+ drives without a backplane.

From my experience, these specific drives have terrible write performance. If you just want to use them as part of a file server that is usually used for reading files only, fine. But other than that, avoid.
 
  • Like
Reactions: ghxst

ca3y6

Member
Apr 3, 2021
86
44
18
From my experience, these specific drives have terrible write performance. If you just want to use them as part of a file server that is usually used for reading files only, fine. But other than that, avoid.
How terrible? I had a similarly bad experience with the PM963, which tops out around 1GB/s sequential write. Might as well be a SAS drive at that speed. Reads are about twice as fast.
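If you want hard numbers before trusting a used drive in a pool, fio is the proper tool, but a crude sequential-write check like this works in a pinch (the mount point below is just a placeholder, and caching/compression can inflate the result):

```python
# Crude sequential-write throughput check. fio is the proper tool; this is a
# quick-and-dirty sketch. /mnt/testdrive is a placeholder for wherever the
# drive under test is mounted. Writes ~4 GiB of random data, fsyncs, reports MB/s.
import os
import time

TEST_FILE = "/mnt/testdrive/seqwrite.tmp"    # placeholder path on the drive under test
CHUNK = os.urandom(16 * 1024 * 1024)         # 16 MiB of random data (defeats compression)
TOTAL_BYTES = 4 * 1024 ** 3                  # ~4 GiB total

start = time.monotonic()
with open(TEST_FILE, "wb") as f:
    written = 0
    while written < TOTAL_BYTES:
        f.write(CHUNK)
        written += len(CHUNK)
    f.flush()
    os.fsync(f.fileno())                     # make sure it actually hit the drive
elapsed = time.monotonic() - start

print(f"{written / elapsed / 1e6:.0f} MB/s sequential write")
os.remove(TEST_FILE)
```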
 

Prophes0r

Member
Sep 23, 2023
73
64
18
East Coast, USA
From my experience, these specific drives have terrible write performance. If you just want to use them as part of a file server that is usually used for reading files only, fine.
How bad are we talking?

I'm just about to replace my current 2x NAS (z2 4TBx10 each) with a new one (z2 10TBx12).
I was planning on using some NVMe drives as a Special Metadata VDEV to speed up media library scans and torrent access times.
I have 4x 1TB 970 Evos for a 2x2, but I was nervous that they would fill up too fast.
(I also have 5x 118GB Optane drives from that last fire sale. But I'm CERTAIN those are going to be too small...probably.)

Special Metadata is mostly about read performance and latency/IOPS anyway though right?
 

ca3y6

Member
Apr 3, 2021
86
44
18
I would think metadata is more about IOPS than sequential read/write speed. Looking at the ZFS special vdev on my backup pool, the NVMe drives are hardly doing any writes.
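If you want to check the same thing on your own pool, watching the special vdev's allocation and per-vdev I/O is enough; a minimal sketch using the standard zpool tools (the pool name 'tank' is a placeholder):

```python
# Quick look at how full/busy a special vdev is. Shells out to standard zpool
# commands; 'tank' is a placeholder pool name. Check ALLOC/FREE on the mirror
# under the 'special' section of `zpool list -v`, and the write ops/bandwidth
# columns for those devices in `zpool iostat -v`.
import subprocess

POOL = "tank"  # placeholder pool name

# Per-vdev capacity: how full the special mirror actually is.
subprocess.run(["zpool", "list", "-v", POOL], check=True)

# Per-vdev I/O over a 10-second sample: are the special NVMe drives seeing writes?
subprocess.run(["zpool", "iostat", "-v", POOL, "10", "2"], check=True)
```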
 
  • Like
Reactions: BackupProphet

josh

Active Member
Oct 21, 2013
621
196
43
There are people using boards with up to 4-6 PCIe x16 slots plus PCIe cards that can take 4x NVMe drives each with bifurcation. That gives you 16+ drives without a backplane.

From my experience, these specific drives have terrible write performance. If you just want to use them as part of a file server that is usually used for reading files only, fine. But other than that, avoid.
Trying to replace a multi-node C6220 Ceph cluster, so I'm looking for density per node. Servers with 4-6 x16 slots would be quite the bump up in rack space. Can you point me to one of those 4x NVMe cards, btw? The ones I found have no cooling, and I feel like even if I manage to fit all of them in one server they'll burn out in no time.
 
Last edited:

Prophes0r

Member
Sep 23, 2023
73
64
18
East Coast, USA
I just realized a potential issue for my plan.

The datasheet lists power usage as 25w active / 8w idle.
I was planning on using 2x of these directly on a dual-U.2 PCIe x8 card like...
...since I know there have been issues using cheap cables, and shorter distances are better.
(I believe the issue is more with PCIe 4.0 though. Still, less cables...)

But the power budget on an x8 slot is 25w, and two of these at 25w active would want ~50w before the card itself...
 

nexox

Well-Known Member
May 3, 2023
1,271
587
113
I believe those dual U.2 adapters exist with aux power connections, but you'll still have an issue cooling them because there's no great way to get the airflow moving the right direction (note those holes in the ends of the drive; air probably needs to go through those). I have had fine results with cables on PCIe 3.0, but past two of them it started getting really difficult to route them and convince them to stay attached to the drives.
 

Prophes0r

Member
Sep 23, 2023
73
64
18
East Coast, USA
I believe those dual U.2 adapters exist with aux power connections...
I've literally been searching Aliexpress for over an hour and haven't found any with external power.

...but you'll still have an issue cooling them because there's no great way to get the airflow moving the right direction...
If I did manage to find a card with external power, I'd have to figure something out. Even if it meant taking the cases off and/or drilling more vent holes in them.

...but past two of them it started getting really difficult to route them and convince them to stay attached to the drives.
So I simultaneously have too much room, and not enough room.
Those are 25" deep chassis. Enough to give me a full 7" depth between the edge of an H11SSL-i Motherboard and the fan on the back of the hot-swap enclosure.
On the other hand, there isn't anywhere else to mount more drives unless/until I rig up an internal cage for 2.5" drives.
(I had also planned to add 4x more 10TB drives over the next 18-24 months, or 4x 2TB SATA SSDs, and I didn't have a real plan for where to mount those either. And I don't need the SATA SSDs now that I'm buying U.2 anyway.)


I'm kinda in panic mode right now. I had the old-plan stuff in my Ali cart, and I wasn't expecting those drives to be here in, like, 36 more hours.
I don't actually have a way to test them unless I pull the one M.2-to-SFF-8639 adapter I own, which is currently connecting the Optane system drive on this machine.

I gotta buy SOMETHING and get it moving, but I hate-Hate-HATE buying single-use/disposable solutions.
I don't want to have to keep buying different cables for the same drives at $25 a pop because I changed from a simple slot-to-port adapter to a PCIe switch card, or even a tri-mode HBA like a 9400-8i or something.

AArrgggg. I hate being on a "student budget"...