EXPIRED 50-65% health 3.84TB NetApp SAS SSD - eBay - $130


jbv1982

Member
Nov 5, 2017
That makes me feel a bit better about not countering with $80 first, since he denied 75/9. Also that shipping is crazy.

two of them left, it went from 20 to 2 in less than 24 hours. I guess someone else saw my post.
 

foureight84

Well-Known Member
Jun 26, 2018
That makes me feel a bit better about not countering with $80 first, since he denied 75/9. Also that shipping is crazy.

two of them left, it went from 20 to 2 in less than 24 hours. I guess someone else saw my post.
I told my friend about the deal. He might have bought several.
 

foureight84

Well-Known Member
Jun 26, 2018
Got my hands on four of them from my buddy. Two are 49% used, the other two are 37% and 39% used. I've got a fan blowing across them and an exhaust fan in the airflow path (Fractal Node 804 case). They're idling at 33C. I haven't put a load on them yet to see what peak temps look like. They also came attached to caddies.

EDIT: Putting a moderate load on them by performing replication across vdevs. Highest temp is 39C. That's actually really good.
 

luckylinux

Well-Known Member
Mar 18, 2012
It feels like my package got lost in the US. FedEx tracking shows no updates since 2025-06-10 :(.
 

jbv1982

Member
Nov 5, 2017
It feels like my package got lost in the US. FedEx tracking shows no updates since 2025-06-10 :(.
Yeahhhhhh that’s a bit of a wait. I just had something sent normal post from Germany to the US and it only took 10 days total.
I hope you got yours.
The four I bought were all 50-55% life remaining. I’ve got seven of them in a RAIDZ-1 and holy shit they are fast.
 

foureight84

Well-Known Member
Jun 26, 2018
Yeahhhhhh that’s a bit of a wait. I just had something sent normal post from Germany to the US and it only took 10 days total.
I hope you got yours.
The four I bought were all 50-55% life remaining. I’ve got seven of them in a RAIDZ-1 and holy shit they are fast.
Using mine as iSCSI over 10GbE, 4 in RAIDZ-1. I was going to buy 8 of them, but I kept 4 spinning disks just in case, since I have 4 spares on hand.
 

luckylinux

Well-Known Member
Mar 18, 2012
I was thinking striped mirrors for best performance, at the price of 50% storage efficiency though. 2x RAIDZ-1 + 1 hot spare could also be an option, I guess.
 

luckylinux

Well-Known Member
Mar 18, 2012
Just received my 9 pieces today after a trip across the ocean :).


Still unsure which ZFS configuration to use though o_O. I must say 2 x 4-way RAIDZ1 (with 1 spare) looks tempting, since I can double IOPS compared to a single vdev, yet it's more space-efficient than 4 x 2-way mirrors. Roughly 21TB usable vs 14TB usable, basically ...
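For what it's worth, the usable-space math behind those numbers can be sketched in a few lines (my own back-of-the-envelope figures, assuming 3.84 TB drives and ignoring ZFS metadata/slop overhead; the ~21 vs ~14 figures come out once you convert decimal TB to TiB):

```python
TB = 3.84  # advertised per-drive capacity in decimal TB

def raidz1_usable(vdevs: int, width: int, size_tb: float = TB) -> float:
    """Each RAIDZ1 vdev gives up one drive's worth of space to parity."""
    return vdevs * (width - 1) * size_tb

def mirror_usable(vdevs: int, size_tb: float = TB) -> float:
    """Each 2-way mirror vdev yields one drive's worth of space."""
    return vdevs * size_tb

# 9 drives as 2 x 4-wide RAIDZ1 + 1 hot spare
print(f"{raidz1_usable(2, 4):.2f} TB")  # prints "23.04 TB" (~21 TiB)
# 8 drives as 4 x 2-way mirrors (+ 1 spare)
print(f"{mirror_usable(4):.2f} TB")     # prints "15.36 TB" (~14 TiB)
```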
 

itronin

Well-Known Member
Nov 24, 2018
Denver, Colorado
Still unsure which ZFS configuration to use though o_O. I must say 2 x 4-way RAIDZ1 (with 1 spare) looks tempting, since I can double IOPS compared to a single vdev, yet it's more space-efficient than 4 x 2-way mirrors. Roughly 21TB usable vs 14TB usable, basically ...
For me it would also depend on the use case. Local storage: pump up the IOPS. Feed the beast.

Network shared storage: I'd look at my max bandwidth, number of clients, and then the client use case. This also presumes you have sufficient capacity for your needs regardless of how you construct the pool.

In my experience with typical and even a lot of synthetic-but-realistic (or trace-driven) workloads (not just bandwidth/IOPS testing), it's hard to generate above 28-29Gbps.

There's an article floating around here (STH forums) somewhere that talks about the challenges of driving real-world end-user and virt workloads above 40Gbps, let alone pushing towards 100Gbps. <- I'm old, I could be misremembering said article.

The only time I've gone above 35Gbps was replicating a SAS2 pool of 12 x 2-way mirrored SSDs to another one via ZFS replication. Drive capacity was 400GB, so not a lot of total data. It completed very, very quickly. Amusing the first time, anti-climactic after that.
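If "12x2M" means twelve 2-way mirrors of 400GB drives, the back-of-envelope numbers check out (a rough sketch, ignoring compression and protocol overhead, and assuming the pool was close to full):

```python
# Rough sanity check on the anecdote above: ~4.8 TB of usable capacity
# pushed at a sustained 35 Gbps finishes in under 20 minutes.
usable_bytes = 12 * 400e9      # 12 mirror vdevs x 400 GB usable each
link_bps = 35e9                # sustained link throughput, bits/second
seconds = usable_bytes * 8 / link_bps
print(f"~{seconds / 60:.0f} minutes")  # prints "~18 minutes"
```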
 

luckylinux

Well-Known Member
Mar 18, 2012
1,549
492
83
For me it would also depend on the use case. Local storage: pump up the IOPS. Feed the beast.

Network shared storage: I'd look at my max bandwidth, number of clients, and then the client use case. This also presumes you have sufficient capacity for your needs regardless of how you construct the pool.

In my experience with typical and even a lot of synthetic-but-realistic (or trace-driven) workloads (not just bandwidth/IOPS testing), it's hard to generate above 28-29Gbps.

There's an article floating around here (STH forums) somewhere that talks about the challenges of driving real-world end-user and virt workloads above 40Gbps, let alone pushing towards 100Gbps. <- I'm old, I could be misremembering said article.

The only time I've gone above 35Gbps was replicating a SAS2 pool of 12 x 2-way mirrored SSDs to another one via ZFS replication. Drive capacity was 400GB, so not a lot of total data. It completed very, very quickly. Amusing the first time, anti-climactic after that.
And there the issue is the reliability of network storage in a homelab.

A long time ago I did the napp-it thing with ZFS on ESXi and local storage. That worked well enough, but of course it was local to the server in question.

If I were to do that over a real network (and I have only 10Gbps, although I'm considering building a DIY 25Gbps switch with a couple of Mellanox ConnectX-4 2 x QSFP28 NICs and QSFP28 to 4 x SFP28 breakout DACs), I would require:
- Encryption: either NFS over WireGuard, SSHFS, or similar
- Reliability: I don't want all the VMs on my nodes to crash if the storage server crashes

The proper way, I guess, would be to deploy Ceph with at least 3 storage nodes (funnily enough I have 3 x CSE-216, so LOTS of room for SSDs) and probably at least 1 disk of parity on each node. But Ceph is quite complex from what I read, and it should sit on the raw disks (or possibly on LUKS-encrypted devices), NOT on top of ZFS (or at least that isn't recommended). Furthermore, that would waste A LOT of space ...

Although the encryption alone is probably going to hurt performance quite a bit. Maybe add an Intel QuickAssist accelerator, if I can find a good deal on eBay? Unsure ...
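To put numbers on the space concern: with Ceph's default 3x replication, usable space is raw/3, while an erasure-coded pool (e.g. k=2, m=1) is much more efficient. A sketch using these nine 3.84TB drives as an example raw pool; the EC profile is just an illustration, not a recommendation:

```python
def ceph_usable_replicated(raw_tb: float, size: int = 3) -> float:
    """Replicated pools store `size` full copies of every object."""
    return raw_tb / size

def ceph_usable_ec(raw_tb: float, k: int, m: int) -> float:
    """Erasure-coded pools store k data chunks + m coding chunks."""
    return raw_tb * k / (k + m)

raw = 9 * 3.84  # nine 3.84 TB drives spread across the nodes
print(f"{ceph_usable_replicated(raw):.2f} TB usable with 3x replication")
print(f"{ceph_usable_ec(raw, k=2, m=1):.2f} TB usable with EC 2+1")
```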
 

Cruzader

Well-Known Member
Jan 1, 2021
If I were to do that over a real network (and I have only 10Gbps, although I'm considering building a DIY 25Gbps switch with a couple of Mellanox ConnectX-4 2 x QSFP28 NICs and QSFP28 to 4 x SFP28 breakout DACs), I would require:
Mellanox in general does not support breakout on NICs, only on switches.
(Not sure why you are looking at that approach at all instead of just buying a used switch, though.)

As for Ceph, how much space is "wasted" is primarily up to you; the proper way would also be more than 3 nodes.
 

luckylinux

Well-Known Member
Mar 18, 2012
1,549
492
83
Mellanox in general does not support breakout on NICs, only on switches.
Really? There is such a limitation in place?

Meaning I cannot use the Mellanox ConnectX-4 in switchdev mode with e.g. OpenWRT and a couple of QSFP28 to 4 x SFP28 adapters?

(Not sure why you are looking at that approach at all instead of just buying a used switch, though.)
Cost. Power consumption. Noise.

As for Ceph, how much space is "wasted" is primarily up to you; the proper way would also be more than 3 nodes.
Which only makes matters worse, because I would have an "asymmetric setup" (few big storage nodes, lots of compute nodes), which I agree, from my limited understanding of Ceph, is NOT the recommended approach.
 

Cruzader

Well-Known Member
Jan 1, 2021
Really? There is such a limitation in place?
It is normally just done on the switching side to give you flexibility in port use; very few NICs support this.

Meaning I cannot use the Mellanox ConnectX-4 in switchdev mode with e.g. OpenWRT and a couple of QSFP28 to 4 x SFP28 adapters?
The general answer from Mellanox is that they only offer it on switches and that their cards/adapters do not have the capability, so I would not expect it to work.

Cost. Power consumption. Noise.
I'd expect higher power consumption, not lower, and not really any difference in noise. Cost I'd not really expect to be much lower either, tbh; I'd expect the performance to be lower and latency higher, though.

Which only makes matters worse, because I would have an "asymmetric setup" (few big storage nodes, lots of compute nodes), which I agree, from my limited understanding of Ceph, is NOT the recommended approach.
4-5 nodes is a fairly common small-scale initial deployment, so you can have a node go down (either unplanned or for maintenance) and still have at least 3 active for quorum.
Otherwise, if you fall down to 2, you pretty much have to go read-only until you get the third back up.
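The arithmetic behind that is just strict-majority quorum (a sketch of the same logic Ceph monitors use):

```python
def quorum_size(total: int) -> int:
    """Strict majority needed for a quorum of `total` monitors."""
    return total // 2 + 1

def tolerable_failures(total: int) -> int:
    """Monitors that can be down while quorum still holds."""
    return total - quorum_size(total)

for n in (3, 4, 5):
    print(f"{n} mons: quorum={quorum_size(n)}, tolerates {tolerable_failures(n)} down")
# 4 mons tolerate no more failures than 3, which is why odd counts
# (3, 5, ...) are the usual recommendation.
```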