> Does anybody know if there are similar good deals on these drives in Europe?

Shipito?
> Seems perfect as the OS disk for a ZFS-based Proxmox install, as it has a tendency to destroy consumer SSDs. Has anyone used it for that purpose?

I use one as the ZFS boot drive for a bare-metal OPNsense setup; works fine. I have another two that I'll be using for ZFS RAID1 in a Proxmox install next week, and I don't think there'll be any issues.
> Seems perfect as the OS disk for a ZFS-based Proxmox install, as it has a tendency to destroy consumer SSDs. Has anyone used it for that purpose?

Could also just use a $10–20 S3700 100GB or 200GB.
> Could also just use a $10–20 S3700 100GB or 200GB.

Tiny SFF systems can't fit a 2.5" drive, though.
> Tiny SFF systems can't fit a 2.5" drive, though.

I'm sure the M900 or M700/720s can.
> Seems perfect as the OS disk for a ZFS-based Proxmox install, as it has a tendency to destroy consumer SSDs.

Never heard/seen this fable. We tested ZFS on root as much as XFS (MDRAID) on root for Proxmox, and not one of the drives died in years, across several Proxmox versions. Proxmox has always been rumored to kill boot drives, but no one was ever able to prove it with evidence or numbers, so I suppose some competitor got a bunch of bad drives. Those rumors have dried up; only forums and Reddit keep them alive in Google search.
> I'm sure the M900 or M700/720s can.

Unfortunately not with a 10G card in the PCIe slot. That's where the SATA caddy normally goes.
> Unfortunately not with a 10G card in the PCIe slot. That's where the SATA caddy normally goes.

I thought I read "Tiny SFF" in your post, and I didn't see any mention of 10G or PCIe.
> You can very clearly see examples of Proxmox creating so many writes it wears out SSDs. There is a lot of log activity when clustering.

That's beginner's luck if you cluster with Proxmox and don't mount /var somewhere else on an external pool, or symlink the other important *.cfg/*.log files from /etc/pve/ to another pool. Also, it's not Proxmox that is busy logging in a cluster but the widely used Ceph, and that may cause issues. But it's not Proxmox.
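Roughly, the relocation could look like this. A minimal sketch only: the second pool name (tank) and the dataset layout are made up for illustration, and you'd copy the existing contents across before the new mountpoints take effect.

```
# Hypothetical second pool "tank" on cheaper/expendable disks.
# Give the write-heavy paths their own datasets so their churn
# stays off the boot SSD.
zfs create -o mountpoint=none tank/pve
zfs create -o mountpoint=/var/log tank/pve/log            # system + PVE logs
zfs create -o mountpoint=/var/lib/rrdcached tank/pve/rrd  # RRD graph writes
```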
> I thought I read "Tiny SFF" in your post, and I didn't see any mention of 10G or PCIe.

The Lenovo tiny SFFs like the M920x, PXX, and M90q can all take a PCIe card and 2 x M.2s.
> That's beginner's luck if you cluster with Proxmox and don't mount /var somewhere else on an external pool. Also, it's not Proxmox that is busy logging in a cluster but the widely used Ceph, and that may cause issues. But it's not Proxmox.

Proxmox clustering can be done without Ceph; it actually uses the Corosync Cluster Engine.
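For reference, a plain Corosync cluster is just the stock pvecm workflow; a quick sketch (the cluster name and IP here are made up):

```
# On the first node: create the cluster (this generates the Corosync config).
pvecm create homelab

# On each additional node: join, pointing at the first node's IP.
pvecm add 192.168.1.10

# Verify quorum and membership; no Ceph involved anywhere.
pvecm status
```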
But the thread is about Optanes... Let's move on.
> Do you guys think it would make sense to mirror the two 58GB drives and assign them as a metadata vdev to a ZFS pool (performance-wise)? Curious. I also couldn't find an option in TrueNAS's interface to keep the metadata on the pool itself in case I need to remove them. The last time I tried this, the option to remove them was greyed out (couldn't uncheck it).

That's what Wendell from Level1Techs has been suggesting. I believe he got four and put them in a striped mirror, though.
> That's what Wendell from Level1Techs has been suggesting. I believe he got four and put them in a striped mirror, though.

Any before/after performance comparisons?
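Not that I've seen numbers posted, but one way to produce your own before/after comparison would be a metadata-heavy fio run against the pool; the directory, sizes, and job parameters below are arbitrary:

```
# Small random reads spread over many files; run once before and once
# after adding the special vdev (with a cold ARC) and compare.
fio --name=meta-randread --directory=/tank/fiotest \
    --rw=randread --bs=4k --size=256m --nrfiles=64 \
    --numjobs=4 --iodepth=16 --ioengine=libaio \
    --runtime=60 --time_based --group_reporting
```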
I don't know if this has changed with recent versions of ZFS, but at some point you were not able to undo adding a special metadata device to a pool.
> Do you guys think it would make sense to mirror the two 58GB drives and assign them as a metadata vdev to a ZFS pool (performance-wise)? Curious. I also couldn't find an option in TrueNAS's interface to keep the metadata on the pool itself in case I need to remove them. The last time I tried this, the option to remove them was greyed out (couldn't uncheck it).

A special metadata vdev can only be removed if your pool consists of mirrored or single-disk vdevs. A single RAIDZ vdev in your pool makes it impossible to remove other vdevs.
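To make that concrete, a sketch with hypothetical pool and device names (whether the final zpool remove is allowed depends entirely on the pool layout):

```
# Add the two Optanes as a mirrored special (metadata) vdev.
zpool add tank special mirror nvme0n1 nvme1n1

# Optionally steer small blocks to the special vdev too, per dataset.
zfs set special_small_blocks=16K tank/data

# Removal works only if tank is built from mirrors/single disks;
# with any raidz vdev in the pool, this command will refuse.
# "mirror-1" stands for the special vdev's name as shown by zpool status.
zpool remove tank mirror-1
```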
Dang, I don't see that. Is it already expired?
> Dang, I don't see that. Is it already expired?

Expired. It was $60. I can confirm that.
> Expired. It was $60. I can confirm that.

Well, that's a bummer. I was waiting for a deal to buy three more of these to try out on the test bench.
> Well, that's a bummer. I was waiting for a deal to buy three more of these to try out on the test bench.

*sigh* The deal would have worked. The limit was three per order, which was all I needed, though I tried to buy more and no go.
> *sigh* The deal would have worked. The limit was three per order, which was all I needed, though I tried to buy more and no go.

This deal will be back. If they're having to discount like this, it's because they're just not selling the things. Most folks see $75+ for a 118GB SSD and think, "WTH, I can get a 1TB Gen 4 SSD for that!" not realizing what this is.