Optane P1600X 58GB $40


zer0sum

Well-Known Member
Mar 8, 2013
849
473
63
Seems perfect as the OS disk for a ZFS-based Proxmox install, since Proxmox has a tendency to destroy consumer SSDs.

Has anyone used it for that purpose?
 

juma

Member
Apr 14, 2021
64
34
18
Seems perfect as the OS disk for a ZFS-based Proxmox install, since Proxmox has a tendency to destroy consumer SSDs.

Has anyone used it for that purpose?
I use it as the ZFS boot drive for a bare-metal OPNsense setup; works fine. I have another two that I'll be using for ZFS RAID1 in a Proxmox install next week; I don't think there'll be any issues.
 
  • Like
Reactions: T_Minus

gb00s

Well-Known Member
Jul 25, 2018
1,175
586
113
Poland
Tiny SFF systems can't fit a 2.5” drive though :(
I'm sure M900 or M700/720s can

Seems perfect as the OS disk for a ZFS-based Proxmox install, since Proxmox has a tendency to destroy consumer SSDs.
Never heard/seen this fable. We tested ZFS on root as well as XFS (MDRAID) on root for Proxmox. Not one of the drives died over years of use across several Proxmox versions. Proxmox has always been rumored to kill boot drives, but no one was able to prove it with evidence and/or numbers, so I suppose it must have been some competitor who got a bunch of bad drives. These rumors have dried up; only forums & Reddit still keep them alive in Google search.
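
For anyone who wants actual numbers instead of rumors, the wear counters are easy to read with smartctl; a minimal check (device names are placeholders):

  # NVMe: check "Percentage Used" and "Data Units Written" in the health log
  smartctl -a /dev/nvme0
  # SATA: check vendor attributes like Total_LBAs_Written / Wear_Leveling_Count
  smartctl -a /dev/sda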
 
  • Like
Reactions: RolloZ170

zer0sum

Well-Known Member
Mar 8, 2013
849
473
63
I'm sure M900 or M700/720s can


Never heard/seen this fable. We tested ZFS on root as well as XFS (MDRAID) on root for Proxmox. Not one of the drives died over years of use across several Proxmox versions. Proxmox has always been rumored to kill boot drives, but no one was able to prove it with evidence and/or numbers, so I suppose it must have been some competitor who got a bunch of bad drives. These rumors have dried up; only forums & Reddit still keep them alive in Google search.
Unfortunately not with a 10G card in the PCIe slot. That's where the SATA caddy normally goes.

You can very clearly find examples of Proxmox generating so many writes that it wears out SSDs. There's a lot of logging activity when clustering.
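
A rough way to actually watch the write rate, assuming the boot pool has the Proxmox default name rpool:

  # Per-vdev write throughput, sampled every 60 seconds
  zpool iostat -v rpool 60
  # Or per block device (MB/s) via sysstat
  iostat -dm 60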
 

gb00s

Well-Known Member
Jul 25, 2018
1,175
586
113
Poland
Unfortunately not with a 10G card in the PCIe slot. That's where the SATA caddy normally goes.
I thought I read Tiny SFF in your post and didn't see any mention of 10G or PCIe.
You can very clearly find examples of Proxmox generating so many writes that it wears out SSDs. There's a lot of logging activity when clustering.
That's a beginner mistake if you cluster with Proxmox and don't mount /var somewhere else on an external pool, or symlink other important *.cfg/*.log files from /etc/pve/ to another pool. Also, it's not Proxmox that is busy logging in a cluster, but the widely used Ceph, and that may cause issues. But it's not Proxmox.
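
A rough sketch of that relocation, assuming a second pool named tank (pool name and dataset layout are hypothetical):

  zfs create tank/varlog                    # dataset for logs on the data pool
  systemctl stop rsyslog                    # quiesce the main logger
  rsync -a /var/log/ /tank/varlog/          # copy the existing logs over
  zfs set mountpoint=/var/log tank/varlog   # mount the dataset over /var/log
  systemctl start rsyslog
  # Older OpenZFS may refuse to mount over a non-empty directory;
  # "zfs set overlay=on tank/varlog" works around that.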

But the thread is about Optanes ... Let's move on.
 

zer0sum

Well-Known Member
Mar 8, 2013
849
473
63
I thought I read Tiny SFF in your post and didn't see any mention of 10G or PCIe.
The Lenovo Tiny SFFs like the M920x, PXX, and M90q can all take a PCIe card and 2 x M.2s.
So they can fit a 10G NIC, an Optane as the boot drive, and then a "normal" NVMe as the datastore.

That's a beginner mistake if you cluster with Proxmox and don't mount /var somewhere else on an external pool. Also, it's not Proxmox that is busy logging in a cluster, but the widely used Ceph, and that may cause issues. But it's not Proxmox.

But the thread is about Optanes ... Let's move on.
Proxmox clustering can be done without Ceph; it actually uses the Corosync Cluster Engine.
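
For reference, the cluster stack is easy to inspect on any node with stock commands, no Ceph involved:

  pvecm status                                 # quorum and membership (Corosync)
  journalctl -u corosync -u pve-cluster -n 50  # recent cluster-stack log lines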

But anyways...let's move on like you said :)
 

foureight84

Active Member
Jun 26, 2018
266
240
43
Do you guys think it would make sense (performance-wise) to mirror the two 58GB drives and assign them as a metadata vdev to a ZFS pool? I'm curious; I also couldn't find the option in TrueNAS's interface to keep the metadata on the pool itself in case I need to remove them. The last time I tried this, the option to remove them was greyed out (couldn't uncheck it).
 

esses

New Member
Mar 12, 2018
16
4
3
48
RMS200 definitely runs very hot. Had to solder a new fan into mine.

Unfortunately, it freezes POST on my H12SSi mobo at DXE - Detect PCI devices. I got the entire system to boot with it attached maybe once?
 

FlorianZ

Active Member
Dec 10, 2019
173
220
43
Do you guys think it would make sense (performance-wise) to mirror the two 58GB drives and assign them as a metadata vdev to a ZFS pool? I'm curious; I also couldn't find the option in TrueNAS's interface to keep the metadata on the pool itself in case I need to remove them. The last time I tried this, the option to remove them was greyed out (couldn't uncheck it).
That's what Wendell from Level1Techs has been suggesting. I believe he got 4 and put them in a striped mirror, though.

I don't know if this has changed with recent versions of ZFS, but at some point you were not able to undo adding a special metadata device to a pool.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,625
2,043
113
That's what Wendell from Level1Techs has been suggesting. I believe he got 4 and put them in a striped mirror, though.

I don't know if this has changed with recent versions of ZFS, but at some point you were not able to undo adding a special metadata device to a pool.
Any before/after performance comparisons?
 
  • Like
Reactions: Weapon

arnbju

New Member
Mar 13, 2013
26
11
3
Do you guys think it would make sense (performance-wise) to mirror the two 58GB drives and assign them as a metadata vdev to a ZFS pool? I'm curious; I also couldn't find the option in TrueNAS's interface to keep the metadata on the pool itself in case I need to remove them. The last time I tried this, the option to remove them was greyed out (couldn't uncheck it).
A special metadata vdev can only be removed if your pool consists of mirrored or single-drive vdevs. A single RAIDZ vdev in your pool makes it impossible to remove other vdevs.
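
For the record, the add/remove commands look roughly like this, assuming a pool named tank and placeholder device paths:

  # Attach the two Optanes as a mirrored special (metadata) vdev
  zpool add tank special mirror /dev/disk/by-id/optane-a /dev/disk/by-id/optane-b
  # Removal only works while every top-level vdev is a mirror or single drive
  zpool remove tank mirror-1   # vdev name as shown by "zpool status tank"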
 

nutsnax

Active Member
Nov 6, 2014
247
92
28
113
*sigh* The deal would have worked. The limit was three per order, which was all I needed, though I tried to buy more and no go.
This deal will be back. If they're having to discount like this, it's because they're just not selling the things. Most folks see $75+ for a 118GB SSD and think "WTH, I can get a 1TB Gen 4 SSD for that!" without realizing what this is.