32GB Optane modules


Prophes0r

Active Member
Sep 23, 2023
159
202
43
East Coast, USA
speed is not that bad
Make sure when using CrystalDiskMark to go to Profile and select Real World Performance.
You can also select the mixed workload for a better idea.

The "default" options are half useless.
Normal users, even power users with a HomeLab running a dozen VMs and containers, aren't going to go into high queue depths.
Those Q8 and Q32 tests are not just worthless, they are deceptive marketing junk.

Note: One of the things that Optane drives have that modern NAND flash drives still can't compete with is INSANELY low latency.
These M10 drives are first generation Optane, so they aren't THAT great, but should still beat NAND with no problem.
 
  • Like
Reactions: abq

nandEater

Member
Oct 13, 2025
38
34
18
Prophes0r said:
Make sure when using CrystalDiskMark to go to Profile and select Real World Performance. […]
[attachment: CrystalDiskMark screenshot]

This is with Real World Performance. I'm using a USB-C NVMe drive enclosure; I need to try it in a regular M.2 slot. Not sure why RND4K is 7492.
 
  • Like
Reactions: abq

Prophes0r

Active Member
Sep 23, 2023
159
202
43
East Coast, USA
nandEater said:
I'm using a USB-C NVMe drive enclosure.
That probably has something to do with it.
My latency is 95 µs-ish, if I remember right.
My IOPS are closer to 15k-20k, though. You don't tend to see the rated 1M IOPS unless you go into insane queue and thread depths.
Even if the device can actually do them, the bottleneck becomes the OS, software, and even the CPU at some point.
My 4K random read speeds are 150-200 MB/s.

That's on the 16GB drives, which are slightly less performant.
 

Mithril

Active Member
Sep 13, 2019
468
160
43
Prophes0r said:
Make sure when using CrystalDiskMark to go to Profile and select Real World Performance. […]

Another thing that Optane tends to be really good at: not caring that the workload is reading and writing at the same time. A LOT of NAND drives, even enterprise ones, not only perform much worse than their rated specs in typical workloads and small I/O (your worst-case 4K read/write), but will ALSO struggle more with *combined*, constant reads and writes.

So by either using namespaces or manual partitioning and ZFS commands, a single Optane (or a mirrored pair) can be your SLOG and your metadata vdev. A metadata vdev can make a spinning-rust ZFS pool feel WAY snappier than it is (directory listings, etc.).
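For illustration only, a rough sketch of the manual-partitioning route (the device path, partition sizes, and pool name "tank" are made-up examples, not a recipe):

```python
# Sketch: carve one Optane into a SLOG partition plus a special-vdev
# partition and attach both to an existing pool. Names and sizes are assumptions.
import subprocess

DISK = "/dev/nvme0n1"   # hypothetical Optane device
POOL = "tank"           # hypothetical existing pool

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["sgdisk", "-n", "1:0:+12G", DISK])  # small partition for the SLOG
run(["sgdisk", "-n", "2:0:0", DISK])     # remainder for the special vdev

run(["zpool", "add", POOL, "log", f"{DISK}p1"])
# zpool will warn if the special vdev's redundancy doesn't match the pool's;
# it is pool-critical, so a mirror across two drives is the safer layout.
run(["zpool", "add", POOL, "special", f"{DISK}p2"])
```

NVMe namespaces would give the same split without partitions, provided the drive actually supports more than one namespace.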
 

etorix

Active Member
Sep 28, 2021
189
101
43
Mithril said:
by either using namespaces or manual partitioning and ZFS commands, a single Optane (or a mirrored pair) can be your SLOG and your metadata vdev.
This is going off topic, as you definitely want a 900P or DC P4800X for that.
And I would not recommend mixing SLOG and metadata vdev on a single device. SLOG and L2ARC are fine (on Optane, and on Optane only): neither is pool-critical, and both can be removed if needed.
A metadata/dedup/special vdev IS pool-critical, requires redundancy, and cannot be removed if there is any form of raidz# in the pool (which is usually the case with a special vdev). Do not mix that role with anything else.
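To make the removability point concrete, a small illustrative sketch (same made-up device/pool names as the earlier example):

```python
# Log and cache vdevs are not pool-critical and can be detached again at
# any time; a special vdev cannot be removed once the pool contains raidz.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["zpool", "add", "tank", "cache", "/dev/nvme0n1p3"])  # add an L2ARC
run(["zpool", "remove", "tank", "/dev/nvme0n1p3"])        # ...and take it out
run(["zpool", "remove", "tank", "/dev/nvme0n1p1"])        # the SLOG can go too
# There is no equivalent escape hatch for a special vdev in a raidz pool:
# "zpool remove" will refuse, so size and mirror it as if it were data.
```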
 

BackupProphet

Well-Known Member
Jul 2, 2014
1,385
1,028
113
Stavanger, Norway
intellistream.ai
Actually, I think it makes perfect sense to have both SLOG and L2ARC on a single device. NVMe SSDs are way too fast for ZFS.
There is also a pull request for ZFS that would let the special metadata vdev act as the SLOG as well.
 
  • Like
Reactions: abq

jamesdwi

New Member
Oct 8, 2023
11
4
3
I bought some to use as SLOG devices; they should be nice for Proxmox hosts that are using block devices for VMs. But of course that weekend I got really lucky and found some 64GB Optane drives and a pair of 118GB Optane drives, so now I need to test which size works best. I'm thinking of pairing a 32GB drive with an 8TB SATA or SAS hard drive, but debating whether the 64GB would be better; this is a home lab and should be relatively light use. Plus I'm still debating whether mirrored 64GB or mirrored 118GB drives would be better as a special metadata vdev.

Anyone have any wisdom on this topic? Would any of these drives go better with a raidz2 pool holding 7x 22TB SATA hard drives?
 

nexox

Well-Known Member
May 3, 2023
1,823
881
113
jamesdwi said:
still debating whether mirrored 64GB or mirrored 118GB drives would be better as a special metadata vdev.
I can't help with the details of the ZFS implementation, but there are two different 118GB Optane models: the 800P, which has similar performance to the 64GB, and the P1600X, which is significantly faster with higher endurance, and I think would probably be wasted on a lightly-used filesystem.
 

etorix

Active Member
Sep 28, 2021
189
101
43
jamesdwi said:
pairing a 32GB drive with an 8TB SATA or SAS hard drive, but debating whether the 64GB would be better
SLOG size is not related to pool size, only to network speed: by default, ZFS will log up to two transaction groups' worth of incoming sync writes before everything has to be committed to storage. 10 s @ 1 Gb/s = 1.2 GB; 10 s @ 10 Gb/s = 12 GB.
The 32 GB M10 is already oversized, but you'd want to use the 64 GB as SLOG instead for the sake of higher endurance, and a 118 GB is probably better still for throughput (and possibly even latency, if it is a P1600X).
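If it helps to see the arithmetic, a tiny sketch of the same back-of-the-envelope sizing (assuming roughly 10 s of buffered sync writes, i.e. two transaction groups at the default ~5 s timeout):

```python
# SLOG sizing is driven by how fast sync writes can arrive over the network,
# not by pool size: line rate x seconds of in-flight transaction groups.
def slog_bytes(link_gbps, seconds_buffered=10.0):
    return link_gbps * 1e9 / 8 * seconds_buffered

print(f"1 Gb/s:  {slog_bytes(1) / 1e9:.1f} GB")   # ~1.2 GB
print(f"10 Gb/s: {slog_bytes(10) / 1e9:.1f} GB")  # ~12.5 GB
```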

A SLOG does not need to be mirrored, unless the system really is business-critical (as in: failing to properly account for a banking transaction could land the administrator in jail, and not the BSD kind).
A special vdev needs to be (triple-)mirrored. See the TrueNAS forum for guidance on sizing… and whether you really, really want to go this way.
 

reasonsandreasons

Active Member
May 16, 2022
167
116
43
My understanding is that your special vdev should be as redundant as your pool, since it's a load-bearing member, so presumably the triple-mirror suggestion is for pools using RAIDz2. If you're using a pool of mirrors or RAIDz1, you're probably okay with a simple mirror.
 
  • Like
Reactions: etorix

ano

Well-Known Member
Nov 7, 2022
731
324
63
On large arrays, say Z2, I would definitely go triple-mirror special vdev.

If the special vdev goes away, it's poop.

The new 2.4.0 looks promising; it sort of combines the SLOG into the special vdev, if you have a special vdev.
 

mvs123

New Member
Jul 28, 2024
11
1
3
Munich, Germany
In theory, having the same redundancy level is most rational. Real life is a bit more complicated.
SSDs and HDDs have different failure modes, different failure probabilities, and need different amounts of time to rebuild.

In many cases a RAIDz2 HDD pool + a mirrored special vdev on two server-grade SSDs should be OK.
Just make sure your SSDs are not from the same batch and at the same wear level. Firmware bugs that brick SSDs at a specific power-on count are not that rare.