[EU] [deprecated] NVDimm package sale - the ultimate SLOG (with caveats)


Rand__

Please see the consolidated thread.
This one is kept for reference and discussions.

------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
I have a number of complete NVDimm sets to sell, each containing:
- a 16 GB Micron DDR4-2666 NVDimm (MTA18ASF2G72PF1Z-2G6V21AB) - it used to be on the Supermicro compatibility list before the module went EOL
- a matching PowerGem
- the connectivity cable
- shipping within the EU

Everything except the cable is used. The NVDimm modules have been updated to the latest firmware.

I am looking to get €250 each at this point - you might be able to get the module (and even the PowerGem) online for cheaper, but the cables are hard to find. O/c you can make your own. Yes, it's a bit more expensive, but I am trying to get a little back for the (very, very) long hours I spent getting this going :)
Note that you'll be getting an invoice with VAT on this one, but no warranty will be provided (sold as defective). O/c I will help you get these running as best I can :)
And o/c I am always willing to negotiate or provide discounts on multiple items or combos with my other stuff (a Scalable system to go with these, anyone? ;))

Which brings us to the caveats:

NVDimms don't run on just any board. Officially they were supported on SM X10 dual-socket boards, but I never managed to get them to run there. I might need to try again on my X10DRi, but I haven't had the time.
They run fine (for the most part) on Scalable boards (tried an SM X11DPi as well as an Intel board), except that firmware updating is an issue. O/c if you're handy with BIOS modding you can enable the necessary options and get that done more easily - I wasn't aware of that, so I took the hard road.
They might run on AMD too - the manual at least claims compatibility - but I never tested that.

I now run a pair as SLOGs on my X11SPH-nCTPF (one per pool). Originally I ran them alongside 128G RDimms, but 8-rank and 1-rank modules don't mix well. I now run them with four 64G modules (2R) and there are no issues whatsoever.
I am also on the newest firmware by now, which is compatible again.
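For reference, attaching the pmem device as a SLOG is just the usual zpool command - a minimal sketch, assuming the pool is called tank and the module shows up as /dev/pmem0 (both names are placeholders):

Code:
# attach the pmem region as a dedicated log device (SLOG)
zpool add tank log /dev/pmem0
# verify it shows up under "logs"
zpool status tank
With two modules in one box you could also mirror the log (zpool add tank log mirror /dev/pmem0 /dev/pmem1); I simply run one per pool.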

Also, I was *not* able to get these recognized by ESXi - it detects them, but they don't get registered as persistent memory. I searched and asked around and finally gave up and went back to bare metal. Note that I tried versions up to 7.0 U1.
I tried convincing Supermicro to support them in ESXi, but they refused.

It's working fine on Linux and FreeNAS; I have tested persistence on both and it was fine, so I trust them.
They have been detected on Windows too, but I have not tested persistence on that platform.

Code:
dmesg |grep -i nvdimm
nvdimm_root0: <NVDIMM root> on acpi0
nvdimm0: <NVDIMM region 16GB interleave 1> at iomem 0x8080000000-0x847fffffff numa-domain 0 on nvdimm_root0
pmem0: <PMEM region 16GB> at iomem 0x8080000000-0x847fffffff numa-domain 0 on nvdimm_root0
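That output is from FreeBSD/FreeNAS. On Linux the rough equivalent check would look something like this - just a sketch, assuming the ndctl package is installed (device and region names will differ per system):

Code:
# kernel messages from the nvdimm/pmem subsystem
dmesg | grep -i -e nvdimm -e pmem
# list DIMMs and namespaces in human-readable form
ndctl list -DNu
# the resulting block device(s)
ls -l /dev/pmem*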
Which brings us to the goodies - if you get them to run, they are *fast*.

Code:
diskinfo -citvwS /dev/pmem0

512 # sectorsize
17179865088 # mediasize in bytes (16G)
33554424 # mediasize in sectors
0 # stripesize
0 # stripeoffset
PMEM region 16GB # Disk descr.
9548ADD1D6FC0231 # Disk ident.
No # TRIM/UNMAP support
0 # Rotation rate in RPM

I/O command overhead:
time to read 10MB block 0.002227 sec = 0.000 msec/sector
time to read 20480 sectors 0.026084 sec = 0.001 msec/sector
calculated command overhead = 0.001 msec/sector

Seek times:
Full stroke: 250 iter in 0.000439 sec = 0.002 msec
Half stroke: 250 iter in 0.000425 sec = 0.002 msec
Quarter stroke: 500 iter in 0.000830 sec = 0.002 msec
Short forward: 400 iter in 0.000622 sec = 0.002 msec
Short backward: 400 iter in 0.000692 sec = 0.002 msec
Seq outer: 2048 iter in 0.002606 sec = 0.001 msec
Seq inner: 2048 iter in 0.002542 sec = 0.001 msec

Transfer rates:
outside: 102400 kbytes in 0.014434 sec = 7094361 kbytes/sec
middle: 102400 kbytes in 0.013545 sec = 7559985 kbytes/sec
inside: 102400 kbytes in 0.013614 sec = 7521669 kbytes/sec

Asynchronous random reads:
sectorsize: 1867310 ops in 3.000057 sec = 622425 IOPS
4 kbytes: 1589498 ops in 3.000047 sec = 529824 IOPS
32 kbytes: 935622 ops in 3.000054 sec = 311868 IOPS
128 kbytes: 328937 ops in 3.001158 sec = 109603 IOPS

Synchronous random writes:
0.5 kbytes: 1.6 usec/IO = 299.9 Mbytes/s
1 kbytes: 1.7 usec/IO = 589.9 Mbytes/s
2 kbytes: 1.7 usec/IO = 1143.4 Mbytes/s
4 kbytes: 1.8 usec/IO = 2135.6 Mbytes/s
8 kbytes: 2.4 usec/IO = 3244.6 Mbytes/s
16 kbytes: 3.7 usec/IO = 4192.4 Mbytes/s
32 kbytes: 9.3 usec/IO = 3344.5 Mbytes/s
64 kbytes: 12.3 usec/IO = 5088.3 Mbytes/s
128 kbytes: 17.6 usec/IO = 7119.2 Mbytes/s
256 kbytes: 27.7 usec/IO = 9021.8 Mbytes/s
512 kbytes: 46.6 usec/IO = 10731.7 Mbytes/s
1024 kbytes: 84.4 usec/IO = 11853.0 Mbytes/s
2048 kbytes: 159.5 usec/IO = 12535.5 Mbytes/s
4096 kbytes: 314.3 usec/IO = 12726.1 Mbytes/s
8192 kbytes: 621.4 usec/IO = 12873.4 Mbytes/s
Results from fio runs (details below)

SSD Pool with PMEM slog
RW, QD1, 1J WRITE: bw=754MiB/s (791MB/s), 754MiB/s-754MiB/s (791MB/s-791MB/s), io=128GiB (137GB), run=173862-173862msec
RW, QD32, 8J WRITE: bw=1756MiB/s (1841MB/s), 219MiB/s-220MiB/s (230MB/s-230MB/s), io=1024GiB (1100GB), run=597115-597188msec
RandomW, QD1, 1J WRITE: bw=653MiB/s (684MB/s), 653MiB/s-653MiB/s (684MB/s-684MB/s), io=128GiB (137GB), run=200809-200809msec
RandomW, QD32, 8J WRITE: bw=1171MiB/s (1228MB/s), 146MiB/s-147MiB/s (153MB/s-154MB/s), io=1024GiB (1100GB), run=892222-895398msec

SSD Pool with 900p slog
RW, QD1, 1J WRITE: bw=483MiB/s (506MB/s), 483MiB/s-483MiB/s (506MB/s-506MB/s), io=128GiB (137GB), run=271507-271507msec
RandomW, QD1, 1J WRITE: bw=479MiB/s (503MB/s), 479MiB/s-479MiB/s (503MB/s-503MB/s), io=128GiB (137GB), run=273383-273383msec


Pmem directly
RW, QD1, 1J WRITE: bw=2640MiB/s (2768MB/s), 2640MiB/s-2640MiB/s (2768MB/s-2768MB/s), io=12.0GiB (12.9GB), run=4655-4655msec
RW, QD32, 8J WRITE: bw=12.3GiB/s (13.2GB/s), 1574MiB/s-1635MiB/s (1650MB/s-1715MB/s), io=96.0GiB (103GB), run=7515-7808msec
RandomW, QD1, 1J WRITE: bw=2682MiB/s (2813MB/s), 2682MiB/s-2682MiB/s (2813MB/s-2813MB/s), io=12.0GiB (12.9GB), run=4581-4581msec

900p direct
RandomW, QD1, 1J WRITE: bw=1025MiB/s (1074MB/s), 1025MiB/s-1025MiB/s (1074MB/s-1074MB/s), io=12.0GiB (12.9GB), run=11992-11992msec




Happy to run further tests if you have a specific use case (different block sizes or QDs/jobs).






Working BIOS settings on the X11SPH-nCTPF

1588408653578.png


NVDimm load for FreeNAS
1588408820043.png



Some fio runs (note: write cache [mem] not disabled, sync=always, part of a pool)
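For completeness, sync=always is set as a normal ZFS property on the test dataset - a sketch below; the dataset name ss300/nfs is inferred from the fio path and may differ on your system:

Code:
# force every write through the ZIL/SLOG, even async ones
zfs set sync=always ss300/nfs
# check the effective value
zfs get sync ss300/nfs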
RW, QD1, 1J

Code:
 fio --filename=/mnt/ss300/nfs/test  --direct=1 --rw=rw --refill_buffers --norandommap --randrepeat=0 --ioengine=posixaio --bs=64k --rwmixread=0 --iodepth=1 --numjobs=1 --size=128G  --name=test
1G64K: (g=0): rw=rw, bs=(R) 64.0KiB-64.0KiB, (W) 64.0KiB-64.0KiB, (T) 64.0KiB-64.0KiB, ioengine=posixaio, iodepth=1
fio-3.16
Starting 1 process
Jobs: 1 (f=1): [W(1)][100.0%][w=742MiB/s][w=11.9k IOPS][eta 00m:00s]
1G64K: (groupid=0, jobs=1): err= 0: pid=32131: Sat May  2 11:08:19 2020
  write: IOPS=12.1k, BW=754MiB/s (791MB/s)(128GiB/173862msec)
    slat (nsec): min=1034, max=14301k, avg=2041.90, stdev=20297.76
    clat (nsec): min=1134, max=39686k, avg=71201.16, stdev=155142.35
     lat (usec): min=52, max=39688, avg=73.24, stdev=157.45
    clat percentiles (usec):
     |  1.00th=[   57],  5.00th=[   58], 10.00th=[   58], 20.00th=[   59],
     | 30.00th=[   59], 40.00th=[   60], 50.00th=[   60], 60.00th=[   60],
     | 70.00th=[   61], 80.00th=[   68], 90.00th=[   94], 95.00th=[  112],
     | 99.00th=[  143], 99.50th=[  269], 99.90th=[ 1090], 99.95th=[ 1975],
     | 99.99th=[ 6980]
   bw (  KiB/s): min=583153, max=926336, per=99.58%, avg=768749.18, stdev=117844.83, samples=347
   iops        : min= 9111, max=14474, avg=12011.23, stdev=1841.30, samples=347
  lat (usec)   : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.05%, 50=0.06%
  lat (usec)   : 100=91.24%, 250=8.09%, 500=0.28%, 750=0.10%, 1000=0.05%
  lat (msec)   : 2=0.06%, 4=0.03%, 10=0.02%, 20=0.01%, 50=0.01%
  cpu          : usr=13.26%, sys=4.25%, ctx=2146943, majf=0, minf=2
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,2097152,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=754MiB/s (791MB/s), 754MiB/s-754MiB/s (791MB/s-791MB/s), io=128GiB (137GB), run=173862-173862msec
RW, QD32, 8J
Code:
fio --filename=/mnt/ss300/nfs/test  --direct=1 --rw=rw --refill_buffers --norandommap --randrepeat=0 --ioengine=posixaio --bs=64k --rwmixread=0 --iodepth=32 --numjobs=8 --size=128G  --name=1G64K
1G64K: (g=0): rw=rw, bs=(R) 64.0KiB-64.0KiB, (W) 64.0KiB-64.0KiB, (T) 64.0KiB-64.0KiB, ioengine=posixaio, iodepth=32
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [W(8)][100.0%][w=1680MiB/s][w=26.9k IOPS][eta 00m:00s]
<<<snip>>>
1G64K: (groupid=0, jobs=1): err= 0: pid=32184: Sat May  2 11:18:34 2020
  write: IOPS=3512, BW=220MiB/s (230MB/s)(128GiB/597121msec)
    slat (nsec): min=533, max=9743.9k, avg=1810.08, stdev=23033.40
    clat (usec): min=34, max=110095, avg=9026.98, stdev=4370.75
     lat (usec): min=139, max=110096, avg=9028.79, stdev=4370.20
    clat percentiles (usec):
     |  1.00th=[ 6390],  5.00th=[ 6915], 10.00th=[ 7111], 20.00th=[ 7308],
     | 30.00th=[ 7504], 40.00th=[ 7635], 50.00th=[ 7898], 60.00th=[ 8586],
     | 70.00th=[ 9372], 80.00th=[10159], 90.00th=[11469], 95.00th=[13829],
     | 99.00th=[17695], 99.50th=[20841], 99.90th=[83362], 99.95th=[87557],
     | 99.99th=[93848]
   bw (  KiB/s): min=103168, max=299816, per=12.48%, avg=224422.24, stdev=46086.56, samples=1194
   iops        : min= 1612, max= 4684, avg=3506.31, stdev=720.09, samples=1194
  lat (usec)   : 50=0.01%, 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%
  lat (usec)   : 1000=0.01%
  lat (msec)   : 2=0.06%, 4=0.19%, 10=78.53%, 20=20.62%, 50=0.30%
  lat (msec)   : 100=0.27%, 250=0.01%
  cpu          : usr=4.47%, sys=1.88%, ctx=1066658, majf=0, minf=2
  IO depths    : 1=0.1%, 2=0.1%, 4=0.3%, 8=3.4%, 16=74.0%, 32=22.3%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=94.2%, 8=3.1%, 16=1.9%, 32=0.8%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,2097152,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32
1G64K: (groupid=0, jobs=1): err= 0: pid=32185: Sat May  2 11:18:34 2020
  write: IOPS=3512, BW=220MiB/s (230MB/s)(128GiB/597123msec)
    slat (nsec): min=527, max=7837.0k, avg=1799.16, stdev=22667.55
    clat (usec): min=35, max=111656, avg=9026.38, stdev=4357.60
     lat (usec): min=137, max=111657, avg=9028.18, stdev=4357.08
    clat percentiles (usec):
     |  1.00th=[ 6325],  5.00th=[ 6915], 10.00th=[ 7046], 20.00th=[ 7308],
     | 30.00th=[ 7504], 40.00th=[ 7635], 50.00th=[ 7898], 60.00th=[ 8586],
     | 70.00th=[ 9372], 80.00th=[10159], 90.00th=[11469], 95.00th=[13829],
     | 99.00th=[17957], 99.50th=[21365], 99.90th=[82314], 99.95th=[87557],
     | 99.99th=[93848]
   bw (  KiB/s): min=102067, max=304798, per=12.48%, avg=224442.14, stdev=46092.92, samples=1194
   iops        : min= 1594, max= 4762, avg=3506.63, stdev=720.19, samples=1194
  lat (usec)   : 50=0.01%, 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%
  lat (usec)   : 1000=0.01%
  lat (msec)   : 2=0.06%, 4=0.17%, 10=78.58%, 20=20.57%, 50=0.31%
  lat (msec)   : 100=0.27%, 250=0.01%
  cpu          : usr=4.33%, sys=1.98%, ctx=1046671, majf=0, minf=2
  IO depths    : 1=0.1%, 2=0.1%, 4=0.3%, 8=3.4%, 16=74.2%, 32=22.1%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=94.2%, 8=3.1%, 16=1.9%, 32=0.8%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,2097152,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
  WRITE: bw=1756MiB/s (1841MB/s), 219MiB/s-220MiB/s (230MB/s-230MB/s), io=1024GiB (1100GB), run=597115-597188msec
Same with random writes
Code:
fio --filename=/mnt/ss300/nfs/test  --direct=1 --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=posixaio --bs=64k --rwmixread=0 --iodepth=32 --numjobs=8 --size=128G  --name=1G64K
1G64K: (g=0): rw=randrw, bs=(R) 64.0KiB-64.0KiB, (W) 64.0KiB-64.0KiB, (T) 64.0KiB-64.0KiB, ioengine=posixaio, iodepth=32
...
fio-3.16
Starting 8 processes
Jobs: 2 (f=2): [w(1),_(3),w(1),_(3)][99.7%][w=1190MiB/s][w=19.0k IOPS][eta 00m:03s]
<<<snip>>>
1G64K: (groupid=0, jobs=1): err= 0: pid=32463: Sat May  2 11:36:20 2020
  write: IOPS=2344, BW=147MiB/s (154MB/s)(128GiB/894658msec)
    slat (nsec): min=588, max=147590k, avg=250252.68, stdev=1001978.69
    clat (usec): min=18, max=268991, avg=7678.64, stdev=9294.11
     lat (usec): min=80, max=269201, avg=7928.89, stdev=9369.86
    clat percentiles (usec):
     |  1.00th=[   225],  5.00th=[   701], 10.00th=[  1221], 20.00th=[  2180],
     | 30.00th=[  3195], 40.00th=[  4228], 50.00th=[  5276], 60.00th=[  6521],
     | 70.00th=[  8094], 80.00th=[ 10683], 90.00th=[ 15401], 95.00th=[ 20841],
     | 99.00th=[ 52167], 99.50th=[ 66323], 99.90th=[ 88605], 99.95th=[ 94897],
     | 99.99th=[104334]
   bw (  KiB/s): min=86272, max=286914, per=12.50%, avg=149866.19, stdev=17830.29, samples=1789
   iops        : min= 1348, max= 4483, avg=2341.57, stdev=278.60, samples=1789
  lat (usec)   : 20=0.01%, 50=0.13%, 100=0.24%, 250=0.79%, 500=2.01%
  lat (usec)   : 750=2.30%, 1000=2.42%
  lat (msec)   : 2=10.12%, 4=19.78%, 10=40.25%, 20=16.45%, 50=4.41%
  lat (msec)   : 100=1.08%, 250=0.02%, 500=0.01%
  cpu          : usr=4.29%, sys=1.47%, ctx=7545410, majf=0, minf=2
  IO depths    : 1=0.9%, 2=4.7%, 4=11.6%, 8=25.0%, 16=54.1%, 32=3.6%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=96.7%, 8=0.1%, 16=0.2%, 32=3.1%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,2097152,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
  WRITE: bw=1171MiB/s (1228MB/s), 146MiB/s-147MiB/s (153MB/s-154MB/s), io=1024GiB (1100GB), run=892222-895398msec
Finally, random RW, QD1, 1J
Code:
 fio --filename=/mnt/ss300/nfs/test  --direct=1 --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=posixaio --bs=64k --rwmixread=0 --iodepth=1 --numjobs=1 --size=128G  --name=1G64K
1G64K: (g=0): rw=randrw, bs=(R) 64.0KiB-64.0KiB, (W) 64.0KiB-64.0KiB, (T) 64.0KiB-64.0KiB, ioengine=posixaio, iodepth=1
fio-3.16
Starting 1 process
Jobs: 1 (f=1): [w(1)][100.0%][w=657MiB/s][w=10.5k IOPS][eta 00m:00s]
1G64K: (groupid=0, jobs=1): err= 0: pid=32788: Sat May  2 11:41:10 2020
  write: IOPS=10.4k, BW=653MiB/s (684MB/s)(128GiB/200809msec)
    slat (nsec): min=1041, max=7018.8k, avg=2550.84, stdev=21081.44
    clat (nsec): min=1252, max=44413k, avg=83388.57, stdev=214042.72
     lat (usec): min=54, max=44418, avg=85.94, stdev=216.18
    clat percentiles (usec):
     |  1.00th=[   58],  5.00th=[   60], 10.00th=[   60], 20.00th=[   61],
     | 30.00th=[   62], 40.00th=[   62], 50.00th=[   63], 60.00th=[   64],
     | 70.00th=[   65], 80.00th=[   72], 90.00th=[   91], 95.00th=[  149],
     | 99.00th=[  326], 99.50th=[  685], 99.90th=[ 2409], 99.95th=[ 3752],
     | 99.99th=[ 8455]
   bw (  KiB/s): min=404311, max=885881, per=99.56%, avg=665444.69, stdev=149411.19, samples=401
   iops        : min= 6317, max=13841, avg=10397.13, stdev=2334.52, samples=401
  lat (usec)   : 2=0.01%, 4=0.02%, 10=0.01%, 20=0.08%, 50=0.11%
  lat (usec)   : 100=91.00%, 250=7.34%, 500=0.77%, 750=0.21%, 1000=0.12%
  lat (msec)   : 2=0.20%, 4=0.09%, 10=0.04%, 20=0.01%, 50=0.01%
  cpu          : usr=11.56%, sys=3.67%, ctx=2171962, majf=0, minf=2
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,2097152,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=653MiB/s (684MB/s), 653MiB/s-653MiB/s (684MB/s-684MB/s), io=128GiB (137GB), run=200809-200809msec
Here are some runs directly on the pmem device (no ZFS active, so probably async)
Code:
# fio --filename=/dev/pmem0  --direct=1 --rw=rw --refill_buffers --norandommap --randrepeat=0 --ioengine=posixaio --bs=64k --rwmixread=0 --iodepth=1 --numjobs=1 --size=12G  --name=test
test: (g=0): rw=rw, bs=(R) 64.0KiB-64.0KiB, (W) 64.0KiB-64.0KiB, (T) 64.0KiB-64.0KiB, ioengine=posixaio, iodepth=1
fio-3.16
Starting 1 process
Jobs: 1 (f=1): [W(1)][100.0%][w=2647MiB/s][w=42.3k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=35625: Sat May  2 14:06:28 2020
  write: IOPS=42.2k, BW=2640MiB/s (2768MB/s)(12.0GiB/4655msec)
    slat (nsec): min=2431, max=53275, avg=4076.88, stdev=309.91
    clat (usec): min=7, max=185, avg=10.36, stdev= 1.53
     lat (usec): min=11, max=190, avg=14.44, stdev= 1.59
    clat percentiles (nsec):
     |  1.00th=[ 9920],  5.00th=[10048], 10.00th=[10048], 20.00th=[10176],
     | 30.00th=[10176], 40.00th=[10176], 50.00th=[10304], 60.00th=[10304],
     | 70.00th=[10304], 80.00th=[10304], 90.00th=[10432], 95.00th=[10560],
     | 99.00th=[11968], 99.50th=[16512], 99.90th=[35072], 99.95th=[36096],
     | 99.99th=[62208]
   bw (  MiB/s): min= 2566, max= 2652, per=99.45%, avg=2625.15, stdev=28.06, samples=9
   iops        : min=41057, max=42445, avg=42001.67, stdev=449.22, samples=9
  lat (usec)   : 10=2.41%, 20=97.32%, 50=0.24%, 100=0.03%, 250=0.01%
  cpu          : usr=41.21%, sys=26.62%, ctx=196621, majf=0, minf=2
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,196608,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=2640MiB/s (2768MB/s), 2640MiB/s-2640MiB/s (2768MB/s-2768MB/s), io=12.0GiB (12.9GB), run=4655-4655msec
root@freenas11[~]# fio --filename=/dev/pmem0  --direct=1 --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=posixaio --bs=64k --rwmixread=0 --iodepth=1 --numjobs=1 --size=12G  --name=test
test: (g=0): rw=randrw, bs=(R) 64.0KiB-64.0KiB, (W) 64.0KiB-64.0KiB, (T) 64.0KiB-64.0KiB, ioengine=posixaio, iodepth=1
fio-3.16
Starting 1 process
Jobs: 1 (f=1): [w(1)][100.0%][w=2646MiB/s][w=42.3k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=35629: Sat May  2 14:06:54 2020
  write: IOPS=42.3k, BW=2644MiB/s (2773MB/s)(12.0GiB/4647msec)
    slat (nsec): min=2372, max=60753, avg=4036.89, stdev=309.55
    clat (usec): min=8, max=306, avg=10.41, stdev= 1.81
     lat (usec): min=11, max=312, avg=14.45, stdev= 1.88
    clat percentiles (nsec):
     |  1.00th=[ 9920],  5.00th=[10048], 10.00th=[10048], 20.00th=[10176],
     | 30.00th=[10176], 40.00th=[10304], 50.00th=[10304], 60.00th=[10304],
     | 70.00th=[10432], 80.00th=[10432], 90.00th=[10560], 95.00th=[10560],
     | 99.00th=[11968], 99.50th=[16512], 99.90th=[35072], 99.95th=[38656],
     | 99.99th=[68096]
   bw (  MiB/s): min= 2586, max= 2654, per=99.17%, avg=2622.22, stdev=26.78, samples=9
   iops        : min=41387, max=42471, avg=41955.11, stdev=428.89, samples=9
  lat (usec)   : 10=3.01%, 20=96.70%, 50=0.25%, 100=0.03%, 250=0.01%
  lat (usec)   : 500=0.01%
  cpu          : usr=43.65%, sys=23.87%, ctx=196647, majf=0, minf=2
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,196608,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=2644MiB/s (2773MB/s), 2644MiB/s-2644MiB/s (2773MB/s-2773MB/s), io=12.0GiB (12.9GB), run=4647-4647msec
root@freenas11[~]# fio --filename=/dev/pmem0  --direct=1 --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=posixaio --bs=64k --rwmixread=0 --iodepth=32 --numjobs=8 --size=12G  --name=test
test: (g=0): rw=randrw, bs=(R) 64.0KiB-64.0KiB, (W) 64.0KiB-64.0KiB, (T) 64.0KiB-64.0KiB, ioengine=posixaio, iodepth=32
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [w(8)][100.0%][w=12.4GiB/s][w=204k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=35652: Sat May  2 14:07:49 2020
  write: IOPS=25.5k, BW=1595MiB/s (1673MB/s)(12.0GiB/7702msec)
<<<snip>>>
test: (groupid=0, jobs=1): err= 0: pid=35659: Sat May  2 14:07:49 2020
  write: IOPS=25.3k, BW=1580MiB/s (1657MB/s)(12.0GiB/7778msec)
    slat (usec): min=3, max=13621, avg= 7.75, stdev=60.43
    clat (usec): min=28, max=27345, avg=1191.44, stdev=926.32
     lat (usec): min=35, max=27353, avg=1199.19, stdev=928.95
    clat percentiles (usec):
     |  1.00th=[   84],  5.00th=[  157], 10.00th=[  245], 20.00th=[  420],
     | 30.00th=[  586], 40.00th=[  783], 50.00th=[  988], 60.00th=[ 1188],
     | 70.00th=[ 1434], 80.00th=[ 1860], 90.00th=[ 2507], 95.00th=[ 3032],
     | 99.00th=[ 3884], 99.50th=[ 4146], 99.90th=[ 4752], 99.95th=[ 4948],
     | 99.99th=[14222]
   bw (  MiB/s): min= 1521, max= 1630, per=12.53%, avg=1577.89, stdev=30.71, samples=15
   iops        : min=24346, max=26086, avg=25246.27, stdev=491.32, samples=15
  lat (usec)   : 50=0.04%, 100=1.74%, 250=8.55%, 500=14.24%, 750=14.06%
  lat (usec)   : 1000=12.15%
  lat (msec)   : 2=31.83%, 4=16.64%, 10=0.71%, 20=0.04%, 50=0.01%
  cpu          : usr=33.16%, sys=27.50%, ctx=63195, majf=0, minf=2
  IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.8%, 16=67.9%, 32=31.1%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=94.7%, 8=3.6%, 16=1.5%, 32=0.2%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,196608,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
  WRITE: bw=12.3GiB/s (13.2GB/s), 1574MiB/s-1635MiB/s (1650MB/s-1715MB/s), io=96.0GiB (103GB), run=7515-7808msec
And here, for fun, the QD1/1J comparison against a 280GB 900p (directly on the device)
Code:
fio --filename=/dev/pmem0  --direct=1 --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=posixaio --bs=64k --rwmixread=0 --iodepth=1 --numjobs=1 --size=12G  --name=test
test: (g=0): rw=randrw, bs=(R) 64.0KiB-64.0KiB, (W) 64.0KiB-64.0KiB, (T) 64.0KiB-64.0KiB, ioengine=posixaio, iodepth=1
fio-3.16
Starting 1 process
Jobs: 1 (f=1): [w(1)][100.0%][w=2669MiB/s][w=42.7k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=2656: Sat May  2 14:45:25 2020
  write: IOPS=42.9k, BW=2682MiB/s (2813MB/s)(12.0GiB/4581msec)
    slat (nsec): min=2310, max=56999, avg=3311.99, stdev=336.49
    clat (nsec): min=1847, max=143475, avg=10012.08, stdev=1546.86
     lat (usec): min=11, max=146, avg=13.32, stdev= 1.64
    clat percentiles (nsec):
     |  1.00th=[ 9536],  5.00th=[ 9664], 10.00th=[ 9664], 20.00th=[ 9792],
     | 30.00th=[ 9792], 40.00th=[ 9792], 50.00th=[ 9920], 60.00th=[ 9920],
     | 70.00th=[ 9920], 80.00th=[10048], 90.00th=[10048], 95.00th=[10304],
     | 99.00th=[11968], 99.50th=[16512], 99.90th=[34560], 99.95th=[36096],
     | 99.99th=[61696]
   bw (  MiB/s): min= 2623, max= 2750, per=99.41%, avg=2666.48, stdev=41.09, samples=9
   iops        : min=41976, max=44001, avg=42663.11, stdev=657.32, samples=9
  lat (usec)   : 2=0.01%, 10=80.02%, 20=19.68%, 50=0.27%, 100=0.02%
  lat (usec)   : 250=0.01%
  cpu          : usr=48.28%, sys=20.76%, ctx=196647, majf=0, minf=2
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,196608,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=2682MiB/s (2813MB/s), 2682MiB/s-2682MiB/s (2813MB/s-2813MB/s), io=12.0GiB (12.9GB), run=4581-4581msec
root@freenas11[~]# fio --filename=/dev/nvd0p1  --direct=1 --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=posixaio --bs=64k --rwmixread=0 --iodepth=1 --numjobs=1 --size=12G  --name=test
test: (g=0): rw=randrw, bs=(R) 64.0KiB-64.0KiB, (W) 64.0KiB-64.0KiB, (T) 64.0KiB-64.0KiB, ioengine=posixaio, iodepth=1
fio-3.16
Starting 1 process
Jobs: 1 (f=1): [w(1)][100.0%][w=1022MiB/s][w=16.4k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=2669: Sat May  2 14:45:45 2020
  write: IOPS=16.4k, BW=1025MiB/s (1074MB/s)(12.0GiB/11992msec)
    slat (nsec): min=2211, max=33186, avg=4289.05, stdev=1467.91
    clat (usec): min=31, max=262, avg=44.61, stdev=17.46
     lat (usec): min=37, max=268, avg=48.90, stdev=17.61
    clat percentiles (usec):
     |  1.00th=[   37],  5.00th=[   37], 10.00th=[   37], 20.00th=[   37],
     | 30.00th=[   37], 40.00th=[   37], 50.00th=[   39], 60.00th=[   41],
     | 70.00th=[   43], 80.00th=[   44], 90.00th=[   65], 95.00th=[   89],
     | 99.00th=[  118], 99.50th=[  133], 99.90th=[  157], 99.95th=[  174],
     | 99.99th=[  215]
   bw (  KiB/s): min=1033434, max=1063217, per=99.77%, avg=1046826.91, stdev=8184.10, samples=23
   iops        : min=16147, max=16612, avg=16356.26, stdev=127.87, samples=23
  lat (usec)   : 50=87.08%, 100=10.72%, 250=2.19%, 500=0.01%
  cpu          : usr=22.49%, sys=11.82%, ctx=196624, majf=0, minf=2
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,196608,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=1025MiB/s (1074MB/s), 1025MiB/s-1025MiB/s (1074MB/s-1074MB/s), io=12.0GiB (12.9GB), run=11992-11992msec
Snippet of the same with QD32, 8J (pmem vs 900p)
Code:
 fio --filename=/dev/pmem0  --direct=1 --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=posixaio --bs=64k --rwmixread=0 --iodepth=32 --numjobs=8 --size=12G  --name=test
test: (g=0): rw=randrw, bs=(R) 64.0KiB-64.0KiB, (W) 64.0KiB-64.0KiB, (T) 64.0KiB-64.0KiB, ioengine=posixaio, iodepth=32
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [w(8)][100.0%][w=12.8GiB/s][w=210k IOPS][eta 00m:00s]

fio --filename=/dev/nvd0p1  --direct=1 --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=posixaio --bs=64k --rwmixread=0 --iodepth=32 --numjobs=8 --size=12G  --name=test
test: (g=0): rw=randrw, bs=(R) 64.0KiB-64.0KiB, (W) 64.0KiB-64.0KiB, (T) 64.0KiB-64.0KiB, ioengine=posixaio, iodepth=32
...
fio-3.16
Starting 8 processes
Jobs: 3 (f=3): [_(4),w(3),_(1)][100.0%][w=1908MiB/s][w=30.5k IOPS][eta 00m:00s]

And for good measure, the same pool from above (first set of tests), now with an Optane SLOG (280G 900p) instead of pmem:

Code:
Stream:
fio --filename=/mnt/ss300/nfs/test  --direct=1 --rw=rw --refill_buffers --norandommap --randrepeat=0 --ioengine=posixaio --bs=64k --rwmixread=0 --iodepth=1 --numjobs=1 --size=128G  --name=test
test: (g=0): rw=rw, bs=(R) 64.0KiB-64.0KiB, (W) 64.0KiB-64.0KiB, (T) 64.0KiB-64.0KiB, ioengine=posixaio, iodepth=1
fio-3.16
Starting 1 process
Jobs: 1 (f=1): [W(1)][100.0%][w=474MiB/s][w=7585 IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=2963: Sat May  2 14:53:36 2020
  write: IOPS=7724, BW=483MiB/s (506MB/s)(128GiB/271507msec)
    slat (nsec): min=1008, max=45884k, avg=2250.10, stdev=37633.41
    clat (nsec): min=1371, max=253576k, avg=116384.47, stdev=231731.09
     lat (usec): min=80, max=253578, avg=118.63, stdev=235.53
    clat percentiles (usec):
     |  1.00th=[   83],  5.00th=[   84], 10.00th=[   85], 20.00th=[   86],
     | 30.00th=[   87], 40.00th=[   90], 50.00th=[   96], 60.00th=[  102],
     | 70.00th=[  115], 80.00th=[  137], 90.00th=[  165], 95.00th=[  190],
     | 99.00th=[  285], 99.50th=[  396], 99.90th=[ 1303], 99.95th=[ 2008],
     | 99.99th=[ 5604]
   bw (  KiB/s): min=337920, max=567313, per=99.43%, avg=491540.33, stdev=52506.75, samples=543
   iops        : min= 5280, max= 8864, avg=7679.88, stdev=820.42, samples=543
  lat (usec)   : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01%
  lat (usec)   : 100=57.11%, 250=41.59%, 500=0.89%, 750=0.15%, 1000=0.08%
  lat (msec)   : 2=0.10%, 4=0.03%, 10=0.01%, 20=0.01%, 50=0.01%
  lat (msec)   : 100=0.01%, 500=0.01%
  cpu          : usr=9.42%, sys=2.93%, ctx=2159386, majf=0, minf=2
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,2097152,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=483MiB/s (506MB/s), 483MiB/s-483MiB/s (506MB/s-506MB/s), io=128GiB (137GB), run=271507-271507msec

randrw
fio --filename=/mnt/ss300/nfs/test  --direct=1 --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=posixaio --bs=64k --rwmixread=0 --iodepth=1 --numjobs=1 --size=128G  --name=test
test: (g=0): rw=randrw, bs=(R) 64.0KiB-64.0KiB, (W) 64.0KiB-64.0KiB, (T) 64.0KiB-64.0KiB, ioengine=posixaio, iodepth=1
fio-3.16
Starting 1 process
Jobs: 1 (f=1): [w(1)][100.0%][w=457MiB/s][w=7312 IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=3050: Sat May  2 14:58:17 2020
  write: IOPS=7671, BW=479MiB/s (503MB/s)(128GiB/273383msec)
    slat (nsec): min=1013, max=7364.2k, avg=2157.55, stdev=19040.70
    clat (nsec): min=1474, max=322584k, avg=117235.62, stdev=298368.14
     lat (usec): min=79, max=322585, avg=119.39, stdev=299.76
    clat percentiles (usec):
     |  1.00th=[   84],  5.00th=[   86], 10.00th=[   87], 20.00th=[   88],
     | 30.00th=[   89], 40.00th=[   90], 50.00th=[   94], 60.00th=[   99],
     | 70.00th=[  109], 80.00th=[  127], 90.00th=[  159], 95.00th=[  172],
     | 99.00th=[  265], 99.50th=[  586], 99.90th=[ 2507], 99.95th=[ 3916],
     | 99.99th=[ 8160]
   bw (  KiB/s): min=279100, max=598994, per=99.39%, avg=487959.87, stdev=75674.69, samples=546
   iops        : min= 4360, max= 9359, avg=7623.90, stdev=1182.42, samples=546
  lat (usec)   : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01%
  lat (usec)   : 100=61.35%, 250=37.51%, 500=0.55%, 750=0.16%, 1000=0.10%
  lat (msec)   : 2=0.17%, 4=0.09%, 10=0.04%, 20=0.01%, 50=0.01%
  lat (msec)   : 500=0.01%
  cpu          : usr=9.43%, sys=3.02%, ctx=2150513, majf=0, minf=2
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,2097152,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=479MiB/s (503MB/s), 479MiB/s-479MiB/s (503MB/s-503MB/s), io=128GiB (137GB), run=273383-273383msec
 


Rand__

I really would have thought that more people would be interested :)

If you're worried it won't run, then I am happy to get you a working system - I've got some spare 3647 boards/CPUs/memory...

Or o/c I'm happy to help you build a system :)
 

Rand__

What do you want to see - memory modules, cables, or PowerGems?

Nothing fancy on any of them, but I'm happy to snap a few pics :)
 

tiernano

I am going to ask a couple of potentially stupid questions... I've got an X11 Supermicro board which currently has 16 slots, 6 of them used by RAM. If putting these in, do I need just 1 or 2? In the photos I can see the DIMM itself, but what is the other thing that looks like an SSD? How does that work?

Thanks.
 

Rand__

That's the PowerGem, which keeps the memory powered on power loss, like a RAID controller battery.

You need as many as you want for whatever you do with them - mine are 16G modules, so I use a single one for ZFS, but you could put eight in for a fast 128G persistent-memory block device.

I currently have five memory modules installed (for 512G) and one pmem module, as it was not clear whether it would affect interleaving, bandwidth etc. This was also the best combination of large modules and NVDimm I was able to get to work properly.
 

Rand__

Bump.
Shipping outside the EU might be possible; I will need to check on a per-case basis.
 

Bjorn Smith

@Rand_ stupid question - but aren't there any NVDIMMs available that have everything on board, i.e. power capacitors etc.? It seems like an "intermediate" design that needs an external "battery". I understand it if you want to use it as a persistent hard drive that can live for "days", but for ZFS purposes it only needs to hold power until it has flushed the memory to its built-in flash chips - and those do not need power.
To quote from Wikipedia: "Therefore, modern NVDIMMs use on-board supercapacitors to store energy."

And I am sorry - I am not trying to derail your sale, I am just trying to understand whether a PowerGem is yesterday's tech or still normal.
 

Rand__

Actually there are two ways of supplying power: with an individual cell (what I have), or through a separate power supply/central battery which supplies power over the bus.
I would have preferred the latter, but that's quite expensive (for SM), so I was fine with the PowerGems.

The question is how long you want to keep the data safe; o/c for a SLOG a couple of seconds are sufficient, as long as there is power to support your main storage device.
But if the power goes down and your main drive can no longer take the data from your SLOG, do you want to lose it before you can replace the PSU or the generator is up?
No - the goal is to keep the data safe in memory (which is normally non-persistent; it's not flash) until power has been restored and it can be saved to an inherently persistent device.

In regard to having everything on board: memory modules have very limited space, so you can't put enough power "on board" to sustain hours or days of downtime.

Bear in mind that the NVDimm's original purpose was not being a cache device per se; you can also use it as a high-performance drive for a smaller database, or for whatever other data you have that needs to survive a power outage but needs memory-like performance.

O/c for the SLOG a single module is enough, but it's no problem to join multiple into a 128G device if you have enough memory slots (a sketch of that follows below).
That's why Intel is doing Optane memory now (NVDimm-P) - slower than -N, but significantly larger.
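A rough idea of what joining several modules into one device would look like on Linux with ndctl - assuming the BIOS presents the modules as a single interleaved set; region0, the ext4 filesystem and the mount point are placeholders:

Code:
# create one fsdax namespace across the interleaved region spanning all modules
ndctl create-namespace --mode fsdax --region region0
# put a filesystem on it and mount with DAX to bypass the page cache
mkfs.ext4 /dev/pmem0
mount -o dax /dev/pmem0 /mnt/pmem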
 

zackiv31

Also going to ask a dumb question. If my motherboard supports Intel Optane memory, are these better or worse than, say, a 128GB Intel Optane DIMM (which I think goes for ~$200 on eBay)? What are the pros/cons of either?
 

Rand__

Well I am afraid the usual answer applies - it depends.

NVDimm-N is actually faster than Optane memory (NVDimm-P) due to its design. NVDimm-Ns are actual memory modules with an add-on battery that keeps the volatile memory cells under power, keeping the stored data safe.
The nice thing is, if you don't want to run persistent memory, you can simply use the NVDimm-Ns as regular memory modules (at least when I was looking, they were cheaper than similarly sized regular memory modules ;))

NVDimm-P basically works similarly to regular flash SSDs, with a volatile and a persistent part. It's certainly easier to run (no extra battery module), but in order to work you not only need a MoBo that supports it, but also at least a Scalable Refresh CPU (for Optane 100, or even newer for the 200+ series).

I have not seen any performance comparison between the two (the one I found was not accessible anymore), and since my TrueNAS box runs a 5122, I have not played with Optane (100). But based on the SNIA presentation linked below, it's fairly safe to say that -N is inherently faster than -P. -P's benefits are that it's easier to deploy and comes in larger sizes.

Edit:
This contains performance for an Optane DCPMM module.
Also see this thread for a discussion:

Whether you need better performance or benefit more from the larger memory size is totally dependent on your use case - for a SLOG the answer is clear: latency is key, and there -N wins.

Edit: Quote from Intel: (Optane Persistent Memory vs Optane SSDs – confused? Then read on – Blocks and Files)

The characteristics of Optane™ are such that its latency is higher than DDR4 DRAM, meaning it is slower to access, but close enough to DRAM speed that it can be treated as a slower tier of main memory. For example, Optane™ DC Persistent Memory has latency up to about 350ns, compared with 10 to 20ns for DDR4, but this still makes it up to a thousand times faster than the NAND flash used in the vast majority of SSDs.


Another key attribute of Optane™ DC Persistent Memory is that is currently available in higher capacities than DRAM modules, at up to 512GB. It also costs less than DRAM, with some sources putting the cost of a 512GB Optane™ DIMM at about the same price as the highest capacity 256GB DRAM DIMM.







1608275971634.png
 

Rand__

Found a nice article...

O/c I still can't get 'em to run on ESXi, but SM seems willing to help with some BIOS adjustments (making settings accessible)...
So I still have hope :)

Edit:
SM declined to help - the above apparently was referring to DCPMM, and NVDimm-N has not been selling well for them, so they stopped supporting it and thus will not spend any time on getting this to run with ESXi.
 