[ebay BO] RMS-200/8G PCI-e NVRAM Accelerator $360


oxynazin

Member
Dec 10, 2018
RMS-200/8G PCI-e NVRAM Accelerator | eBay

Maybe not a good deal but interesting and rare hardware anyway.

On-Board Ultracapacitors (no remote ultracapacitor pack required)
Device Drivers: NVMe, Linux 2.6.38 and above
Write Throughput: 5 GB/s
Random 4K Write IOPS: 900K
Read Throughput: 5.2 GB/s
Random 4K Read IOPS: 1M

http://www.radianmemory.com/wp-content/uploads/2016/07/RMS-200-Data-Sheet-ver-1-7.pdf

Can't find much info. According to the site, the user guides are available only to customers.
Anybody have experience with this or other RadianMemory products?
 

Samir

Post Liker and Deal Hunter Extraordinaire!
Jul 21, 2017
I think this is the same seller @BLinux posted in a different thread, suggesting people contact them directly and make a reasonable offer on liquidation items.
I believe it is a different company. I ordered from the one in that thread and don't remember their listings looking like this on ebay.
Hey, at least it's more accurate than those that have a picture of a ceramic rhinoceros instead. It probably will arrive in a box. :)
HAHAHA!!! I've seen that company's listings too. :D

I get it about branding and whatnot, but it's kinda gimmicky when it's just a rhino in the pic and not one near or on top of the item. o_O
 

Monoman

Active Member
Oct 16, 2013
I have an Optane 905p 480GB on my 24-drive array (S3700 200GB drives) and it barely gets used as is. I'm using ZoL under Proxmox and the ZIL is dog slow there. I'm not in a position to test, as this system is semi-production... but I will benchmark it before moving it onto the zpool.
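For anyone following along, swapping the SLOG and checking whether it actually sees traffic might look roughly like this under ZoL (a sketch only: "tank" is a hypothetical pool name, and the device names are assumed to match the fio/ioping output later in the thread):

```
zpool status tank               # confirm which vdev is currently the log device
zpool remove tank nvme0n1       # detach the Optane 905p SLOG
zpool add tank log nvme1n1      # attach the RMS-200 as the new SLOG
zpool iostat -v tank 5          # watch whether the log vdev is getting writes
```

`zpool iostat -v` breaks traffic out per vdev, which is the easiest way to confirm whether sync writes are actually landing on the log device or the ZIL is staying in-pool.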
 

rshakin

New Member
Jan 15, 2019
So, any updates on this? Very interesting if this would work. I have something similar for my ZIL and it works great, but there's no driver support, and getting drivers has been an uphill battle for me. Otherwise it works great until you do a full power-off of the system. For now I'm removing it from the storage pool on power cycle via some scripts and reinserting it when the server comes back up. Very dirty and not a great approach: if my UPS dies for some reason, my ZFS pools go down, all because you need the vendor's CLI utility to enable flush-to-non-volatile-memory. So let us know how it all works out for you. I was able to get sub-40 µs access times on mine with 4K blocks.
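The kind of power-cycle hook described above could be wired into systemd rather than ad-hoc scripts; a rough sketch, where "tank" and the device path are hypothetical and would need to match your pool (ExecStart runs at boot, ExecStop at shutdown):

```ini
# /etc/systemd/system/rms200-slog.service  (hypothetical unit name)
[Unit]
Description=Attach/detach RMS-200 SLOG around power cycles
After=zfs.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/sbin/zpool add tank log /dev/nvme1n1
ExecStop=/sbin/zpool remove tank /dev/nvme1n1

[Install]
WantedBy=multi-user.target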
 

Monoman

Active Member
Oct 16, 2013
I still have it on my test bench, not installed, as I haven't scheduled a downtime window yet. I'm trying!!
 

Monoman

Active Member
Oct 16, 2013
Here are some fio numbers.

Code:
fio --filename=/dev/nvme1n1 --direct=1 --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=libaio --bs=4k --rwmixread=50 --iodepth=16 --numjobs=16 --runtime=60 --group_reporting --name=4ktest

Optane 905p 480GB

Code:
4ktest: (groupid=0, jobs=16): err= 0: pid=21041: Wed May  1 13:47:07 2019
   read: IOPS=140k, BW=546MiB/s (573MB/s)(31.0GiB/60002msec)
    slat (usec): min=2, max=6925, avg=73.20, stdev=40.12
    clat (usec): min=10, max=8248, avg=860.32, stdev=206.83
     lat (usec): min=13, max=8352, avg=933.93, stdev=213.17
    clat percentiles (usec):
     |  1.00th=[  486],  5.00th=[  586], 10.00th=[  644], 20.00th=[  709],
     | 30.00th=[  758], 40.00th=[  807], 50.00th=[  848], 60.00th=[  889],
     | 70.00th=[  938], 80.00th=[  996], 90.00th=[ 1090], 95.00th=[ 1156],
     | 99.00th=[ 1352], 99.50th=[ 1467], 99.90th=[ 2409], 99.95th=[ 3720],
     | 99.99th=[ 5342]
   bw (  KiB/s): min=31880, max=36824, per=6.25%, avg=34948.77, stdev=880.07, samples=1905
   iops        : min= 7970, max= 9206, avg=8737.17, stdev=220.02, samples=1905
  write: IOPS=140k, BW=546MiB/s (572MB/s)(31.0GiB/60002msec)
    slat (usec): min=2, max=6911, avg=34.65, stdev=44.89
    clat (usec): min=8, max=8189, avg=858.27, stdev=206.16
     lat (usec): min=16, max=8205, avg=893.35, stdev=213.34
    clat percentiles (usec):
     |  1.00th=[  486],  5.00th=[  586], 10.00th=[  635], 20.00th=[  709],
     | 30.00th=[  758], 40.00th=[  807], 50.00th=[  848], 60.00th=[  889],
     | 70.00th=[  938], 80.00th=[  996], 90.00th=[ 1074], 95.00th=[ 1156],
     | 99.00th=[ 1352], 99.50th=[ 1450], 99.90th=[ 2376], 99.95th=[ 3720],
     | 99.99th=[ 5342]
   bw (  KiB/s): min=31424, max=37320, per=6.25%, avg=34939.35, stdev=964.80, samples=1905
   iops        : min= 7856, max= 9330, avg=8734.82, stdev=241.20, samples=1905
  lat (usec)   : 10=0.01%, 20=0.01%, 50=0.01%, 100=0.01%, 250=0.01%
  lat (usec)   : 500=1.32%, 750=26.77%, 1000=52.42%
  lat (msec)   : 2=19.36%, 4=0.09%, 10=0.04%
  cpu          : usr=4.13%, sys=17.81%, ctx=14697408, majf=0, minf=228
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=100.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwt: total=8387308,8384795,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=16

Run status group 0 (all jobs):
   READ: bw=546MiB/s (573MB/s), 546MiB/s-546MiB/s (573MB/s-573MB/s), io=31.0GiB (34.4GB), run=60002-60002msec
  WRITE: bw=546MiB/s (572MB/s), 546MiB/s-546MiB/s (572MB/s-572MB/s), io=31.0GiB (34.3GB), run=60002-60002msec

Disk stats (read/write):
  nvme0n1: ios=8369618/8368016, merge=0/560, ticks=172122/154835, in_queue=0, util=100.00%

RMS-200/8GB


Code:
4ktest: (groupid=0, jobs=16): err= 0: pid=21406: Wed May  1 13:47:58 2019
   read: IOPS=621k, BW=2425MiB/s (2543MB/s)(63.0GiB/27005msec)
    slat (nsec): min=1400, max=39873k, avg=2815.80, stdev=39765.27
    clat (usec): min=13, max=40062, avg=340.23, stdev=200.48
     lat (usec): min=16, max=40069, avg=343.39, stdev=204.32
    clat percentiles (usec):
     |  1.00th=[  223],  5.00th=[  281], 10.00th=[  297], 20.00th=[  314],
     | 30.00th=[  322], 40.00th=[  334], 50.00th=[  338], 60.00th=[  347],
     | 70.00th=[  355], 80.00th=[  367], 90.00th=[  379], 95.00th=[  392],
     | 99.00th=[  416], 99.50th=[  433], 99.90th=[  685], 99.95th=[  979],
     | 99.99th=[ 5997]
   bw (  KiB/s): min=149096, max=161400, per=6.33%, avg=157185.84, stdev=1743.88, samples=848
   iops        : min=37274, max=40350, avg=39296.42, stdev=435.97, samples=848
  write: IOPS=620k, BW=2424MiB/s (2542MB/s)(63.9GiB/27005msec)
    slat (nsec): min=1430, max=39877k, avg=2846.35, stdev=43966.67
    clat (nsec): min=790, max=40068k, avg=58458.92, stdev=144154.61
     lat (usec): min=12, max=40077, avg=61.64, stdev=150.77
    clat percentiles (usec):
     |  1.00th=[   25],  5.00th=[   33], 10.00th=[   38], 20.00th=[   44],
     | 30.00th=[   48], 40.00th=[   52], 50.00th=[   56], 60.00th=[   60],
     | 70.00th=[   65], 80.00th=[   71], 90.00th=[   80], 95.00th=[   88],
     | 99.00th=[  112], 99.50th=[  130], 99.90th=[  217], 99.95th=[  306],
     | 99.99th=[  922]
   bw (  KiB/s): min=147720, max=162816, per=6.33%, avg=157107.14, stdev=2134.38, samples=848
   iops        : min=36930, max=40704, avg=39276.75, stdev=533.61, samples=848
  lat (nsec)   : 1000=0.01%
  lat (usec)   : 10=0.01%, 20=0.10%, 50=17.94%, 100=31.07%, 250=1.69%
  lat (usec)   : 500=49.07%, 750=0.09%, 1000=0.02%
  lat (msec)   : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01%
  cpu          : usr=16.58%, sys=36.83%, ctx=11878808, majf=0, minf=193
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=100.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwt: total=16765336,16756328,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=16

Run status group 0 (all jobs):
   READ: bw=2425MiB/s (2543MB/s), 2425MiB/s-2425MiB/s (2543MB/s-2543MB/s), io=63.0GiB (68.7GB), run=27005-27005msec
  WRITE: bw=2424MiB/s (2542MB/s), 2424MiB/s-2424MiB/s (2542MB/s-2542MB/s), io=63.9GiB (68.6GB), run=27005-27005msec

Disk stats (read/write):
  nvme1n1: ios=16756937/16747592, merge=0/0, ticks=5514539/830109, in_queue=0, util=99.10%
 

rshakin

New Member
Jan 15, 2019
Does it keep partitions after a shutdown/power cycle? See if the data persists across power loss, if possible.
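A minimal way to check that, assuming the card shows up as /dev/nvme1n1 like in the fio run above (the pattern/checksum file paths are made up, and the script falls back to a scratch file so it can be dry-run on a box without the card):

```shell
#!/bin/sh
# Write a known pattern, checksum it, power-cycle the box, then read it
# back and compare. Assumption: the RMS-200 is /dev/nvme1n1.
DEV=/dev/nvme1n1
if [ ! -b "$DEV" ]; then
    DEV=$(mktemp)    # dry-run stand-in when the card isn't present
fi

dd if=/dev/urandom of=/tmp/pattern.bin bs=1M count=4 status=none
dd if=/tmp/pattern.bin of="$DEV" bs=1M conv=fsync status=none
sha256sum /tmp/pattern.bin | cut -d' ' -f1 > /tmp/sum.before

# ---- full power-off here, wait a while, power back on ----

dd if="$DEV" of=/tmp/readback.bin bs=1M count=4 status=none
sha256sum /tmp/readback.bin | cut -d' ' -f1 > /tmp/sum.after
cmp -s /tmp/sum.before /tmp/sum.after && echo "data survived" || echo "DATA LOST"
```

Leaving the machine unplugged for longer than the ultracapacitors' hold-up time would be the real test of the backup path.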
 

Monoman

Active Member
Oct 16, 2013
ioping results.
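The exact invocations aren't shown, but from the output (4 KiB requests, ioping's default size, ten samples each) they were presumably something like:

```
ioping -c 10 .             # Optane, through the ext4 mount
ioping -c 10 /dev/nvme1n1  # RMS-200, raw block device
```

Worth noting the two runs aren't quite apples-to-apples: one goes through ext4, the other hits the block device directly.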

OPTANE

Code:
4 KiB <<< . (ext4 /dev/nvme0n1p1): request=1 time=22.3 us (warmup)
4 KiB <<< . (ext4 /dev/nvme0n1p1): request=2 time=107.3 us
4 KiB <<< . (ext4 /dev/nvme0n1p1): request=3 time=98.0 us
4 KiB <<< . (ext4 /dev/nvme0n1p1): request=4 time=60.2 us
4 KiB <<< . (ext4 /dev/nvme0n1p1): request=5 time=102.2 us
4 KiB <<< . (ext4 /dev/nvme0n1p1): request=6 time=62.1 us
4 KiB <<< . (ext4 /dev/nvme0n1p1): request=7 time=47.1 us (fast)
4 KiB <<< . (ext4 /dev/nvme0n1p1): request=8 time=53.1 us
4 KiB <<< . (ext4 /dev/nvme0n1p1): request=9 time=63.1 us
4 KiB <<< . (ext4 /dev/nvme0n1p1): request=10 time=52.7 us

RMS200


Code:
4 KiB <<< /dev/nvme1n1 (block device 7.99 GiB): request=1 time=41.9 us (warmup)
4 KiB <<< /dev/nvme1n1 (block device 7.99 GiB): request=2 time=48.7 us
4 KiB <<< /dev/nvme1n1 (block device 7.99 GiB): request=3 time=44.0 us
4 KiB <<< /dev/nvme1n1 (block device 7.99 GiB): request=4 time=62.6 us
4 KiB <<< /dev/nvme1n1 (block device 7.99 GiB): request=5 time=53.7 us
4 KiB <<< /dev/nvme1n1 (block device 7.99 GiB): request=6 time=45.0 us
4 KiB <<< /dev/nvme1n1 (block device 7.99 GiB): request=7 time=43.5 us (fast)
4 KiB <<< /dev/nvme1n1 (block device 7.99 GiB): request=8 time=57.4 us
4 KiB <<< /dev/nvme1n1 (block device 7.99 GiB): request=9 time=46.3 us
4 KiB <<< /dev/nvme1n1 (block device 7.99 GiB): request=10 time=44.2 us (fast)