New Viking RAM-Drives


capn_pineapple

Active Member
Aug 28, 2013
356
80
28

i386

Well-Known Member
Mar 18, 2016
4,220
1,540
113
34
Germany
Sadly no numbers about performance, especially for 4k random reads or writes @ QD1/1 thread :/
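If anyone gets hold of one, a quick way to produce exactly those numbers would be an fio run along these lines (device path and runtime are placeholders, assuming Linux with fio installed; swap randread for randwrite for the write side):

fio --name=4k-qd1 --filename=/dev/nvme0n1 --rw=randread --bs=4k --iodepth=1 --numjobs=1 --direct=1 --ioengine=libaio --runtime=60 --time_based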
 

whitey

Moderator
Jun 30, 2014
2,766
868
113
41
I have my doubts that this will conquer the 'king of ZIL'...AKA Optane
 

zir_blazer

Active Member
Dec 5, 2016
355
128
43
What is the expected use case for these? What can be small enough to fit in a RAMDisk, be random I/O intensive and require unlimited endurance? I can only think of a small scratch disk, or a buffer for heavy logging that later gets copied sequentially to an HD or so. The whitepaper mentions logging and caching.
RAMDisk performance is heavily capped if you use it via an interface that isn't the standard memory bus, so from a performance standpoint, a hardware RAMDisk limited to NVMe is far inferior to a software RAMDisk that uses system RAM. The only thing that makes these useful is when you need to treat them as persistent storage thanks to the battery. However, they would now be competing against Optane, which significantly increases random I/O performance over standard NAND, so the gap where the cost/size ratio and the volatility inconvenience might justify these hardware RAMDisk devices is even smaller than before, and with RAM prices sky high... why bother?

In a virtualized environment, you can actually get a non-volatile software RAMDisk by creating one in the host and passing it to the VM as a raw block device with a paravirtualized controller; as long as you don't reboot the host, you can reboot the guest and still have the data in the RAMDisk. However, the paravirtualized controller and the file system significantly hurt performance. I still haven't figured out whether you can use some direct I/O RAMDisk type (one that avoids the file system overhead), but it seems that would require creating it inside the VM with standard RAMDisk software...
 

nk215

Active Member
Oct 6, 2015
412
143
43
49
In a virtualized environment, you can actually get a non-volatile software RAMDisk by creating one in the host and passing it to the VM as a raw block device with a paravirtualized controller; as long as you don't reboot the host, you can reboot the guest and still have the data in the RAMDisk. However, the paravirtualized controller and the file system significantly hurt performance. I still haven't figured out whether you can use some direct I/O RAMDisk type (one that avoids the file system overhead), but it seems that would require creating it inside the VM with standard RAMDisk software...
Can you point me to a way to create a ramdisk on the ESXi host and use that ramdisk to host a datastore? I don't think it's possible on ESXi. The workaround is to create a ramdisk in a VM and then serve it back to the host to put a datastore on. That creates a bunch more unneeded traffic.

This is all for experimentation only. Right now I am happy with a ramdisk inside the guest, used for the guest only.
 

zir_blazer

Active Member
Dec 5, 2016
355
128
43
Can you point me to a way to create a ramdisk on the ESXi host and use that ramdisk to host a datastore? I don't think it's possible on ESXi. The workaround is to create a ramdisk in a VM and then serve it back to the host to put a datastore on. That creates a bunch more unneeded traffic.

This is all for experimentation only. Right now I am happy with a ramdisk inside the guest, used for the guest only.
I don't use ESXi, and know little about it. I use QEMU-KVM with an Arch Linux host. There are two ways to create a RAMDisk in Linux.

The best-known one is to use the Linux tmpfs file system (there is another one called ramfs, but tmpfs is supposedly better):

mount -t tmpfs -o size=20G tmpfs /root/vms/ramdisk

The directory has to exist beforehand.
You then need to create a VM image file (think raw or qcow2) inside that location and modify your VM config file to add it as another drive.
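For example (paths and size are just placeholders; this is the plain QEMU way, libvirt users would edit the domain XML instead):

qemu-img create -f raw /root/vms/ramdisk/scratch.img 16G

and then attach it on the QEMU command line with something like:

-drive file=/root/vms/ramdisk/scratch.img,format=raw,if=virtio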


Another alternative is to use RAM as a block device. It requires loading a rather obscure kernel module:

modprobe brd rd_nr=1 max_part=1 rd_size=20971520

This creates /dev/ram0, which you pass to the VM as if you were feeding it an entire HD (/dev/sda), a partition (/dev/sda2), or an LVM volume (/dev/mapper/vg0-lv0 or something like that).
The brd kernel module is rather inflexible since you can't individually create RAMDisks of different sizes; rd_nr only lets you create multiple RAMDisks of the same rd_size.
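A minimal sketch of the pass-through with plain QEMU (everything here is an example invocation, with the rest of the machine options trimmed):

qemu-system-x86_64 -enable-kvm -m 4G -drive file=/dev/ram0,format=raw,if=virtio

libvirt users would do the equivalent with a <disk type='block'> entry pointing at /dev/ram0 on a virtio bus.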


I never bothered to benchmark, but the raw block device should theoretically be faster. Yet another alternative would be to create a RAMDisk either way, but use some software like samba to expose those locations as shared directories. You would remove the overhead of the IDE/AHCI/SCSI/LightNVM/VirtIO block device controllers and push it onto the VirtIO NIC instead, and this might also omit a file system layer.
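The guest side of that samba route would look roughly like this, assuming the host exports the tmpfs mount (/root/vms/ramdisk) as a normal [ramshare] stanza in smb.conf; the share name, mount point and host address are placeholders, and I'm not claiming anything about how it benchmarks:

mount -t cifs //192.168.122.1/ramshare /mnt/ramshare -o guest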
 
  • Like
Reactions: nk215

Stux

Member
May 29, 2017
30
10
8
46
What is the expected use case for these? What can be small enough to fit in a RAMDisk, be random I/O intensive and require unlimited endurance
Sounds like you just described a ZFS SLOG device. Except it's write-almost-always.

The key with a ZFS log is that it needs to finish writing super-duper-quick and return.
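For reference, wiring one of these up as a SLOG would be the usual one-liner (pool name and device path are placeholders):

zpool add tank log /dev/nvme0n1

(or mirror two of them with "zpool add tank log mirror ..." if you care about losing in-flight sync writes).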
 
  • Like
Reactions: _alex and aero

capn_pineapple

Active Member
Aug 28, 2013
356
80
28
Sounds like you just described a ZFS SLOG device. Except it's write-almost-always.

The key with a ZFS log is that it needs to finish writing super-duper-quick and return.
Which, at very high networking speeds, is exactly what you want it for (as I originally stated); RAM latency is still at least an order of magnitude lower than Flash/Optane.

The only downside I see to this is that the payload size is severely limited (128 B or 256 B, configurable), but for ZIL/SLOG this is a fantastic solution.
 
  • Like
Reactions: Stux

psannz

Member
Jun 15, 2016
79
19
8
39
To be honest, I don't see 16 GB as big enough for ZFS, unless you reduce the flush time from the default 5 seconds to 2-3 s, or go with bigger module sizes.
As it stands, those 16 GB are maxed out after about 6 s of 2x10 Gbit writes.
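Back-of-the-envelope check, assuming raw line rate and no protocol overhead:

echo "scale=1; 16 / (2 * 10 / 8)" | bc    # 16 GB / 2.5 GB/s ~= 6.4 s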
 

i386

Well-Known Member
Mar 18, 2016
4,220
1,540
113
34
Germany
The question is how you use the SLOG... For all writes (sync=always)? For sync writes only (small IOPS)?
Depending on how you answer that question, 16 GB can be sufficient.
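For anyone following along, that is the sync property toggle (pool/dataset name is a placeholder):

zfs set sync=always tank/vmstore     # every write goes through the ZIL/SLOG
zfs set sync=standard tank/vmstore   # default: only explicit sync/O_SYNC writes do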