New Viking RAM-Drives

Discussion in 'Hard Drives and Solid State Drives' started by capn_pineapple, May 10, 2018.

  1. capn_pineapple

    capn_pineapple Active Member

    Joined:
    Aug 28, 2013
    Messages:
    356
    Likes Received:
    80
    #1
    Last edited: May 10, 2018
  2. i386

    i386 Well-Known Member

    Joined:
    Mar 18, 2016
    Messages:
    1,741
    Likes Received:
    433
    Sadly no numbers about performance, especially for 4K random reads or writes @ 1 QD / 1 thread :/
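    For reference, the kind of numbers I mean would come from something like this fio run (just a sketch; the device path is a placeholder, and randread is used so it doesn't destroy data):

fio --name=rand4k --filename=/dev/nvme0n1 --direct=1 --ioengine=libaio --rw=randread --bs=4k --iodepth=1 --numjobs=1 --runtime=60 --time_based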
     
    #2
  3. matthelm

    matthelm New Member

    Joined:
    Sep 19, 2013
    Messages:
    10
    Likes Received:
    0
    Or cost or shipping dates. :-(
     
    #3
  4. whitey

    whitey Moderator

    Joined:
    Jun 30, 2014
    Messages:
    2,770
    Likes Received:
    863
    I have my doubts that this will conquer the 'king of ZIL'...AKA Optane
     
    #4
  5. zir_blazer

    zir_blazer Active Member

    Joined:
    Dec 5, 2016
    Messages:
    233
    Likes Received:
    74
    What is the expected use case for these? What can be small enough to fit in a RAMDisk, be random-I/O intensive, and require unlimited endurance? I can only think of a small scratch disk, or a buffer for heavy logging before it gets copied sequentially to an HD or so. The whitepaper mentions logging and caching.
    RAMDisk performance is heavily capped if you access it via an interface that isn't the standard memory bus, so from a performance standpoint, a hardware RAMDisk limited by NVMe is far inferior to a software RAMDisk that uses system RAM. The only thing that makes these useful is when you need to treat them as persistent storage thanks to the battery. However, now they would be competing against Optane, which significantly increases random I/O performance compared to standard NAND, so the niche where the cost/size ratio and the inconvenience of volatility may justify these hardware RAMDisk devices is even smaller than before, and with RAM prices sky high... Why bother?

    In a virtualized environment, you can actually use a non-volatile software RAMDisk by creating it on the host and passing it to the VM as a raw block device behind a paravirtualized controller, so as long as you don't reboot the host, you can reboot the guest and still have the data in the RAMDisk. However, the paravirtualized controller and the file system significantly hurt performance. I still haven't figured out whether you can use some direct-I/O RAMDisk type (which avoids the file system overhead), but it seems that would require creating it inside the VM with standard RAMDisk software...
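    As a rough sketch of what I mean on a libvirt/QEMU-KVM setup (the VM name and paths are made up, and the backing file on the host RAMDisk has to exist first), you can hot-attach it as a raw virtio disk:

virsh attach-disk myguest /root/vms/ramdisk/scratch.img vdb --driver qemu --subdriver raw --targetbus virtio --persistent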
     
    #5
  6. nk215

    nk215 Active Member

    Joined:
    Oct 6, 2015
    Messages:
    313
    Likes Received:
    92
    Can you point me to a way to create a ramdisk on an ESXi host and use that ramdisk to host a datastore? I don't think it's possible on ESXi. The workaround is to create a ramdisk in a VM and then serve it back to the host to put a datastore on, but that creates a bunch of unneeded traffic.

    This is all for experimentation only. Right now I'm happy with a ramdisk inside the guest, used for the guest only.
     
    #6
  7. zir_blazer

    zir_blazer Active Member

    Joined:
    Dec 5, 2016
    Messages:
    233
    Likes Received:
    74
    I don't use ESXi and know little about it. I use QEMU-KVM with an Arch Linux host. There are two ways to create a RAMDisk in Linux.

    The best-known one is to use the Linux tmpfs file system (there is another one called ramfs, but tmpfs is supposedly better):

    mount -t tmpfs -o size=20G tmpfs /root/vms/ramdisk

    The directory has to exist beforehand.
    You then need to create a VM image file (think raw or qcow2) inside that location and modify your VM config file to add it as another drive.
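    A minimal sketch of that step (file names and sizes are just examples, and the rest of the QEMU command line is omitted):

qemu-img create -f raw /root/vms/ramdisk/scratch.img 16G

qemu-system-x86_64 ... -drive file=/root/vms/ramdisk/scratch.img,format=raw,if=virtio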


    Another alternative is to use RAM as a block device. It requires loading a rather obscure kernel module:

    modprobe brd rd_nr=1 max_part=1 rd_size=20971520

    This creates /dev/ram0 (rd_size is in KiB, so 20971520 = 20 GiB), which you pass to the VM as if you were feeding it an entire HD (/dev/sda), a partition (/dev/sda2), or an LVM volume (/dev/mapper/vg0-lv0 or something like that).
    The brd kernel module is rather inflexible since you can't create individual RAMDisks of different sizes; rd_nr only lets you create multiple RAMDisks, all of size rd_size.
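    As a sketch (again with the rest of the QEMU command line omitted), handing it to the guest as a raw virtio drive looks like:

qemu-system-x86_64 ... -drive file=/dev/ram0,format=raw,if=virtio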


    I never bothered to benchmark, but the raw block device should theoretically be faster. Yet another alternative would be to create a RAMDisk with either method, but use some software like Samba to expose those locations as shared directories. You would remove the overhead of the IDE/AHCI/SCSI/LightNVM/VirtIO block device controllers and push it onto the VirtIO NIC instead. But this maybe skips a file system layer.
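    Rough sketch of the guest side of that idea (the Samba share would have to be set up on the host first; the host IP, share name, credentials and mount point are all made up):

mount -t cifs //192.168.122.1/ramdisk /mnt/ramdisk -o username=guest,password=secret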
     
    #7
    nk215 likes this.
  8. Stux

    Stux Member

    Joined:
    May 29, 2017
    Messages:
    30
    Likes Received:
    10
    Sounds like you just described a ZFS SLOG device. Except it's write-almost-always.

    The key with a ZFS log is that it needs to finish writing super-duper-quick and return.
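    For anyone following along, attaching one as a SLOG would be something like this (pool name and device path are just placeholders):

zpool add tank log /dev/disk/by-id/nvme-viking-nvram0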
     
    #8
    _alex and aero like this.
  9. capn_pineapple

    capn_pineapple Active Member

    Joined:
    Aug 28, 2013
    Messages:
    356
    Likes Received:
    80
    Which at very high networking speeds is exactly what you want it for (as I originally stated); RAM latency is still at least an order of magnitude lower than Flash/Optane.

    The only downside I see to this is that the payload size is severely limited (configurable to 128B or 256B), but for ZIL/SLOG this is a fantastic solution.
     
    #9
    Stux likes this.
  10. psannz

    psannz Member

    Joined:
    Jun 15, 2016
    Messages:
    43
    Likes Received:
    9
    To be honest, I don't see 16GB as big enough for ZFS, unless you reduced the flush time from the default 5 seconds to 2-3s, or went with bigger module sizes.
    As it stands, those 16GB are maxed out after about 6s of 2x10Gbit writes (20 Gbit/s is roughly 2.5 GB/s, so 16GB fill in about 6.4s).
     
    #10
  11. i386

    i386 Well-Known Member

    Joined:
    Mar 18, 2016
    Messages:
    1,741
    Likes Received:
    433
    The question is how you use the SLOG... For all writes (sync=always)? For sync writes only (small IOPS)?
    Depending on how you answer that question, the 16GB can be sufficient.
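    As a sketch (the dataset name is made up), that behavior is set per dataset: sync=always pushes every write through the ZIL/SLOG, while the default sync=standard only honors writes the application explicitly requests as synchronous.

zfs set sync=always tank/vmstore
zfs set sync=standard tank/vmstore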
     
    #11