Got an Optane and want to get the most out of it

Discussion in 'NAS Systems and Networked Home and SMB Software' started by bateau, Jun 5, 2018.

  1. bateau

    bateau Member

    Joined:
    Jan 22, 2017
    Messages:
    38
    Likes Received:
    9
    Hi,
    for no other purpose than seeing how far I can go, I decided to build a fast NAS, mainly for a sequential-write situation (backup, 'archive' file server). I just want to see how fast I can do my backups. This means I am looking for configurations where I can use fast storage as a big write-back cache. And I am willing to sacrifice protection against power cuts a little, as my data is not hugely critical and I have an elaborate UPS setup. I can accept bad files after a power cut, just not bad file systems. I am willing to spend some time setting it up, but after that it must be set-and-forget.

    This is what I have: a couple of Xeon D servers with ESXi (6.5U2 atm), Optane 900P, various Samsung 960 Pro M.2 NVMe drives, a SATA HBA capable of pass-through, some SATA SSDs (Intel S3610), some large spinning drives, and a 10G network of course via some Cisco switches.
    The NAS should run as a VM, use the spinning drives as bulk storage, and get the most out of the various flash components I have.

    I like the idea of ZFS's ZIL/SLOG, but the repeated reports of it trading performance for safety put me off a bit, and benchmarks I ran confirmed that.
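    For reference, the kind of benchmark I mean is a sync-write test, since that is the path a SLOG has to absorb. A hypothetical fio run (paths and sizes are placeholders; I use the sync engine because O_DIRECT support on ZFS varies by version):

    ```shell
    # Sequential 1M writes with an fsync after each, to measure
    # sync-write throughput on the pool; adjust --directory and --size.
    fio --name=syncwrite --directory=/tank/test --size=4G \
        --rw=write --bs=1M --ioengine=sync --fsync=1
    ```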
    I then tried various write-back cache strategies on a standard Linux NAS distribution (OpenMediaVault), but they all failed, as OMV does not seem to tolerate outside interference with its array configuration. And every time I look, the write-back caching options seem a bit dated and not always well integrated.
    And I hear good stories about Win2k16. Haven't tried that yet.

    Optane does fly as a local drive on a recent Linux kernel (4.15+), once you pick the correct I/O scheduler. Older kernels lag a bit.
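    Concretely, the scheduler bit looks like this (the device name nvme0n1 is a placeholder for the Optane):

    ```shell
    # Show the available schedulers and the active one (in brackets):
    cat /sys/block/nvme0n1/queue/scheduler
    # On blk-mq kernels (4.15+), 'none' generally performs best for Optane:
    echo none > /sys/block/nvme0n1/queue/scheduler
    ```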

    My question: any recommendations on how to get a write-back cache working on Linux on a recent kernel, one that is stable and capable of profiting from an Optane? Or should I try other directions?
     
    #1
  2. Rand__

    Rand__ Well-Known Member

    Joined:
    Mar 6, 2014
    Messages:
    2,362
    Likes Received:
    294
    Do you have a performance goal? And what kind of backups are we talking about? Are the source drives able to provide enough transfer speed?
     
    #2
  3. bateau

    bateau Member

    Joined:
    Jan 22, 2017
    Messages:
    38
    Likes Received:
    9
    Goal? Not really, other than that I would like to fill 10 Gbps, but I know my Xeon D processors will probably have some trouble with that.
    The backups that need speed are VM images and 'regular' Linux and BSD directory dumps. Right now that is mostly up to 200 GB per burst. Having had bad experiences with VM-snapshot-based backups (clusters and distributed databases hate them, and that is exactly what I run), I prefer scripted full dumps that sometimes introduce downtime, hence the desire to keep that copy time/downtime as short as possible.
    The source drives are fast enough, all being NVMe (mostly Samsung 960 Pro right now), but they are rather full and not resilient enough to use as a regular staging area for full images before copying to a slower NAS.
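    Some back-of-envelope arithmetic on what those burst sizes mean for downtime (a sketch; the 500 MB/s figure, roughly 4 Gbps, is just an example of a spinner-pool write rate):

    ```python
    # Time to move a backup burst at a given link/disk speed.
    def transfer_time_s(size_gb: float, rate_gbps: float) -> float:
        """size in gigabytes, rate in gigabits per second."""
        return size_gb * 8 / rate_gbps

    # 200 GB over a saturated 10 Gbps link:
    print(round(transfer_time_s(200, 10)))  # 160 seconds
    # The same burst at ~500 MB/s (about 4 Gbps) to the target pool:
    print(round(transfer_time_s(200, 4)))   # 400 seconds
    ```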
     
    #3
  4. Rand__

    Rand__ Well-Known Member

    Joined:
    Mar 6, 2014
    Messages:
    2,362
    Likes Received:
    294
    I personally would recommend a ZFS filer regardless of your bad benchmarks...
    1. You should be able to get 500 MB/s+ with an Optane SLOG and spinners, which should be OK for backup.
    2. I have not found a solution for sharing out NVMe speed over the network. Although, truth be told, I usually looked at resilient HA storage and not at single-host boxes, so I am not an expert in that particular area.

    3. You can run the ZFS filer via virtual disk (with a 16 or 32 GB slice as SLOG) and then still have a go at optimizing another setup while having a viable interim solution.
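    A minimal sketch of such a setup, assuming ZFS on Linux; every device name and the pool name "tank" are placeholders:

    ```shell
    # Carve a small SLOG partition on the Optane (16 GB is plenty):
    parted -s /dev/nvme0n1 mklabel gpt mkpart slog 1MiB 16GiB

    # Pool of mirrored spinners, with the Optane partition as SLOG:
    zpool create tank mirror /dev/sda /dev/sdb
    zpool add tank log /dev/nvme0n1p1

    # Optionally force all writes through the SLOG for the backup dataset
    # (by default only sync writes, e.g. from NFS or ESXi, use it):
    zfs create tank/backups
    zfs set sync=always tank/backups
    ```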
     
    #4
    Last edited: Jun 6, 2018
  5. _alex

    _alex Active Member

    Joined:
    Jan 28, 2016
    Messages:
    836
    Likes Received:
    83
    For sequential workloads, most caching solutions will simply skip the cache and write straight to the spinners unless configured otherwise.
    But there is a reason for those defaults: a sufficient number of spinners will be faster than the cache device, and even when they are not, the cache must eventually be flushed to disk, where the spinners become the bottleneck again.

    You can have a look at bcache, EnhanceIO and similar projects for Linux.
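    For bcache specifically, a sketch of the sequential-bypass default mentioned above (device names are placeholders; creating backing and cache device in one command attaches them automatically):

    ```shell
    # /dev/md0 = spinner array (backing), /dev/nvme0n1p2 = Optane partition (cache)
    make-bcache -B /dev/md0 -C /dev/nvme0n1p2

    echo writeback > /sys/block/bcache0/bcache/cache_mode
    # bcache bypasses sequential I/O by default; set the cutoff to 0
    # to cache sequential writes as well:
    echo 0 > /sys/block/bcache0/bcache/sequential_cutoff
    ```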

    So, if you want to get the most out of your Optane, use it for something other than caching sequential workloads ;)
     
    #5