New build on barebone hardware

Discussion in 'Solaris, Nexenta, OpenIndiana, and napp-it' started by nerdalertdk, Jan 20, 2019.

  1. nerdalertdk

    nerdalertdk Member

    Joined:
    Mar 9, 2017
    Messages:
    53
    Likes Received:
    7
    Hi

    I'm in the process of building my new VM "SAN".
    The hardware is an HPE DL20 Gen9 with 16GB RAM.
    Disks:

    Main storage is a 2TB NVMe (all VMs are going to live there; the NVMe has power-loss protection),
    plus 2 x 4TB SATA for backups, and 10G links to the hosts.

    I'm going to use NFS as the link to my VM hosts.

    I have some questions about napp-it / ZFS (first use of ZFS):
    Does it make sense to use napp-it for the host OS?

    I want to be able to use snapshots on the NVMe storage and maybe sync them to the backup drives. Is this doable?
    With an NVMe as main storage, do I need ARC, L2ARC, ZIL, or a SLOG? (ZFS is new to me, so I don't know all its bits.)
     
    #1
  2. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    2,058
    Likes Received:
    660
    First of all, you should know that napp-it is not an OS or an OS distribution.

    It is a management application for building a web-managed ZFS storage appliance on top of a regular enterprise Unix, Oracle Solaris or one of its free forks such as OmniOS, with a simple online installer. For OmniOS there is a ready-to-use OVA template for ESXi to build a virtualized ZFS SAN or cluster within minutes.

    Oracle Solaris, where ZFS comes from and where you get the genuine ZFS, is still the fastest and most feature-rich ZFS server, but it is not free. OmniOS, a free Solaris fork, comes with Open-ZFS but retains most of the Solaris advantages, like best-of-all ZFS integration, kernel-based NFS and SMB, and Comstar, the enterprise-class iSCSI stack; see OmniOS Community Edition.

    ZFS is the current filesystem with the best data security. If you want performance out of it, you use RAM as a read/write cache. If you need more read cache than you can add as RAM, you can use an SSD as L2ARC to extend the read cache. This is slower than RAM.
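    As a sketch of how an L2ARC is attached: assuming a pool named tank and an SSD at c2t0d0 (both placeholder names; check your own system for the real ones), it would look like this:

    ```shell
    # Add an SSD as L2ARC (extra read cache) to an existing pool.
    # "tank" and "c2t0d0" are hypothetical names - use `format` or
    # `zpool status` to find the real pool/device names.
    zpool add tank cache c2t0d0

    # Verify the cache device now appears under the pool
    zpool status tank

    # Inspect ARC/L2ARC statistics on illumos/OmniOS
    kstat -m zfs -n arcstats | grep -i l2
    ```

    An L2ARC can be removed again at any time with `zpool remove`, so it is a low-risk experiment.
    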

    On a crash, the content of the RAM-based write cache is lost. Besides the data loss, this can corrupt the filesystem of a VM. You can activate sync write to avoid this problem, but write performance can go down dramatically. An Slog can reduce this performance degradation.

    For VM storage you want sync enabled, and then you want an Slog for performance reasons. Prefer a small SSD for the OS, a pool of SSDs/NVMes for the VMs, and a disk-based pool for filer/backup.
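    As a minimal sketch of that setup, assuming a pool named tank with a VM filesystem tank/vms shared over NFS (all names hypothetical):

    ```shell
    # Force sync writes on the VM filesystem so every write is
    # committed safely; the default "standard" only honors
    # sync requests made by the client.
    zfs set sync=always tank/vms

    # Add a separate Slog device (e.g. an Optane) to the pool;
    # "c3t0d0" is a placeholder device name.
    zpool add tank log c3t0d0

    # A mirrored Slog would instead be:
    # zpool add tank log mirror c3t0d0 c4t0d0
    ```

    With sync=always and no Slog, logging falls back to the on-pool ZIL, which is safe but slower.
    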

    Features, see http://www.napp-it.org/doc/downloads/featuresheet.pdf
    Performance, see https://napp-it.org/doc/downloads/optane_slog_pool_performane.pdf
     
    #2
    Last edited: Jan 20, 2019
  3. nerdalertdk

    nerdalertdk Member

    Joined:
    Mar 9, 2017
    Messages:
    53
    Likes Received:
    7
    Hi

    Thanks for your answer. I am aware it's not an OS; the plan was to use the ISO from the napp-it site.

    Since I'm space limited (the server is only 1U):

    Can the Slog be on the same NVMe as the VM data?

    The specs for my NVMe are about 6000MB/s read and 2200MB/s write.
     
    #3
  4. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    2,058
    Likes Received:
    660
    An Slog on the same disk as the data makes no sense.

    If you simply enable sync, on-pool logging is the default. Cache protection / sync logging is then done on a device called the ZIL. It is called an Slog when a separate disk is involved.

    Relevant specs for a good Slog (or a data disk well suited for sync write with an on-pool ZIL):
    - power-loss protection
    - low latency
    - high steady write IOPS (4k)

    One of the best NVMes (Intel Optane) has 10µs latency and 500k write IOPS (4k).
    One of the best SSDs (WD Ultrastar DC SS530) is at around 26µs latency and 320k write IOPS (4k).

    Sequential read/write is irrelevant.
     
    #4
  5. nerdalertdk

    nerdalertdk Member

    Joined:
    Mar 9, 2017
    Messages:
    53
    Likes Received:
    7
    Like I said, this is my first time with ZFS, so I really don't know all the terms. I've always been a hardware kind of guy :)

    My new server doesn't have that much room for storage (one PCIe 3.0 x8 slot and two SATA 6G ports).
    I was hoping I could get by with a fast NVMe with 1-2TB of storage.

    So what you are saying is that I need to find an NVMe with high steady write IOPS? (Optane is too expensive, and I don't have the space.)

    The one I have been looking at only does 75,000 IOPS 4KB random write, but it does have a latency of 20µs.


    I'm guessing this NVMe would be a better choice:

    Max read 3174MB/s, max write 2099MB/s
    Max IOPS (4k): read 770,000, write 480,000
    Stable IOPS (4k): read 770,000, write 220,000
     
    #5
  6. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    2,058
    Likes Received:
    660
    Storage is defined by performance, capacity, and price.
    Usually you must find a compromise: fix one aspect and optimize for the other two.

    What is your use case, capacity need, and price range?
     
    #6
  7. nerdalertdk

    nerdalertdk Member

    Joined:
    Mar 9, 2017
    Messages:
    53
    Likes Received:
    7
    My use is a homelab running Docker, Kubernetes, and a couple of VMs.

    Needed space is around 1-2TB.
    Price range is sub-$500.

    I have been thinking about the Optane drives.

    What I could do is buy a 4 x NVMe adapter, put 2 x 1-2TB NVMe drives in a mirror, and two Optanes mirrored as well.

    That would give me the Optane Slog power and keep the price down on the NVMe (I think).
     
    #7
  8. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    2,058
    Likes Received:
    660
    - Build a mirror from two SSDs or NVMes, or a Raid-10 (performance scales with the number of mirrors)
    - Add an Optane Slog (800P or 900P) for secure sync writes
    The required size for an Slog is only 20GB. The 800P and 900P are quite similar regarding performance; the 900P allows more writes.

    - You do not need to mirror the Slog.
    - Adding a separate pool from disks for filer/backup is fine.
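    As a sketch of that layout, and of the snapshot-to-backup sync asked about earlier, assuming pools named tank (NVMe mirror) and backup (SATA mirror) with placeholder device and snapshot names:

    ```shell
    # Mirrored NVMe pool for VMs plus an Optane Slog
    # (all device names are hypothetical)
    zpool create tank mirror c1t0d0 c1t1d0
    zpool add tank log c3t0d0

    # Mirrored SATA pool for filer/backup
    zpool create backup mirror c4t0d0 c4t1d0

    # Snapshot the VM filesystem and replicate it to the backup pool
    zfs snapshot tank/vms@2019-01-20
    zfs send tank/vms@2019-01-20 | zfs receive backup/vms

    # Later, send only the changes since the last common snapshot
    zfs snapshot tank/vms@2019-01-21
    zfs send -i tank/vms@2019-01-20 tank/vms@2019-01-21 | zfs receive backup/vms
    ```

    napp-it can schedule the snapshot and replication jobs from its web UI, so the send/receive pipeline above is only what happens underneath.
    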
     
    #8
    nerdalertdk likes this.