current king of slog devices?

Discussion in 'Hard Drives and Solid State Drives' started by Rand__, Jan 30, 2019.

?

best slog Q1/2019?

  1. Optane 4800x

    10 vote(s)
    62.5%
  2. Flashtec

    0 vote(s)
    0.0%
  3. NvDimm-n

    4 vote(s)
    25.0%
  4. others (pls comment)

    2 vote(s)
    12.5%
  5. NvDimm-P

    0 vote(s)
    0.0%
  1. Rand__

    Rand__ Well-Known Member

    Joined:
    Mar 6, 2014
    Messages:
    3,592
    Likes Received:
    544
    Hi,

    Does anyone know which is the current king of slog speed?

The Flashtec is rather small (16GB max, I think), which might be on the low side for 40+ gigabit networking, and it's also rather old (2015).
     
    #1
  2. BackupProphet

    BackupProphet Well-Known Member

    Joined:
    Jul 2, 2014
    Messages:
    786
    Likes Received:
    278
Optane; depending on your use case the 32GB module may be enough. For heavy use, go for a 900P or better.
     
    #2
  3. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    2,273
    Likes Received:
    752
With NVMe:
Optane 4800 if you need guaranteed powerloss protection,
Optane 900/905 if probably-very-good behaviour on a powerloss is sufficient.

If you want hotplug (dual-path 12G SAS, best for clusters or an ESXi AiO where Optane can cause problems):
WD Ultrastar SS 530 (10 DWPD, 400GB model, 320k 4k write IOPS)
     
    #3
  4. BackupProphet

    BackupProphet Well-Known Member

    Joined:
    Jul 2, 2014
    Messages:
    786
    Likes Received:
    278
    Optane doesn't have powerloss protection, because it doesn't need it!
     
    #4
  5. Rand__

    Rand__ Well-Known Member

    Joined:
    Mar 6, 2014
    Messages:
    3,592
    Likes Received:
    544
Is Optane faster than Flashtec or NVDIMM-N?
From the reviews I have seen, those should exceed 1GB/s while Optane will only go up to 800-900MB/s.

O/c not looking at the price/perf ratio, only ultimate performance for now :)
     
    #5
  6. Patrick

    Patrick Administrator
    Staff Member

    Joined:
    Dec 21, 2010
    Messages:
    11,576
    Likes Received:
    4,521
NVDIMM-N would be my sense. DRAM is still faster. On the flip side, NVDIMM-N is not supported on all platforms, while NVMe support is generally good.
     
    #6
    Patriot and T_Minus like this.
  7. Rand__

    Rand__ Well-Known Member

    Joined:
    Mar 6, 2014
    Messages:
    3,592
    Likes Received:
    544
Is it not?
I know -P is not yet, but I thought -N was fairly standard. Or do you mean OS support?
     
    #7
  8. marcoi

    marcoi Well-Known Member

    Joined:
    Apr 6, 2013
    Messages:
    1,334
    Likes Received:
    205
I would love to see a test between NVDIMM-N and Optane as SLOG in FreeNAS.
When I eventually upgrade my home servers, I want to get a few NVDIMM-N modules. I believe the ESXi host is already built to support it. My plan was to pass some NVDIMM-N through to a FreeNAS VM as a storage device and use it as SLOG. The only thing was that the server as priced out from Dell was around $20k, and I wasn't sure it would even work.
     
    #8
  9. EffrafaxOfWug

    EffrafaxOfWug Radioactive Member

    Joined:
    Feb 12, 2015
    Messages:
    1,080
    Likes Received:
    356
My take on NVDIMM for home devices (which in my lexicon means <dozen small-scale servers, not those of you with a microgoogle in your cellars ;)) is that you'd generally want to populate the memory slots with actual memory before using something like an NVDIMM, whereas PCIe lanes for NVMe devices are already commonplace and only likely to get more so as time goes on. As Patrick mentions, hardware and OS support for NVDIMM is still in the teething stages.

If we're talking SLOG and a single Optane device tops out at 800MB/s for sync writes, then TTBOMK you could just add another SLOG device and ZFS would use both (effectively "striped" in non-technical terms), and you'd then get at least 1.5GB/s sync write performance (haven't tested this myself or read up closely on it, so please do correct me if I'm wrong).

That said, I have difficulty imagining a workload at home that'd require sustained IO performance north of 1GB/s, let alone 4GB/s, but like I say I'm small scale - my only Optane device currently in use is a comparatively titchy 64GB M10. My £0.02, YMMV, warranty void in the event of me not being a wizard, etc.
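For anyone wanting to try the multi-SLOG idea above, a minimal sketch (pool name `tank` and the NVMe device paths are hypothetical placeholders):

```shell
# Hypothetical pool and device names; adding several log vdevs lets ZFS
# distribute sync writes across them (roughly "striped" behaviour).
zpool add tank log /dev/nvme0n1 /dev/nvme1n1

# Verify both devices appear under the "logs" section of the pool layout.
zpool status tank
```

If you want redundancy instead of raw throughput, `zpool add tank log mirror /dev/nvme0n1 /dev/nvme1n1` creates a mirrored log vdev instead.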
     
    #9
    Last edited: Jan 30, 2019
  10. i386

    i386 Well-Known Member

    Joined:
    Mar 18, 2016
    Messages:
    1,683
    Likes Received:
    412
NVDIMM (DRAM based) > Flashtec (DMA mode, ~10M IOPS, DRAM based) > NVDIMM (3D XPoint) > Flashtec (NVMe mode) > 3D XPoint (NVMe) > NVMe SSD > SAS SSD > SATA SSD > SAS HDD > SATA HDD :D
     
    #10
    Stephan, psannz and Rand__ like this.
  11. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    2,273
    Likes Received:
    752
Tell this to the marketing/sales or the technical staff at Intel.
One of them does not believe it and will not confirm it.

No need to discuss that RAM is faster than flash.
But the "King of Slog" must care about availability and the relation of price vs performance.

A DRAM-based Slog attached over 12G SAS or even SATA would be faster than flash over NVMe.
Slog performance mainly depends on latency and steady write IOPS. The interface is not so relevant (no need to discuss that DRAM via the DRAM interface is fastest). The ZeusRAM was DRAM over SAS, but with years-old technology.
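Since queue-depth-1 sync-write latency is the deciding metric, a quick way to compare candidate Slog devices is a fio test like the sketch below (the device path is a hypothetical placeholder, and the test destroys the device's contents):

```shell
# QD1 4k sync writes approximate the ZIL write pattern; judge candidates
# by the completion-latency (clat) percentiles, not raw bandwidth.
fio --name=slog-latency --filename=/dev/nvme0n1 \
    --rw=write --bs=4k --iodepth=1 --numjobs=1 \
    --direct=1 --sync=1 --runtime=30 --time_based
```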
     
    #11
    Last edited: Jan 30, 2019
    T_Minus likes this.
  12. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    2,273
    Likes Received:
    752
The first "King of Slog" was the SSD in general, with the ZeusRAM for those with money.
Then we saw the Intel DC S3700, with the P3700 for those with money.
Now we see the Optane 900, with the 4800 for those with money.

RAM-based options have been exotic until now.
12G SAS-based ones are for those who need clustering or hotplug.
     
    #12
  13. Rand__

    Rand__ Well-Known Member

    Joined:
    Mar 6, 2014
    Messages:
    3,592
    Likes Received:
    544
Picked up a bunch of NVDIMMs over the Flashtec... now I need to find some cheap 'PowerGems' to go with them ;) Then we will see how the Optane holds up...
     
    #13
  14. oxynazin

    oxynazin New Member

    Joined:
    Dec 10, 2018
    Messages:
    29
    Likes Received:
    11
Can I just add 4x16GB NVDIMMs for SLOG? From the NVDIMM Cookbook (SNIA): NVDIMMs and normal DIMMs may be mixed in the same channel. But how will it appear in Linux? As a separate pmem device? Does anybody have experience with NVDIMMs?
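For reference, on a platform whose BIOS exposes the NVDIMMs, Linux's nvdimm subsystem does present them as pmem block devices, managed with ndctl. A sketch (namespace names and resulting device paths vary per system):

```shell
# List detected NVDIMMs and the regions the BIOS advertised
ndctl list --dimms --regions

# Create a namespace in sector mode, which yields a block device
# with power-fail-safe sector atomicity - suitable for use as a SLOG
ndctl create-namespace --mode sector

# The resulting /dev/pmem* device can then be handed to ZFS as a log vdev
ls /dev/pmem*
```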
     
    #14
  15. Rand__

    Rand__ Well-Known Member

    Joined:
    Mar 6, 2014
    Messages:
    3,592
    Likes Received:
    544
You will need OS support to manage the pmem device... also potentially a BBU.
In 4 weeks I might have more details ;)
     
    #15
  16. oxynazin

    oxynazin New Member

    Joined:
    Dec 10, 2018
    Messages:
    29
    Likes Received:
    11
    Great, will wait :)
As I understood (after some quick googling about the tech), for NVDIMM you need not only OS support (which Linux has) but also BIOS support, and maybe (not sure) CPU support.
Btw, what is the model of your NVDIMMs? I see 16GB Microns on eBay, and my motherboard should have support (per its specs, X11DPG-QT), but I cannot find PowerGEMs and cables for them, so at this point I will save some bucks :)
     
    #16
  17. Rand__

    Rand__ Well-Known Member

    Joined:
    Mar 6, 2014
    Messages:
    3,592
    Likes Received:
    544
True. For Supermicro, I have been told all X10 and newer platforms should have support, btw.
And I'm not sure whether you actually need PowerGems (or a backup power source) to run those. They will likely work as regular memory, but o/c can't work as a persistent device. I assume they will not work properly then (as they communicate with the OS).
     
    #17
  18. Rand__

    Rand__ Well-Known Member

    Joined:
    Mar 6, 2014
    Messages:
    3,592
    Likes Received:
    544
Just as an update -
without a backup power source they only work as a regular DIMM module.
     
    #18
  19. Terry Kennedy

    Terry Kennedy Well-Known Member

    Joined:
    Jun 25, 2015
    Messages:
    1,021
    Likes Received:
    474
Indeed. But they also have a vested interest in having enterprise customers pay a premium for the feature. See more below.
Based on the technology (no need to erase whole blocks, do garbage collection, or keep block pointer tables updated), the window of time between when the host sends data to the SLOG and when the SLOG commits it to persistent storage will be quite a bit shorter than for traditional flash. With redundant power supplies (920SQ) and a 6kW UPS, I don't think there is much risk in my application. If I were a company handling people's banking records I'd buy the enterprise version, though. But for what I'm doing, the 280GB 900P (SSDPED1D280GASX) is just fine.
     
    #19
  20. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    2,273
    Likes Received:
    752
This is what I do in my own setups (declare the 900P uncritical and good enough for sync writes), but on a critical production setup you need Intel's guarantee that the Slog is powerloss protected.
     
    #20