[Maybe a deal?] MicroSemi NV-1616 DRAM+Flash drives ($295 OBO)

Discussion in 'Great Deals' started by int0x2e, Sep 9, 2019.

  1. int0x2e

    int0x2e Member

    Joined:
    Dec 9, 2015
    Messages:
    48
    Likes Received:
    23
    Not 100% sure whether this is a good deal or how low the seller will go on a best offer, but I'm sharing it in case someone here can benefit.

    If I understood correctly, these are 16GB DRAM-based drives with a PCIe 3.0 x8 interface and an ultra-capacitor + mini-SSD for power-loss handling.
    I believe they would be nice as write caches (ZIL/SLOG), given that this page indicates they should support 10M IOPS in direct-memory mode and 1M IOPS in block mode. It seems it's up to you to configure a set of parameters that trade off latency against throughput by changing the block size, queue depth, etc. - they show two extremes: 5.1GB/s throughput at 320K IOPS with 773us latency, and 23MB/s throughput at 46K IOPS with 16us latency.
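
    For the ZIL/SLOG case, attaching one would be the usual ZFS procedure - just a rough sketch, assuming the card enumerates as /dev/nvd0 and your pool is called tank (both names are placeholders):

    Code:
    # attach the NVRAM card as a dedicated SLOG device
    zpool add tank log nvd0

    # verify the log vdev shows up in the pool layout
    zpool status tank

    # a log vdev can also be removed again later without data loss
    zpool remove tank nvd0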

    Drives:
    Microsemi NV-1616 EMC 16GB DDR MN NVRAM 1616 Card @ $295 OBO

    Capacitor modules:
    Microsemi SCM-F100 5.4V 100F @ $35
     
    #1
    Samir likes this.
  2. Rand__

    Rand__ Well-Known Member

    Joined:
    Mar 6, 2014
    Messages:
    3,591
    Likes Received:
    543
    Very nice.
    I picked one up to compare against NVDIMMs (non-Optane).
    I assume these will be slower than the NVDIMMs but potentially faster than a P4800X.
    They should pose fewer issues than NVDIMMs thanks to NVMe compatibility, so they might be a great solution for older boards...
     
    #2
  3. Rand__

    Rand__ Well-Known Member

    Joined:
    Mar 6, 2014
    Messages:
    3,591
    Likes Received:
    543
    So I finally received my drive, delayed courtesy of German customs...

    Faster than the P4800X, slower than NVDIMMs - I have not done any real-life tests yet, though.
    It was detected natively in FreeNAS as an NVMe device.

    Code:
    diskinfo -citwS /dev/nvd1
            512             # sectorsize
            16493051904     # mediasize in bytes (15G)
            32212992        # mediasize in sectors
            0               # stripesize
            0               # stripeoffset
            MTR_MLC_TS_16GB # Disk descr.
            0400001C3A46    # Disk ident.
            No              # TRIM/UNMAP support
            0               # Rotation rate in RPM
    
    I/O command overhead:
            time to read 10MB block      0.011780 sec       =    0.001 msec/sector
            time to read 20480 sectors   0.662114 sec       =    0.032 msec/sector
            calculated command overhead                     =    0.032 msec/sector
    
    Seek times:
            Full stroke:      250 iter in   0.014802 sec =    0.059 msec
            Half stroke:      250 iter in   0.015544 sec =    0.062 msec
            Quarter stroke:   500 iter in   0.026569 sec =    0.053 msec
            Short forward:    400 iter in   0.023112 sec =    0.058 msec
            Short backward:   400 iter in   0.020278 sec =    0.051 msec
            Seq outer:       2048 iter in   0.074929 sec =    0.037 msec
            Seq inner:       2048 iter in   0.086890 sec =    0.042 msec
    
    Transfer rates:
            outside:       102400 kbytes in   0.099942 sec =  1024594 kbytes/sec
            middle:        102400 kbytes in   0.093873 sec =  1090835 kbytes/sec
            inside:        102400 kbytes in   0.096388 sec =  1062373 kbytes/sec
    
    Asynchronous random reads:
            sectorsize:    798172 ops in    3.000061 sec =   266052 IOPS
            4 kbytes:      792316 ops in    3.000067 sec =   264099 IOPS
            32 kbytes:     461734 ops in    3.000803 sec =   153870 IOPS
            128 kbytes:    117465 ops in    3.002188 sec =    39126 IOPS
    
    Synchronous random writes:
             0.5 kbytes:    151.1 usec/IO =      3.2 Mbytes/s
               1 kbytes:    151.3 usec/IO =      6.5 Mbytes/s
               2 kbytes:    151.8 usec/IO =     12.9 Mbytes/s
               4 kbytes:    151.1 usec/IO =     25.9 Mbytes/s
               8 kbytes:    152.1 usec/IO =     51.4 Mbytes/s
              16 kbytes:    207.8 usec/IO =     75.2 Mbytes/s
              32 kbytes:    170.4 usec/IO =    183.4 Mbytes/s
              64 kbytes:    192.7 usec/IO =    324.3 Mbytes/s
             128 kbytes:    226.9 usec/IO =    550.9 Mbytes/s
             256 kbytes:    246.5 usec/IO =   1014.2 Mbytes/s
             512 kbytes:    298.6 usec/IO =   1674.3 Mbytes/s
            1024 kbytes:    370.8 usec/IO =   2696.9 Mbytes/s
            2048 kbytes:    559.3 usec/IO =   3576.2 Mbytes/s
            4096 kbytes:    963.8 usec/IO =   4150.1 Mbytes/s
            8192 kbytes:   1788.8 usec/IO =   4472.3 Mbytes/s
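
    If I get around to real-life tests, something like the following fio runs would mirror the latency vs. throughput trade-off int0x2e mentioned. Just a sketch - it assumes fio is installed and the card is still /dev/nvd1, and both runs write to the raw device, so they are destructive:

    Code:
    # latency-oriented: QD1 4k sync random writes, roughly what a SLOG sees
    fio --name=slog-lat --filename=/dev/nvd1 --ioengine=psync --sync=1 \
        --rw=randwrite --bs=4k --iodepth=1 --runtime=30 --time_based

    # throughput-oriented: large sequential writes at high queue depth
    fio --name=seq-bw --filename=/dev/nvd1 --ioengine=posixaio \
        --rw=write --bs=1m --iodepth=32 --runtime=30 --time_based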
    
     
    #3
    Samir likes this.
  4. Javik50k

    Javik50k Member

    Joined:
    Oct 11, 2018
    Messages:
    32
    Likes Received:
    19
    So, if I use the 1616 as a cache for an SSD subsystem, will it work okay under Windows? Also, can I boot from the cached drive (not the card itself)?
     
    #4
  5. Rand__

    Rand__ Well-Known Member

    Joined:
    Mar 6, 2014
    Messages:
    3,591
    Likes Received:
    543
    I think you have a misconception here - a cache drive doesn't work like an SSHD or a consumer Optane cache drive, regardless of the OS.
    You will have 2 separate devices (cache and storage); you just define one as the cache, and (depending on the OS) some or all writes/reads will go to it.

    Whether you can boot from the storage drive also depends on the OS: if it needs exclusive access to the storage drive (e.g. FreeNAS) then no; if you just add a block file on any drive (e.g. Starwind) then yes.
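
    On FreeNAS/ZFS, for example, the two roles stay visibly separate - a sketch with placeholder device names, not tied to any specific setup:

    Code:
    # the storage pool lives on its own disks; the NVRAM card is only a helper vdev
    zpool create tank mirror ada0 ada1
    zpool add tank log nvd0       # sync-write log (SLOG)
    zpool add tank cache ada2     # optional read cache (L2ARC)

    # the OS still boots from a separate boot device, not from this pool
    zpool status tank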
     
    #5
    Samir likes this.
  6. Javik50k

    Javik50k Member

    Joined:
    Oct 11, 2018
    Messages:
    32
    Likes Received:
    19
    Maybe you're right. I'll test it and see whether there are improvements - or not. The NVRAM card only costs 50 EUR, so it's no big loss either way. I'll just use it in my NAS then. I already have an SSD cache there (2x Intel 3700 100GB in RAID 0 via CacheCade 2.0 on an LSI 9361-8i), but an additional tier for fast small-block operations will be useful.
     
    #6
