CX3 iSER

Discussion in 'VMware, VirtualBox, Citrix' started by zxv, Mar 25, 2019.

  1. efschu2

    efschu2 Member

    Joined:
    Feb 14, 2019
    Messages:
    68
    Likes Received:
    9
    I would like to use scst (actually I would like to use LIO...), but I haven't found any good OCF heartbeat resource script for it (one that works with Ubuntu out of the box without constant debugging), so I'm left with tgt, I guess...
    Maybe I will write one myself, but I don't know when I'll have the time.
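    If I ever get to it, the rough shape would probably be something like the untested sketch below. Everything here is hypothetical: the scstadmin calls, the install path and the agent name would all need checking against a real pacemaker/corosync setup.

    Code:
    #!/bin/sh
    # Hypothetical OCF resource agent skeleton for scst (untested sketch).
    # Would live at /usr/lib/ocf/resource.d/custom/scst on both nodes.
    : ${OCF_ROOT:=/usr/lib/ocf}
    . ${OCF_ROOT}/lib/heartbeat/ocf-shellfuncs

    scst_start() {
        # load the saved target/LUN config once the failover pool is imported
        scstadmin -config /etc/scst.conf || return $OCF_ERR_GENERIC
        return $OCF_SUCCESS
    }

    scst_stop() {
        # tear down targets/LUNs before the pool gets exported to the other node
        scstadmin -clear_config -force -noprompt || return $OCF_ERR_GENERIC
        return $OCF_SUCCESS
    }

    scst_monitor() {
        # crude liveness check: is the scst sysfs tree present?
        [ -d /sys/kernel/scst_tgt ] && return $OCF_SUCCESS
        return $OCF_NOT_RUNNING
    }

    case $__OCF_ACTION in
        start)     scst_start;   exit $? ;;
        stop)      scst_stop;    exit $? ;;
        monitor)   scst_monitor; exit $? ;;
        meta-data) echo '<resource-agent name="scst"/>'  # placeholder, a real agent needs full OCF XML
                   exit $OCF_SUCCESS ;;
        *)         exit $OCF_ERR_UNIMPLEMENTED ;;
    esac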
     
    #21
  2. efschu2

    efschu2 Member

    Joined:
    Feb 14, 2019
    Messages:
    68
    Likes Received:
    9
    #22
    markpower28 likes this.
  3. zxv

    zxv The more I C, the less I see.

    Joined:
    Sep 10, 2017
    Messages:
    152
    Likes Received:
    46
    Wow, nice.

    I agree about netplan. Jumbo frames in particular are difficult to configure with it.
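    On paper it should just be an mtu: key on the interface in the netplan YAML, along the lines of the untested snippet below (file name, interface name and address are made up), but getting it applied consistently is where it gets fiddly.

    Code:
    # /etc/netplan/60-storage.yaml  (hypothetical file and interface names)
    network:
      version: 2
      ethernets:
        enp3s0:
          mtu: 9000
          addresses: [10.10.10.10/24]

    Then sudo netplan apply and check with ip link show enp3s0 that the MTU actually stuck.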

    The bandwidth tests show very smooth curves, which look very encouraging to me. I take that as an indication that flow control is working very consistently.
     
    #23
  4. zxv

    zxv The more I C, the less I see.

    Joined:
    Sep 10, 2017
    Messages:
    152
    Likes Received:
    46
    I wonder, what is the most cost-effective kind of zil/slog device for high availability failover?

    I'm not yet doing HA, partly because, by using kexec, reboots take around 20 seconds, which seems to be relatively trouble-free for both iSCSI and NFS ESXi clients. I'm interested in pursuing this, and how to implement the zil/slog is the one thing I'm struggling with.
     
    #24
  5. efschu2

    efschu2 Member

    Joined:
    Feb 14, 2019
    Messages:
    68
    Likes Received:
    9
    I think there exists a dual ported 100GB DC P4800X
     
    #25
  6. mpogr

    mpogr Active Member

    Joined:
    Jul 14, 2016
    Messages:
    113
    Likes Received:
    75
    @efschu2, about the bench results at the end: you have 3 ATTO graphs, what's the difference between them?
    Also, why did you insist on sync=always? It totally destroys write performance for me. Below are my graphs with sync=standard and sync=always for comparison.
     

    Attached Files:

    • 1.png (39.4 KB)
    • 2.png (38.7 KB)
    #26
  7. efschu2

    efschu2 Member

    Joined:
    Feb 14, 2019
    Messages:
    68
    Likes Received:
    9
    Single runs with qd1, qd10, qd256, and a parallel run of two VMs with qd256.
    If your data is important you MUST use sync=always, otherwise you lose whatever data has not yet been written to disk in case of a failure. Officially, ESXi and scst can bypass sync requests, but not every piece of software handles critical data correctly with sync/cache flushes. So if you know your software does that correctly, you can go with sync=standard; if you don't care about data integrity at all, you can go with sync=disabled.
    To get good performance with sync=always you need consistently high write IOPS, e.g. a pool of enterprise SSDs, or add a good SLOG device like an Optane.
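    For reference, adding such a device as a SLOG is a one-liner; pool and device names below are made up, and I would mirror it if the pool has to survive a SLOG device failure:

    Code:
    # hypothetical pool/device names
    zpool add tank log mirror /dev/disk/by-id/nvme-optane-a /dev/disk/by-id/nvme-optane-b
    zpool status tank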

    Btw:
    If you see increased IOPS with sync=standard and an ATTO bench using direct I/O, that indicates your sync requests are not being honored, so it effectively acts like sync=disabled. Read carefully about the following settings to be aware of what is going on with your data (a rough config sketch follows below the list):
    scst:
    write_through 1 vs 0
    nv_cache 1 vs 0
    fileio vs blockio
    zfs:
    sync always vs standard vs disabled
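    To make that concrete, here is a minimal sketch of what I mean, with made-up dataset/device names (double-check the scst.conf syntax against your scst version before copying anything):

    Code:
    # zfs side: treat every write as synchronous (hypothetical dataset name)
    zfs set sync=always tank/vm01

    # scst side, /etc/scst.conf excerpt: blockio on the zvol, nv_cache left at 0
    # so cache-flush/sync requests are not silently ignored
    HANDLER vdisk_blockio {
        DEVICE vm01 {
            filename /dev/zvol/tank/vm01
            nv_cache 0
        }
    }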

    Btw2:
    In my case sync=always is absolutely necessary: if node1 dies for some reason, node2 takes over the pool and exports the target/LUN, so my ESXi VMs do not even notice that their storage was offline for a short time, nor that data cached in RAM was never written to persistent storage. A running process (a database, for example) "thinks" all of its data made it to disk when it didn't, and that database would probably end up corrupted.

    If you don't plan on HA and your storage node dies, your VMs just crash, but that would not corrupt your data; the data loss would "only" be the writes since the last txg commit. This is exactly what I do in my homelab, but for sure it is a no-go for production.
     
    #27
    Last edited: Jul 12, 2019
  8. efschu2

    efschu2 Member

    Joined:
    Feb 14, 2019
    Messages:
    68
    Likes Received:
    9
    #28
  9. ASBai

    ASBai New Member

    Joined:
    Sep 23, 2019
    Messages:
    1
    Likes Received:
    0
    According to the ZFS docs I have seen:
    The standard behavior is enough for ANY serious software, such as nearly all modern databases and file systems, because any serious software expects data to be on disk *ONLY* after the corresponding fsync (POSIX) / FlushFileBuffers (Windows) call has returned.

    If a power failure or other failure occurs before fsync returns, the software will restore consistency through binlog rollback and similar techniques the next time it starts (whether or not that is on the same node).

    This also applies to NFS: an async NFS mount still provides the same reliability guarantees for calls such as fsync.

    So ZFS's sync=standard and NFS's async options provide exactly the same consistency guarantees as the file systems you use on a local disk (local disk writes also sit only in a memory buffer until fsync is called).

    Of course, in the event of a power loss you may lose the last transaction on the database or file system that has not yet been successfully committed (fsynced), but there will be no data corruption. This is by design and cannot be avoided even if sync mode is turned on. And no doubt, the database or file system does not assume a transaction has committed before the fsync call returns (it will not tell its user that the operation completed successfully before fsync returns).

    Therefore, forcing sync mode is effectively the same as completely disabling the write buffer of a local disk, which is completely unnecessary for almost all serious modern software.
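    To make the contract concrete at the command line (paths are made up, purely for illustration): data written to a datastore only counts as durable once an explicit flush has returned, exactly as on a local disk.

    Code:
    # may still sit only in RAM / an uncommitted txg when dd exits:
    dd if=payload.bin of=/mnt/datastore/payload.bin bs=1M
    # durable once dd returns, because conv=fsync issues fsync() on the output file:
    dd if=payload.bin of=/mnt/datastore/payload.bin bs=1M conv=fsync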
     
    #29
    Last edited: Sep 24, 2019