Oracle Solaris 11.4

Discussion in 'Solaris, Nexenta, OpenIndiana, and napp-it' started by gea, Jan 3, 2018.

  1. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    1,966
    Likes Received:
    638
my fault (I am not a native speaker)
Oracle may be the first to add it outside Delphix.

But as Solaris already has the fastest resilvering, it may indeed be faster.
     
    #41
    gigatexal likes this.
  2. brutalizer

    brutalizer Member

    Joined:
    Jun 16, 2013
    Messages:
    54
    Likes Received:
    11
Hmmm... Ok, it seems that device removal targets a whole vdev? So you cannot shrink a raidz2 to use one less disk? You can only remove the whole vdev? We will see when more information is out.
     
    #42
  3. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    1,966
    Likes Received:
    638
Removing a disk from a Raid-Z or changing the Raid-Z level is the troublesome, complex and time-critical operation that nobody wants to implement. Removing a whole vdev or adding a disk to a vdev is less complicated and less critical. Both combined can give nearly the same result as a vdev shrink or a vdev conversion (Z1->Z2): http://open-zfs.org/w/images/6/68/RAIDZ_Expansion_v2.pdf
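As an illustration (pool and vdev names here are invented, and the exact steps depend on pool layout and free space), the remove-then-add approach on a pool of mirrors could look like this:

# zpool status tank (note the name of the vdev to remove, e.g. mirror-1)
# zpool remove tank mirror-1 (copies its allocated data onto the remaining vdevs)
# zpool add tank mirror c5t0d0 c5t1d0 (add a vdev back in the desired new layout)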
     
    #43
  4. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    1,966
    Likes Received:
    638
The new OmniOS is the first Open-ZFS storage distribution to include a vdev remove (pool shrink).
Oracle Solaris 11.4 also comes with this feature, but seemingly with fewer restrictions.

Open-ZFS, at least currently, lacks support for removing a basic or mirror vdev when a Raid-Z[1-3] vdev is part of the pool, for removing a Raid-Z[1-3] vdev at all, and for adding a Raid-Z[1-3] after e.g. a basic/mirror vdev has been removed, all of which limits its use cases. Support for Raid-Z[2-3] (but not Z1) is expected in Open-ZFS: Bug #7614: zfs device evacuation/removal - illumos gate - illumos.org

The Open-ZFS implementation, e.g. in OmniOS (the first to include this feature), also requires a re-mapping table with a continuous small RAM need/reservation and a small performance degradation. This is listed in the output of zpool status. A manual zfs remap can reduce this.
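To illustrate (pool and filesystem names are examples): per the Bug #7614 design, the remapping is visible and can be worked off like this:

# zpool status -v tank (shows the indirect vdev left behind by a removal)
# zfs remap tank/data (rewrites one filesystem's block pointers so mapping entries can become obsolete and be freed)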

It seems that Solaris 11.4 does not have these restrictions:
vdev removal, poolwide checkpoints/snaps or imp... | Oracle Community
     
    #44
    Last edited: Apr 12, 2018
    T_Minus likes this.
  5. dragonme

    dragonme Member

    Joined:
    Apr 12, 2016
    Messages:
    235
    Likes Received:
    24
To ME, this ZFS shrink, as currently implemented, is beyond worthless other than to un-do accidentally adding a single vdev somewhere.

As previously said.. mirrors and raidz are excluded.. how many run basic pools?

Additionally... ZFS's greatest feature is that EVERYTHING is checksummed (unless you specifically turn it off), so that no data operation can ever result in data corruption and every read is a checksum check on that data..

Well, as I read it.. this removal process is NOT CHECKSUMMED... no thanks...

From the link:

    This project allows top-level vdevs to be removed from the storage pool with “zpool remove”, reducing the total amount of storage in the pool. This operation copies all allocated regions of the device to be removed onto other devices, recording the mapping from old to new location. After the removal is complete, read and free operations to the removed (now “indirect”) vdev must be remapped and performed at the new location on disk. The indirect mapping table is kept in memory whenever the pool is loaded, so there is minimal performance overhead when doing operations on the indirect vdev.

    The size of the in-memory mapping table will be reduced when its entries become “obsolete” because they are no longer used by any block pointers in the pool. An entry becomes obsolete when all the blocks that use it are freed. An entry can also become obsolete when all the snapshots that reference it are deleted, and the block pointers that reference it have been “remapped” in all filesystems/zvols (and clones). Whenever an indirect block is written, all the block pointers in it will be “remapped” to their new (concrete) locations if possible. This process can be accelerated by using the “zfs remap” command to proactively rewrite all indirect blocks that reference indirect (removed) vdevs.

    Note that when a device is removed, we do not verify the checksum of the data that is copied. This makes the process much faster, but if it were used on redundant vdevs (i.e. mirror or raidz vdevs), it would be possible to copy the wrong data, when we have the correct data on e.g. the other side of the mirror. Therefore, mirror and raidz devices can not be removed.


That last paragraph should scare the shit out of you, provided I am reading it right and that is what he meant to say..

    Bug #7614: zfs device evacuation/removal - illumos gate - illumos.org
     
    #45
    gigatexal likes this.
  6. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    1,966
    Likes Received:
    638
While the current Open-ZFS implementation of vdev remove from Delphix already supports mirrors, I too see only limited use cases, due to it not working with Raid-Z in any form, its optimisation for performance rather than security, and the permanent RAM-based remapping. While it seems you can limit the last point with zfs remap filesystem (this is not pool-based but filesystem-based), the Solaris implementation seems superior.

For the Open-ZFS implementation there is still some work needed.
     
    #46
  7. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    1,966
    Likes Received:
    638
The Oracle Solaris 11.4 Beta Refresh is now available:
Solaris 11.4 Beta Refresh Now Available | Oracle Community

napp-it 18.06 dev (apr.18) supports this new beta:
- Disks > Location and Disks > Map support multiple mixed SAS2/3 HBAs
(you must recreate maps with 18.06)
- vdev remove (unlike Open-ZFS, all vdev types like basic, mirror and Raid-Z[1-3] are supported)
vdev remove is a new Solaris feature of ZFS pool v.44
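A minimal sketch of what this looks like on Solaris 11.4 (pool and vdev names are invented examples):

# zpool remove mypool raidz2-0 (evacuates the Raid-Z2 vdev, which Open-ZFS currently refuses)
# zpool status mypool (shows the progress of the removal)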
     
    #47
    Last edited: Apr 18, 2018
  8. brutalizer

    brutalizer Member

    Joined:
    Jun 16, 2013
    Messages:
    54
    Likes Received:
    11
Yes, it seems that the only use case is if you accidentally add a disk to a zpool. Then you can un-add it.
     
    #48
  9. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    1,966
    Likes Received:
    638
Unlike the current Open-ZFS implementation in OmniOS or OpenIndiana, the Solaris 11.4 zpool remove of a vdev with ZFS pool v.44 has no problems removing Raid-Zn vdevs or adding a Raid-Zn to a pool where a vdev was removed.
     
    #49
    Last edited: May 8, 2018
  10. Boris

    Boris Member

    Joined:
    May 16, 2015
    Messages:
    63
    Likes Received:
    11
I was impressed with gea's words regarding the 11.4 beta release, so I decided to move my home NAS from FreeNAS to Solaris 11.4.
Of course, this was not an urgent need; simply, sometimes everything gets boring. My job is not IT related, so it's akin to a fun hobby.

My old ZFS pool was encrypted by FreeNAS 11, so it would have been trouble to decrypt it and import it into Solaris (at least I think so), so I decided to back up all data to external HDDs and build a new RAIDZ2 on Solaris. I followed gea's instructions, added the beta repository and installed napp-it; the RAIDZ2 was built and encrypted from the napp-it GUI.

NAS hardware:
Supermicro X10SRM-F
E5-2620 V4
64GB ECC Reg
HBA Supermicro AOC-S2308L-L8e (the only thing I added between FreeNAS and Solaris)

So at this moment I have a RAIDZ2 pool of 8x3TB HDDs; Solaris is installed on a 20GB SSD with around 4GB left.
Right now the NAS is connected to the network switch with a single 1Gb link via the onboard i350 adapter.
ZFS encryption is enabled.

But I have performance issues over SMB from my Windows 10 computers. After the ZFS pool was built and shared, all 8TB of my data was uploaded over the LAN from my external HDDs. Everything went smoothly, with a sustained 115 MB/sec over the 1Gbit link for a few days.

But now if I copy something to the NAS I get something like this:
FileCopy.jpg
Or this:
FileCopy2.jpg

Both images show a ~4GB file copy over SMB to the NAS.

I do not see anything suspect in the Solaris Dashboard or the napp-it web GUI.

Meanwhile, if I open an MKV file from the NAS in my player, I also notice a ~5 sec freeze before the movie starts, and every rewind also causes a short freeze.

I don't understand where I should start searching..
Maybe anyone can give me some advice?
Thank you anyway.
     
    #50
  11. m4r1k

    m4r1k Member

    Joined:
    Nov 4, 2016
    Messages:
    39
    Likes Received:
    4
Keep an eye on this thread:
Solaris 11.4 Beta: ZFS sudden performance drop ... | Oracle Community
It seems current Solaris 11.4 has an I/O performance degradation. On napp-it, try running the benchmark and see the performance bypassing the SMB layer.

Given how many people recently got laid off in the Solaris team, 11.4 even when GA will need quite a few SRUs before one can consider it really stable.
Of course, if you have a full data backup it's a bit different, cause you've got the freedom w/o risk :)
     
    #51
    Boris likes this.
  12. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    1,966
    Likes Received:
    638
The current Solaris 11.4 beta3 comes with a new ZFS release to support vdev remove, so performance may not be optimized yet.

What you can do is identify the problem (see the command sketch below):
- Check the disk subsystem, e.g. with a napp-it Pool > Benchmark.
Have a look at disk iostat on all disks, to see whether the load is equal or one disk is weaker.

- Check the network via iperf3, e.g. against Windows.

- Check SMB with a video test application like AJA System Test (Drive Performance Stats You Can Trust).
Use 4k and RGB.

- Create an unencrypted filesystem and compare.
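A rough sketch of these checks from the command line (pool, filesystem and host names are placeholders, not from this thread):

Network: start the iperf3 server on the NAS, then test from the Windows side:
# iperf3 -s (on the NAS)
> iperf3.exe -c <nas-ip> (on the Windows client)

Disks: watch per-disk load during a benchmark or copy and look for one lagging member:
# zpool iostat -v tank 5

Encryption: create an unencrypted filesystem on the same pool (assuming the pool root itself is unencrypted) and compare copy speeds:
# zfs create -o encryption=off tank/plaintest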
     
    #52
    Boris likes this.
  13. Boris

    Boris Member

    Joined:
    May 16, 2015
    Messages:
    63
    Likes Received:
    11
Thank you for the reply, @m4r1k.

Thank you, @gea. I am trying to run the iozone 1g benchmark, but after I disabled caching it has taken a whole day and is still "running".

As I guess, there is no way to downgrade the ZFS version and install 11.3 until 11.4 gets healthy?
     
    #53
  14. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    1,966
    Likes Received:
    638
- If you disable caching, any ZFS will become extremely slow, as caching is what makes ZFS fast despite all the security features that are only possible at a performance price (more data due to checksums, double metadata, higher fragmentation due to CopyOnWrite). So disable it only for tests where you want to compare results without the influence of RAM caching. Under working conditions, never disable it.
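For such tests it is often enough to disable the read cache per filesystem instead of globally; a sketch (the filesystem name is an example):

# zfs set primarycache=metadata tank/test (bypass ARC data caching for this filesystem only)
Run the benchmark, then restore the default:
# zfs set primarycache=all tank/test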

- Solaris 11.4 beta3 comes with a newer ZFS version 44 that cannot be imported in Solaris 11.3.
     
    #54
  15. m4r1k

    m4r1k Member

    Joined:
    Nov 4, 2016
    Messages:
    39
    Likes Received:
    4
@Boris, you may want to try yesterday's 11.4 RC. Maybe it will fix the issue. Also, posting on the Orange forum might be a good idea.
     
    #55
    Boris likes this.
  16. Boris

    Boris Member

    Joined:
    May 16, 2015
    Messages:
    63
    Likes Received:
    11
@m4r1k, updating now.
Which "Orange forum" do you mean? Never heard of it...

Upd: Can't update due to insufficient disk space... Probably a cache or something else ate my disk space. As I wrote before, Solaris is installed on a 20GB SSD; last time I checked there was around 4GB left, now only 161MB...
     
    #56
  17. m4r1k

    m4r1k Member

    Joined:
    Nov 4, 2016
    Messages:
    39
    Likes Received:
    4
Sorry, I meant the Oracle Solaris forum, the one from the link I sent before.

    It may be the Solaris web dashboard.

    Check the space using
    # zfs list rpool/VARSHARE/sstore
     
    #57
  18. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    1,966
    Likes Received:
    638
There is a refresh of the Solaris 11.4 beta:
Oracle Solaris 11.4 Open Beta Refresh 2

The Orange forum may be the beta forum?
Space: Solaris Beta | Oracle Community

20GB is ok for the installation, but you need space for the swap and dump devices (see Planning for Swap Space - Oracle Solaris Administration: Devices and File Systems), and on updates the old system remains available as a boot environment, so you also need room for the new system; 20GB is quite low.

You may check if there are former boot environments to delete, otherwise reinstall completely.
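A sketch of checking for and removing old boot environments (the BE name is an invented example):

# beadm list (lists all boot environments with their space usage)
# beadm destroy oldBE (removes an inactive boot environment and frees its space)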
     
    #58
    Boris likes this.
  19. Boris

    Boris Member

    Joined:
    May 16, 2015
    Messages:
    63
    Likes Received:
    11
NAME USED AVAIL REFER MOUNTPOINT
rpool 17.8G 161M 4.33M /rpool
    rpool/ROOT 5.28G 161M 31K none
    rpool/ROOT/be-name 270K 161M 2.60G /
    rpool/ROOT/be-name/var 106K 161M 178M /var
    rpool/ROOT/pre_napp-it-18.01free 46.8M 161M 3.35G /
    rpool/ROOT/pre_napp-it-18.01free/var 1K 161M 290M /var
    rpool/ROOT/solaris 5.24G 161M 4.47G /
    rpool/ROOT/solaris/var 622M 161M 493M /var
    rpool/VARSHARE 1.61G 161M 1.42G /var/share
    rpool/VARSHARE/kvol 27.7M 161M 31K /var/share/kvol
    rpool/VARSHARE/kvol/dump_summary 1.22M 161M 1.02M -
    rpool/VARSHARE/kvol/ereports 10.2M 161M 10.0M -
    rpool/VARSHARE/kvol/kernel_log 16.2M 161M 16.0M -
    rpool/VARSHARE/pkg 63K 161M 32K /var/share/pkg
    rpool/VARSHARE/pkg/repositories 31K 161M 31K /var/share/pkg/repositories
    rpool/VARSHARE/sstore 172M 161M 172M /var/share/sstore/repo
    rpool/VARSHARE/tmp 31K 161M 31K /var/tmp
    rpool/VARSHARE/zones 31K 161M 31K /system/zones
    rpool/dump 6.90G 163M 6.90G -
    rpool/export 63K 161M 32K /export
    rpool/export/home 31K 161M 31K /export/home
    rpool/swap 4.00G 162M 4.00G -

Dump ~7GB, swap ~4GB
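If the dump and swap zvols are the biggest consumers, they can be shrunk. A sketch of the documented Solaris procedure (sizes are examples; swap must be taken out of use before resizing, and dumpadm may refuse a dump device that is too small):

# swap -d /dev/zvol/dsk/rpool/swap (stop using the swap zvol)
# zfs set volsize=2G rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap (re-enable it)
# zfs set volsize=3G rpool/dump (resize the dump zvol)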
     
    #59
  20. Boris

    Boris Member

    Joined:
    May 16, 2015
    Messages:
    63
    Likes Received:
    11
I will get my Intel S4500 960GB on Friday; probably the easiest way is to just wait and do a fresh install after that.
     
    #60