Oracle Solaris 11.4

Discussion in 'Solaris, Nexenta, OpenIndiana, and napp-it' started by gea, Jan 3, 2018.

  1. Boris

    Boris Member

    Joined:
    May 16, 2015
    Messages:
    65
    Likes Received:
    11
    OK, I did a fresh install of Beta2 on a new disk. Everything is going smoothly so far: playback starts pretty quickly and rewinding does not cause freezes. I uploaded around 150 GB at a sustained 112 MByte/s.
    Will watch it and report back later.
     
    #61
  2. m4r1k

    m4r1k Member

    Joined:
    Nov 4, 2016
    Messages:
    44
    Likes Received:
    5
    Keep us posted. My feeling is that 11.4 GA is going to be, quality-wise, at least a step down compared to older releases due to the massive layoffs (the ZFSSA team has also been fully shut down), but I'm very happy to be proven wrong.
     
    #62
  3. Boris

    Boris Member

    Joined:
    May 16, 2015
    Messages:
    65
    Likes Received:
    11
    [Screenshot: upload_2018-7-23_17-14-58.png]
    It didn't last long...
    This is from my computer; it's the same from my wife's computer.
    Throughput drops, then a freeze, then throughput recovers, and it starts all over again...
     
    #63
  4. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    2,097
    Likes Received:
    674
    Is this reading from the NAS or writing to the NAS?

    For writing, it is the effect of the RAM-based write cache: all writes go to RAM for around 5 s at wire speed and are then flushed to disk, during which performance must drop unless the pool is faster than wire speed and the RAM cache can hold twice the amount of a single flush, i.e. around 10 s of writes. Especially when RAM is quite low or the pool is not that fast, this effect is noticeable and is normal for ZFS.

    Solaris behaves differently from Open-ZFS in this regard. Solaris uses a write cache that is flushed to the pool every 5 s, while Open-ZFS initiates a flush when a defined RAM cache (e.g. 4 GB) is full. This gives Solaris better performance during shorter writes, while Open-ZFS gives a more linear write performance (although slower on short writes).
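    You can watch this flush cycle yourself with zpool iostat at a one-second interval while a large copy is running; a minimal sketch, assuming the data pool is named "tank":

    # sample pool and per-vdev throughput every second during a large network copy;
    # with the Solaris behaviour you should see a burst of disk writes roughly
    # every 5 s, with near-idle seconds in between ("tank" is an example name)
    zpool iostat -v tank 1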
     
    #64
  5. Boris

    Boris Member

    Joined:
    May 16, 2015
    Messages:
    65
    Likes Received:
    11
    This is writing to the NAS.

    But reading from the NAS is also bad. As you can see, most of the time throughput is 50 MByte/s or below. This is a 49 GB file transfer.
    [Screenshot: upload_2018-7-23_19-32-58.png]


    According to "Disk I/O and activity last 10s", one disk has an abnormally high busy rating, and it's not the first time I've seen it. Or is that perhaps normal?
    c0t50014EE2614FDB0Fd r: 34, wr: 0, w: 0%, b: 7%
    c0t5000CCA25DDFAB60d r: 32, wr: 0, w: 0%, b: 7%
    c0t50014EE2061E4C7Ed r: 36, wr: 0, w: 0%, b: 8%
    c0t50014EE6598D7D63d r: 38, wr: 0, w: 0%, b: 8%
    c0t50014EE2B0DAE4DEd r: 1, wr: 0, w: 0%, b: 86%
    c0t50014EE2B0E01F06d r: 35, wr: 0, w: 0%, b: 8%
    c0t50014EE0585C2728d r: 38, wr: 0, w: 0%, b: 9%
    c0t50014EE0ADB0FC2Ed r: 38, wr: 0, w: 0%, b: 10%

    Could you please help me with how to test a single disk under Solaris?
     
    #65
  6. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    2,097
    Likes Received:
    674
    A RAID array is only as fast as its slowest disk. The iostat load should be quite similar across disks, so I would suspect a bad disk. If you have another disk, replace this one.

    To check a single disk you can run a SMART check on it, or create a single-disk pool from it and compare that with a single-disk pool built from another disk.
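    A rough sketch of the single-disk pool comparison (the disk name is only an example, taken from the busy disk above plus the usual d0 suffix; this wipes the disk, so only use a disk that is no longer part of the data pool):

    # create a throwaway pool on the suspect disk (destroys its contents)
    zpool create -f testpool c0t50014EE2B0DAE4DEd0

    # sequential write of ~10 GB, then export/import so the read test is not
    # served from the ARC but really hits the disk
    dd if=/dev/zero of=/testpool/bigfile bs=1024k count=10000
    zpool export testpool
    zpool import testpool
    dd if=/testpool/bigfile of=/dev/null bs=1024k

    # clean up, then repeat the same test on a known-good disk and compare
    zpool destroy testpool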

    I would usually remove that disk and do a low-level disk check, e.g. via WD Data Lifeguard: Software and Firmware Downloads | WD Support
     
    #66
    Boris likes this.
  7. Boris

    Boris Member

    Joined:
    May 16, 2015
    Messages:
    65
    Likes Received:
    11
    Thank you for the reply, @gea. Right now I have started a long SMART test, which should take 512 minutes according to smartctl.
    I have no replacement disk right now, so I should first be sure it is a disk failure.
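    For reference, starting and checking the long test with smartctl looks roughly like this (the device path is only an example based on the busy disk above; depending on the controller an extra -d option may be needed):

    # kick off the extended (long) self-test on the suspect drive
    smartctl -t long /dev/rdsk/c0t50014EE2B0DAE4DEd0

    # after the estimated runtime (~512 minutes here), read back the results
    smartctl -l selftest /dev/rdsk/c0t50014EE2B0DAE4DEd0
    smartctl -A /dev/rdsk/c0t50014EE2B0DAE4DEd0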
     
    #67
  8. EffrafaxOfWug

    EffrafaxOfWug Radioactive Member

    Joined:
    Feb 12, 2015
    Messages:
    940
    Likes Received:
    320
    Don't rely on SMART tests for this - I've seen plenty of discs that gave craptacular performance for reasons unknown, while SMART would happily say everything was OK. With a busy rating like that, you should replace the disc first and test it later outside of the array.
     
    #68
  9. Boris

    Boris Member

    Joined:
    May 16, 2015
    Messages:
    65
    Likes Received:
    11
    And my SMART test also said "everything OK" this morning...
     
    #69
  10. Boris

    Boris Member

    Joined:
    May 16, 2015
    Messages:
    65
    Likes Received:
    11
    Looks like at least part of my problem was a faulty HDD; I replaced it today:
    scan: resilver in progress since Tue Aug 7 17:11:22 2018
    11.4T scanned
    569G resilvered at 769M/s, 39.10% done, 2h37m to go

    All disks are showing a high busy rate during scrubbing and resilvering. In the past, scrubbing went at 25 M/s and took a few days to complete.
    c0t50014EE2614FDB0Fd r: 1547, wr: 14, w: 0%, b: 61%
    c0t5000CCA25DDFAB60d r: 3120, wr: 12, w: 0%, b: 27%
    c0t50014EE2061E4C7Ed r: 702, wr: 14, w: 0%, b: 79%
    c0t50014EE6598D7D63d r: 1599, wr: 12, w: 0%, b: 60%
    c0t50014EE2B0E01F06d r: 1288, wr: 13, w: 0%, b: 72%
    c0t50014EE0585C2728d r: 1534, wr: 12, w: 0%, b: 66%
    c0t50014EE0ADB0FC2Ed r: 481, wr: 12, w: 0%, b: 84%
    c0t5000CCA269D4CEB3d r: 0, wr: 1862, w: 0%, b: 88%

    I will try some casual tests with huge file transfers over the network afterwards.
     
    #70
  11. Boris

    Boris Member

    Joined:
    May 16, 2015
    Messages:
    65
    Likes Received:
    11
    85 GB was sent to the NAS just fine. I will keep my eye on it.

    [Screenshot: upload_2018-8-7_19-41-51.png]
     
    #71
  12. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    2,097
    Likes Received:
    674
  13. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    2,097
    Likes Received:
    674
    AiO with Solaris 11.4 on ESXi 6.7
    vmware-tools / open-vm-tools on 11.4b |Oracle Community

    My findings / "just a hack":

    VMware vmtools for Solaris from ESXi 6.7,
    executed on a text-only setup of S11.4 final on ESXi 6.7.

    The installer vmware-install.pl installs on 11.4 but fails with the message
    Package "SUNWuiu8" not found when executing vmtool/bin/vmware-config-tools.pl

    This can be skipped by editing vmtool/bin/vmware-config-tools.pl at line 13026
    and commenting out the check for SUNWuiu8.

    When you then run vmtool/bin/vmware-config-tools.pl, it hangs due to a missing /usr/bin/isalist.
    I copied isalist over from a Solaris 11.3 system, made it executable, and then vmware-config-tools.pl worked.
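    A rough sketch of that workaround (the source path for isalist is just a placeholder for wherever you stage the 11.3 binary; the line number may differ in other builds):

    # 1. comment out the SUNWuiu8 package check (around line 13026 in this build)
    vi vmtool/bin/vmware-config-tools.pl

    # 2. copy isalist from a Solaris 11.3 system and make it executable
    cp /path/to/solaris-11.3/usr/bin/isalist /usr/bin/isalist
    chmod 755 /usr/bin/isalist

    # 3. re-run the configuration script
    vmtool/bin/vmware-config-tools.pl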

    After a reboot I got the message that vmtools is installed, along with a console warning:
    Warning: Signature verification of module /kernel/drv/amd64/vmmemctl failed

    The same happens for verification of the vmxnet3s driver;
    vmxnet3s also reports the deprecated "misc/mac".

    Not sure if this is critical.

    vmxnet3s and guest restart from ESXi both work.

    Gea
     
    #73
  14. Boris

    Boris Member

    Joined:
    May 16, 2015
    Messages:
    65
    Likes Received:
    11
    @gea, could you please help me?

    My current rpool contains a single 960 GB disk,
    but only 47 GB is used. I want to replace the single 960 GB disk with a 240 GB mirror.

    Will it work if I just:
    1. add a 240 GB disk to the rpool as a mirror
    2. remove the 960 GB disk from the rpool mirror
    3. add another 240 GB disk to the rpool for the mirror

    I mean, given that only 47 GB of the disk is used, will Solaris let me do this, or do I have to resize the disk first or something like that?

    Thank you in advance.
     
    #74
  15. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    2,097
    Likes Received:
    674
    You cannot attach a smaller mirror disk to a larger disk, and a vdev shrink is also not possible.
    If this is a standard setup, the fastest way is a clean reinstall and reconfigure, then mirror the new rpool.
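    Mirroring the new rpool afterwards is a single attach; a minimal sketch with example disk names (check the real names with format or zpool status rpool):

    # attach the second 240 GB disk to the disk the OS was installed on
    # (c2t0d0/c2t1d0 are example names only)
    zpool attach rpool c2t0d0 c2t1d0

    # watch the resilver finish; depending on the release you may additionally
    # need bootadm install-bootloader so the second disk is bootable
    zpool status rpool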

    In the current napp-it 18.09dev I have added a new function/menu, System > Recovery,
    to make recovery easy. The idea behind it is:

    Full Appliance Disaster/System Recovery

    To recover a fully configured appliance from a BE (boot environment):

    1. backup the current BE: create a replication job (requires 18.09dev) with the current BE as source

    2. reinstall the OS and napp-it

    3. restore the BE: create a replication job with the BE as source and rpool/ROOT as target (requires 18.09dev)

    4. activate the restored BE and reboot

    This BE backup/restore can also be done manually via replication and zfs send.
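    A minimal sketch of the manual way, assuming the default BE name "solaris" and a second pool called "backuppool" for the intermediate copy:

    # on the old installation: snapshot the current BE and send it away
    zfs snapshot -r rpool/ROOT/solaris@backup
    zfs send -R rpool/ROOT/solaris@backup | zfs receive -u backuppool/solaris_backup

    # after the clean reinstall: send it back below rpool/ROOT under a new name
    zfs send -R backuppool/solaris_backup@backup | zfs receive -u rpool/ROOT/solaris_restored

    # activate the restored BE and reboot (mountpoint/canmount properties may
    # need adjusting before beadm accepts it)
    beadm activate solaris_restored
    init 6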
     
    #75
    Last edited: Aug 30, 2018
    Boris likes this.
  16. Boris

    Boris Member

    Joined:
    May 16, 2015
    Messages:
    65
    Likes Received:
    11
    Got it, thank you @gea.
     
    #76
  17. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    2,097
    Likes Received:
    674
    #77
  18. nezach

    nezach Active Member

    Joined:
    Oct 14, 2012
    Messages:
    158
    Likes Received:
    70
    #78
  19. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    2,097
    Likes Received:
    674
    I have done a clean reinstall.
    An update may be possible, but it could be that you indeed need the newest 11.3 SRU first, and even then I would prefer a clean reinstall.
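    If you want to try the update route anyway, the attempt would look roughly like this (it may still refuse without the newest 11.3 SRU and a suitable repository; the BE name is just an example):

    # check which publisher/repository the image points at
    pkg publisher

    # try the upgrade into a fresh boot environment so you can roll back
    pkg update --accept --be-name solaris-11_4 entire@latest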
     
    #79
  20. nezach

    nezach Active Member

    Joined:
    Oct 14, 2012
    Messages:
    158
    Likes Received:
    70
    So I tried "pkg update" and it did not work. This is quite a bummer and makes Solaris even less attractive than before.
     
    #80
