Oracle Solaris 11.4


Boris

Member
May 16, 2015
Ok, I did a fresh install of Beta2 on a new disk. Everything is going smoothly so far: playback starts quickly and rewinding no longer causes freezes. I uploaded around 150 GB at a sustained 112 MByte/s.
Will watch it and report later.

m4r1k

Member
Nov 4, 2016
Boris said:
Ok, I did a fresh install of Beta2 on a new disk. Everything is going smoothly so far: playback starts quickly and rewinding no longer causes freezes. I uploaded around 150 GB at a sustained 112 MByte/s.
Will watch it and report later.
Keep us posted. My feeling is that 11.4 GA is going to be, quality-wise, at least a step down from older releases due to the massive layoffs (the ZFSSA team has also been fully shut down), but I'd be very happy to be wrong.

Boris

Member
May 16, 2015
[Attached screenshot: upload_2018-7-23_17-14-58.png]
It didn't last long...
This is from my computer; it's the same from my wife's computer.
Throughput drops, freezes, recovers, and then it starts over...

gea

Well-Known Member
Dec 31, 2010
Is this reading from the NAS or writing to the NAS?

For writing, it's the effect of the RAM-based write cache: all writes go to RAM at wire speed for around 5 s and are then flushed to disk, during which performance must drop unless the pool is faster than wire speed and the RAM cache holds twice the amount of a single flush, i.e. around 10 s of writes. Especially when RAM is quite low or the pool is not that fast, this effect is noticeable; it is normal for ZFS.

Solaris behaves differently from Open-ZFS in this regard. Solaris uses a write cache with a flush to the pool every 5 s, while Open-ZFS initiates a flush when a defined RAM cache (e.g. 4 GB) is full. This gives Solaris better performance on short writes, while Open-ZFS gives a more linear write performance (although slower on short writes).
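As a rough illustration of where these two behaviours live (tunable names vary between releases, and Oracle does not publish all of its tunables, so treat this as a sketch, not a reference):

# Solaris/illumos style: the write cache is flushed on a timer
# (txg timeout, roughly 5 s by default); settable in /etc/system:
set zfs:zfs_txg_timeout = 5

# OpenZFS on Linux: a flush is also triggered when the dirty-data
# cache fills up; the limit is exposed as a module parameter:
cat /sys/module/zfs/parameters/zfs_dirty_data_max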

Boris

Member
May 16, 2015
gea said:
Is this reading from the NAS or writing to the NAS?
This is writing to the NAS.

But reading from the NAS is also bad. As you can see, most of the time throughput is 50 MByte/s or below. This is a 49 GB file transfer.
[Attached screenshot: upload_2018-7-23_19-32-58.png]


According to "Disk I/O and activity last 10s", one disk has an abnormally high busy rating. And it's not the first time I've seen it. Or may it be normal?
c0t50014EE2614FDB0Fd r: 34, wr: 0, w: 0%, b: 7%
c0t5000CCA25DDFAB60d r: 32, wr: 0, w: 0%, b: 7%
c0t50014EE2061E4C7Ed r: 36, wr: 0, w: 0%, b: 8%
c0t50014EE6598D7D63d r: 38, wr: 0, w: 0%, b: 8%
c0t50014EE2B0DAE4DEd r: 1, wr: 0, w: 0%, b: 86%
c0t50014EE2B0E01F06d r: 35, wr: 0, w: 0%, b: 8%
c0t50014EE0585C2728d r: 38, wr: 0, w: 0%, b: 9%
c0t50014EE0ADB0FC2Ed r: 38, wr: 0, w: 0%, b: 10%
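For reference, a similar per-disk busy view can be produced at a shell with the stock Solaris iostat (the second report covers the trailing 10 s average):

iostat -xn 10 2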

Could you please tell me how to test a single disk under Solaris?

gea

Well-Known Member
Dec 31, 2010
A RAID array is as slow as its slowest disk. The iostat load should be quite similar across disks, so I would suspect a bad disk. If you have another disk, replace this one.

To check a single disk, you can run a SMART check on it, or create a single-disk pool from that disk and compare it against a single-disk pool built from another disk.
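Concretely, those two checks might look like the following (the device path is taken from the listing above and may need a d0 suffix or an extra -d option depending on the controller; the pool test destroys all data on the test disk):

# SMART long self-test via smartmontools
smartctl -t long /dev/rdsk/c0t50014EE2B0DAE4DEd0
smartctl -a /dev/rdsk/c0t50014EE2B0DAE4DEd0   # read the result later

# throwaway single-disk pool for a raw-throughput comparison
zpool create testpool c0t50014EE2B0DAE4DEd0
zfs set compression=off testpool
dd if=/dev/zero of=/testpool/t.bin bs=1M count=8192
zpool destroy testpool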

I would usually remove that disk and do a low-level disk check, e.g. via WD Data Lifeguard: Software and Firmware Downloads | WD Support

Boris

Member
May 16, 2015
Thank you for the reply, @gea. Right now I've kicked off a long SMART test, which should take 512 minutes according to smartctl.
I have no replacement right now, so I should first be sure it's a disk failure.

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
Boris said:
I should first be sure it's a disk failure.
Don't rely on SMART tests for this. I've seen plenty of discs that gave craptacular performance for reasons unknown, while SMART would happily say everything was OK. With a busy rating like that, you should replace first and test the disc later, outside of the array.

Boris

Member
May 16, 2015
EffrafaxOfWug said:
Don't rely on SMART tests for this. I've seen plenty of discs that gave craptacular performance for reasons unknown, while SMART would happily say everything was OK. With a busy rating like that, you should replace first and test the disc later, outside of the array.
And my SMART test also said "everything OK" this morning...

Boris

Member
May 16, 2015
It looks like at least part of my problem was a faulty HDD; today I replaced it:
scan: resilver in progress since Tue Aug 7 17:11:22 2018
11.4T scanned
569G resilvered at 769M/s, 39.10% done, 2h37m to go
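For reference, the replacement that kicks off such a resilver is typically a single command; the pool name here is a placeholder and the device IDs are taken from the listings in this thread:

zpool replace tank c0t50014EE2B0DAE4DEd0 c0t5000CCA269D4CEB3d0
zpool status tank    # reports the resilver progress shown above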

All disks show a high busy rate during scrubbing and resilvering. In the past, scrubbing went at 25 MByte/s and took a few days to complete.
c0t50014EE2614FDB0Fd r: 1547, wr: 14, w: 0%, b: 61%
c0t5000CCA25DDFAB60d r: 3120, wr: 12, w: 0%, b: 27%
c0t50014EE2061E4C7Ed r: 702, wr: 14, w: 0%, b: 79%
c0t50014EE6598D7D63d r: 1599, wr: 12, w: 0%, b: 60%
c0t50014EE2B0E01F06d r: 1288, wr: 13, w: 0%, b: 72%
c0t50014EE0585C2728d r: 1534, wr: 12, w: 0%, b: 66%
c0t50014EE0ADB0FC2Ed r: 481, wr: 12, w: 0%, b: 84%
c0t5000CCA269D4CEB3d r: 0, wr: 1862, w: 0%, b: 88%

I will try some casual tests with huge file transfers over the network afterwards.

gea

Well-Known Member
Dec 31, 2010
AiO with Solaris 11.4 on ESXi 6.7
vmware-tools / open-vm-tools on 11.4b |Oracle Community

My findings / "just a hack":

VMware vmtools for Solaris from ESXi 6.7, executed on a text-only setup of S11.4 final on ESXi 6.7.

The installer vmware-install.pl installs on 11.4 but fails with the message
Package "SUNWuiu8" not found when executing vmtool/bin/vmware-config-tools.pl

This can be skipped by editing vmtool/bin/vmware-config-tools.pl at line 13026
and commenting out the check for SUNWuiu8.

When you then run vmtool/bin/vmware-config-tools.pl, it hangs due to a missing /usr/bin/isalist.
I copied isalist over from a Solaris 11.3 system, made it executable, and then vmware-config-tools.pl works.

After a reboot I got the message that vmtools is installed, together with a console message:
Warning: Signature verification of module /kernel/drv/amd64/vmmemctl failed

The same happens with verification of the vmxnet3s driver, and
vmxnet3s reports the deprecated "misc/mac" interface.

Not sure if this is critical.

vmxnet3s and guest restart from ESXi work.
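Condensed as a shell sketch (the isalist source path is a placeholder; the line number is the one quoted above and may differ between ESXi builds):

# run the bundled vmtools installer
./vmware-install.pl

# comment out the SUNWuiu8 package check, around line 13026
vi vmtool/bin/vmware-config-tools.pl

# provide the missing isalist, copied from a Solaris 11.3 system
cp /mnt/sol113/usr/bin/isalist /usr/bin/isalist
chmod +x /usr/bin/isalist

# re-run the configuration script
vmtool/bin/vmware-config-tools.pl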

Gea

Boris

Member
May 16, 2015
@gea, could you please help me?

My current rpool consists of a single 960 GB disk, but only 47 GB is used. I want to replace the single 960 GB disk with a 240 GB mirror.

Will it work if I just:
1. attach a 240 GB disk to rpool as a mirror
2. remove the 960 GB disk from the rpool mirror
3. attach another 240 GB disk to rpool for the mirror

I mean, since only 47 GB of the disk is used, will Solaris let me do it, or should I first resize the disk or something like that?

Thank you in advance.

gea

Well-Known Member
Dec 31, 2010
You cannot attach a smaller disk as a mirror of a larger one, and a vdev shrink is not possible either.
If this is a standard setup, the fastest way is a clean reinstall and reconfiguration, then mirroring the new rpool (see the sketch below).
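Mirroring the new rpool afterwards is a single attach; device names here are placeholders. Recent Solaris releases write the boot blocks to the new disk automatically on attach; on older releases you run installboot yourself:

zpool attach rpool c1t0d0s0 c1t1d0s0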

In the current napp-it 18.09dev I have added a new function/menu, System > Recovery,
to make recovery easy. The idea behind it:

Full Appliance Disaster/System Recovery

To recover a fully configured appliance from a BE (boot environment):

1. Back up the current BE: create a replication job (requires 18.09dev) with the current BE as source.

2. Reinstall the OS and napp-it.

3. Restore the BE: create a replication job with the BE as source and rpool/ROOT as target (requires 18.09dev).

4. Activate the restored BE and reboot.

This BE backup/restore can also be done manually via replication and zfs send, as sketched below.
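A minimal sketch of the manual variant (BE, pool, and snapshot names are examples; depending on the release you may need to adjust mountpoints before beadm will accept the restored BE):

# back up the current BE, including child datasets
zfs snapshot -r rpool/ROOT/solaris@backup
zfs send -R rpool/ROOT/solaris@backup | zfs receive backuppool/be

# after the reinstall, restore it under rpool/ROOT
zfs send -R backuppool/be@backup | zfs receive rpool/ROOT/restored

# activate the restored BE and reboot
beadm activate restored
init 6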

gea

Well-Known Member
Dec 31, 2010
I have done a clean reinstall.
An update may be possible, but it may indeed require the newest 11.3 SRU, and even then I would prefer a clean reinstall.

nezach

Active Member
Oct 14, 2012
So I tried "pkg update" and it did not work. This is quite a bummer and makes Solaris even less attractive than before.