Solaris 11.1 - basic commands, zfs mandatory create parameters, LSI HBA hot-swap


Tim

Member
Nov 7, 2012
So, I'm new to Unix/Solaris.
But I have 16 years in Linux/Slackware, so I'm not new to the shell way of doing things.

This is mostly a test to try Solaris, learning by curiosity and reading the online docs.

Also I'm new with ZFS, just tried it on FreeBSD 9 to get going and it worked well for my needs.
But Solaris seems to be the better way, and so far I like it a lot.

My setup is Solaris 11.1 (x86) as a vm on ESXi 5.1 with passthrough of a LSISAS9211-8i HBA in IT firmware mode.

The goal is to run this as my NAS, available through NFSv4 for my desktops and mediacenters.
I'll keep it simple. The media is not critical, so no backup so far, and my zpools consist of single disks.
This will expand to zpools spanning multiple disks as the library grows.
I've got one disk with documents/pictures, and that's the only disk I can't afford to lose.
My plan so far has been regular backups to an external disk.
My new plan is to use a nightly crontab entry to run zfs incremental snapshots of this disk to another zfs disk.
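Something like the following is what I have in mind (dataset names here are just examples; tank1/doc would be the source and tank2/backup the target, and a real script would track the snapshot names instead of hard-coding them):

Code:
```shell
# First run: create a baseline snapshot and replicate it in full.
zfs snapshot tank1/doc@base
zfs send tank1/doc@base | zfs receive tank2/backup/doc

# Nightly: snapshot with a date stamp, then send only the delta
# since the previous snapshot with an incremental send (-i).
zfs snapshot tank1/doc@2013-01-03
zfs send -i tank1/doc@base tank1/doc@2013-01-03 | zfs receive tank2/backup/doc

# Crontab entry running a wrapper script at 03:00 every night:
# 0 3 * * * /root/bin/zfs-backup.sh
```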

First, some basic questions about the Solaris tools.
prtconf, prtdiag, sysdef, modinfo, format, zfs/zpool/zdb, lshal, prtvtoc, vmstat, fmadm, svcs/svcadm, netadm/dladm/ipadm.
Those are the ones I've used so far, any other important tools I should know about as soon as possible?
(top and dmesg and logs I know of)

How do you check/monitor the SMART info from the disks?
I tried smartmontools but it wasn't able to reach my disks. (the same result as in Linux)
This is caused by the fact that I'm running under ESXi I think, and using vt-d/VMdirectPath is not helping.
So how do you know when a disk is about to fail? (Is that the reason everyone is using raidz, since they can't predict hardware failure ahead of time?)
Is there a plugin for ESXi to monitor this stuff? Even if the HBA is in passthrough mode?
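For what it's worth, this is what I've been trying so far (the device path is an example from my system; whether SMART passes through the LSI HBA under ESXi at all is exactly what I'm unsure about):

Code:
```shell
# smartmontools attempt against a SAS-attached disk (assumes the
# smartmontools package is installed):
smartctl -a -d scsi /dev/rdsk/c0t5000C5005335A6C9d0

# Solaris's own per-device error counters, which include some
# predictive-failure information:
iostat -En

# And the fault manager's view of anything it has diagnosed:
fmadm faulty
```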

What are the best practices for setting up a new zpool with a zfs filesystem on a disk?
This is what I did in FreeBSD and what I did in Solaris, but it might not be the right way?
Any missing mandatory parameters? Or just the optional ones I need to enable NFS shares, etc.?
Code:
zpool create tank3 c0t5000C5005335A6C9d0
zfs create tank3/doc
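From what I've read so far there are no extra mandatory parameters, but a couple of optional properties seem commonly set at creation time. A sketch, assuming Solaris 11.1 syntax (I believe the share.nfs property replaced the older sharenfs in 11.1, but verify against the zfs(1M) man page):

Code:
```shell
zpool create tank3 c0t5000C5005335A6C9d0
zfs create -o compression=on tank3/doc   # lightweight lzjb compression
zfs set share.nfs=on tank3/doc           # publish the dataset over NFS
zfs get all tank3/doc                    # review the inherited defaults
```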
Next questions are about the LSI HBA under Solaris.

modinfo gives:
mpt (MPT HBA Driver)
mpt_sas (MPTSAS HBA Driver 00.00.00.26)

And dmesg gives: (yes, I've got 4 disks connected)
Code:
Dec 30 22:04:33 solaris scsi: [ID 583861 kern.info] sd2 at scsi_vhci0: unit-address g5000c5004a236358: f_sym
Dec 30 22:04:33 solaris genunix: [ID 936769 kern.info] sd2 is /scsi_vhci/disk@g5000c5004a236358
Dec 30 22:04:33 solaris genunix: [ID 408114 kern.info] /scsi_vhci/disk@g5000c5004a236358 (sd2) online
Dec 30 22:04:33 solaris genunix: [ID 483743 kern.info] /scsi_vhci/disk@g5000c5004a236358 (sd2) multipath status: degraded: path 1 mpt_sas1/disk@w5000c5004a236358,0 is online
Dec 30 22:04:33 solaris scsi: [ID 583861 kern.info] mpt_sas2 at mpt_sas0: scsi-iport 40
Dec 30 22:04:33 solaris genunix: [ID 936769 kern.info] mpt_sas2 is /pci@0,0/pci15ad,7a0@15/pci1000,3020@0/iport@40
Dec 30 22:04:33 solaris genunix: [ID 408114 kern.info] /pci@0,0/pci15ad,7a0@15/pci1000,3020@0/iport@40 (mpt_sas2) online

Dec 30 22:04:34 solaris scsi: [ID 583861 kern.info] sd3 at scsi_vhci0: unit-address g5000c5004a1a2720: f_sym
Dec 30 22:04:34 solaris genunix: [ID 936769 kern.info] sd3 is /scsi_vhci/disk@g5000c5004a1a2720
Dec 30 22:04:34 solaris genunix: [ID 408114 kern.info] /scsi_vhci/disk@g5000c5004a1a2720 (sd3) online
Dec 30 22:04:34 solaris genunix: [ID 483743 kern.info] /scsi_vhci/disk@g5000c5004a1a2720 (sd3) multipath status: degraded: path 2 mpt_sas2/disk@w5000c5004a1a2720,0 is online
Dec 30 22:04:34 solaris scsi: [ID 583861 kern.info] mpt_sas3 at mpt_sas0: scsi-iport 20
Dec 30 22:04:34 solaris genunix: [ID 936769 kern.info] mpt_sas3 is /pci@0,0/pci15ad,7a0@15/pci1000,3020@0/iport@20
Dec 30 22:04:34 solaris genunix: [ID 408114 kern.info] /pci@0,0/pci15ad,7a0@15/pci1000,3020@0/iport@20 (mpt_sas3) online

Dec 30 22:04:34 solaris scsi: [ID 583861 kern.info] sd4 at scsi_vhci0: unit-address g5000c5005336e4a1: f_sym
Dec 30 22:04:34 solaris genunix: [ID 936769 kern.info] sd4 is /scsi_vhci/disk@g5000c5005336e4a1
Dec 30 22:04:34 solaris genunix: [ID 408114 kern.info] /scsi_vhci/disk@g5000c5005336e4a1 (sd4) online
Dec 30 22:04:34 solaris genunix: [ID 483743 kern.info] /scsi_vhci/disk@g5000c5005336e4a1 (sd4) multipath status: degraded: path 3 mpt_sas3/disk@w5000c5005336e4a1,0 is online
Dec 30 22:04:34 solaris scsi: [ID 583861 kern.info] mpt_sas4 at mpt_sas0: scsi-iport 10
Dec 30 22:04:34 solaris genunix: [ID 936769 kern.info] mpt_sas4 is /pci@0,0/pci15ad,7a0@15/pci1000,3020@0/iport@10
Dec 30 22:04:34 solaris genunix: [ID 408114 kern.info] /pci@0,0/pci15ad,7a0@15/pci1000,3020@0/iport@10 (mpt_sas4) online

Dec 30 22:04:34 solaris scsi: [ID 583861 kern.info] sd5 at scsi_vhci0: unit-address g5000c5005335a6c9: f_sym
Dec 30 22:04:34 solaris genunix: [ID 936769 kern.info] sd5 is /scsi_vhci/disk@g5000c5005335a6c9
Dec 30 22:04:35 solaris genunix: [ID 408114 kern.info] /scsi_vhci/disk@g5000c5005335a6c9 (sd5) online
Dec 30 22:04:35 solaris genunix: [ID 483743 kern.info] /scsi_vhci/disk@g5000c5005335a6c9 (sd5) multipath status: degraded: path 4 mpt_sas4/disk@w5000c5005335a6c9,0 is online
Dec 30 22:04:35 solaris scsi: [ID 583861 kern.info] mpt_sas5 at mpt_sas0: scsi-iport v0
Dec 30 22:04:35 solaris genunix: [ID 936769 kern.info] mpt_sas5 is /pci@0,0/pci15ad,7a0@15/pci1000,3020@0/iport@v0
Dec 30 22:04:35 solaris genunix: [ID 408114 kern.info] /pci@0,0/pci15ad,7a0@15/pci1000,3020@0/iport@v0 (mpt_sas5) online
Can I address these with "sd5" instead of c0t5000C5005335A6C9d0 when using zpool create?
For now it seems that the sd labels are not used anywhere other than in the logs?
I just find it strange that in FreeBSD I got a shorter device name, while in Solaris I get this long "t" number (t5...6C9).
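As far as I can tell, zpool create takes the logical /dev/dsk names, not the sdN kernel instance names, but the two can be correlated with standard tools (a sketch from my reading of the man pages):

Code:
```shell
# iostat -E prints per-device error stats keyed by kernel instance
# name (sd2, sd3, ...); adding -n switches to the logical c0t...d0
# names, so running both lets you map one to the other:
iostat -E     # keyed by sdN instance
iostat -En    # same devices, keyed by c0t<WWN>d0
echo | format # enumerates disks by logical name with the /devices path
```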

And out of the blue I find this in the dmesg log.
It's not related to any command I've given manually in the shell.
Code:
Jan  2 01:45:44 solaris scsi: [ID 107833 kern.warning] WARNING: /pci@0,0/pci-ide@7,1/ide@1/sd@0,0 (sd1):
Jan  2 01:45:44 solaris         Error for Command: mode_sense              Error Level: Informational
Jan  2 01:45:44 solaris scsi: [ID 107833 kern.notice]   Requested Block: 0                         Error Block: 0
Jan  2 01:45:44 solaris scsi: [ID 107833 kern.notice]   Vendor: NECVMWar                           Serial Number:
Jan  2 01:45:44 solaris scsi: [ID 107833 kern.notice]   Sense Key: Illegal_Request
Jan  2 01:45:44 solaris scsi: [ID 107833 kern.notice]   ASC: 0x20 (invalid command operation code), ASCQ: 0x0, FRU: 0x0
Jan  2 01:45:44 solaris scsi: [ID 107833 kern.warning] WARNING: /pci@0,0/pci-ide@7,1/ide@1/sd@0,0 (sd1):
Jan  2 01:45:44 solaris         Error for Command: mode_sense(10)          Error Level: Informational
Jan  2 01:45:44 solaris scsi: [ID 107833 kern.notice]   Requested Block: 0                         Error Block: 0
Jan  2 01:45:44 solaris scsi: [ID 107833 kern.notice]   Vendor: NECVMWar                           Serial Number:
Jan  2 01:45:44 solaris scsi: [ID 107833 kern.notice]   Sense Key: Illegal_Request
Jan  2 01:45:44 solaris scsi: [ID 107833 kern.notice]   ASC: 0x24 (invalid field in cdb), ASCQ: 0x0, FRU: 0x0
(The same mode_sense / mode_sense(10) error pair repeats several more times with identical details.)
This is all for sd1; as you can see from the dmesg output, there are no drives connected to the first port on the backplane.
The reason that there's no disk in that bay is that I've moved it into the chassis as the ESXi boot/vm-storage disk.
And rearranged the disks in the other bays, now only using the LSI HBA with mini SFF-8087 x4 cables.

Should I rearrange the disks again so that I start with slot 1, or is this error message nothing to worry about?
I did not see this message in FreeBSD, so I'm not sure if this is a Solaris driver thing (for better or worse).

My last question is regarding hot-swappable drive bays in Solaris.
The chassis backplane is supposed to support it (Norco 2212) and the LSI HBA is supposed to support it.
Not sure if it's a configuration thing in ESXi or Solaris, but lshal gives me this:
Code:
udi = '/org/freedesktop/Hal/devices/scsi_vhci_0/disk5/sd5'
  block.device = '/dev/dsk/c0t5000C5005335A6C9d0'  (string)
  storage.drive_type = 'disk'  (string)
  storage.removable = false  (bool)
  storage.hotpluggable = false  (bool)
The two "false" values there worry me.
Are there any settings in Solaris to change this?
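From what I've read, SAS disk hot-plug on Solaris usually goes through cfgadm rather than HAL, so the lshal flags may not be the whole story. A sketch (the attachment-point name is an example; the real ones come from the cfgadm -al listing):

Code:
```shell
cfgadm -al                                    # list attachment points and states
cfgadm -c unconfigure c3::w5000c5005335a6c9,0 # before pulling a disk
cfgadm -c configure   c3::w5000c5005335a6c9,0 # after inserting one
```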

Thanks for reading this long post and for any feedback on any of the topics.
 

gea

Well-Known Member
Dec 31, 2010
So, I'm new to Unix/Solaris.
But with 16yrs in Linux/Slackware, so not new to the shell way of doing things.
The first thing you will find with Solaris is that it is a complete OS built by one enterprise (Sun) around
- mainly ZFS
- Comstar (block-based storage like iSCSI)
- the Solaris CIFS server (the most Windows-compatible SMB server outside M$)
- DTrace
- Crossbow (virtual networking)
- SMF service management

The next thing is the disk numbering:
not sd1..n (what the hell is sd48?) but a numbering like partition 3 on disk 2 on controller 3 (c3t2d0p3) on older SAS1 controllers.
Much better (if you use a large RAID) are the worldwide unique WWN numbers (like a NIC's MAC address), e.g. c0t5000C5005335A6C9d0 on LSI SAS2 controllers.

You will never want to be without them on large arrays.
For a pure NAS/SAN, Solaris is "best on Earth".

For the rest: look at the Oracle docs.
 

Tim

I love the Solaris experience so far; it just feels right, and the tools are great and well documented, with very good output rather than cryptic messages.
And the online doc is great.

On disk numbering, I agree that c3t2d0p3 is much better than "sd48", but a shorthand is nice to have, if possible, for a small system like mine.
I don't mind using the WWN numbers (you just taught me something new there); I was just not aware of them and found the "t" number a bit verbose.
Anyway, it's not that often I'll use them. Just install, configure once, and use/monitor the system. So I guess there's no reason to dislike them.

Good to know it's the "best on Earth".

And yes, I'm living in the Oracle docs at the moment; they're the best docs I've seen.
I guess the hot-swap thing is configurable somewhere, then.