Solaris 11.1 - zpool import failed with unavail, insufficient replicas, cannot open


Tim

Member
Nov 7, 2012
Hi

Need some help to figure out what my options are.

I had FreeBSD 9 installed under VMware ESXi 5.1, with a zpool "tank1" created and filled with data (only one HDD).
I verified that I could export and import the zpool without problems.

Now I'm running Solaris 11.1 (a fresh install on VMware ESXi 5.1), and running "zpool import" gives me errors.
This is the output.

Code:
root@solaris:~# zpool import
  pool: tank1
    id: 10811497011987668786
 state: UNAVAIL
status: One or more devices are unavailable.
action: The pool cannot be imported due to unavailable devices or data.
config:

        tank1                      UNAVAIL  insufficient replicas
          c0t5000C5004A236358d0p0  UNAVAIL  cannot open

device details:

        c0t5000C5004A236358d0p0  UNAVAIL          cannot open
        status: ZFS detected errors on this device.
                The device was missing.



root@solaris:~# zpool import tank1
cannot import 'tank1': invalid vdev configuration
The zpool "tank1" is one SATA HDD on my LSISAS9211-8i which is in passthrough mode.

Just to verify that the system can see my HDD's.

Code:
root@solaris:~# format
Searching for disks...done

c0t5000C5004A1A2720d0: configured with capacity of 2794.52GB
c0t5000C5004A236358d0: configured with capacity of 2794.52GB


AVAILABLE DISK SELECTIONS:
       0. c0t5000C5004A1A2720d0 <ATA-ST3000DM001-9YN1-CC4C-2.73TB>
          /scsi_vhci/disk@g5000c5004a1a2720
       1. c0t5000C5004A236358d0 <ATA-ST3000DM001-9YN1-CC4C-2.73TB>
          /scsi_vhci/disk@g5000c5004a236358
       2. c0t5000C5005335A6C9d0 <ATA-ST2000DM001-9YN1-CC4B cyl 59614 alt 2 hd 256 sec 256>
          /scsi_vhci/disk@g5000c5005335a6c9
       3. c0t5000C5005336E4A1d0 <ATA-ST2000DM001-9YN1-CC4B cyl 59614 alt 2 hd 256 sec 256>
          /scsi_vhci/disk@g5000c5005336e4a1
       4. c8t0d0 <VMware-Virtual disk-1.0-20.00GB>
          /pci@0,0/pci15ad,1976@10/sd@0,0
As you can see, the zpool "tank1" is on disk 1 (disk 0 holds another zpool, disks 2 and 3 are not in use yet, and disk 4 is the 20 GB virtual disk that Solaris runs from).


So, what are my options to get this (and my other zpools with the same error) back online?
 

gea

Well-Known Member
Dec 31, 2010
Problem:
If the disks were partitioned in FreeBSD with GPT (a partition layout FreeBSD recognizes but Solaris doesn't),
you cannot import the pool into Solaris.

Only pools whose disks were set up with GEOM can be exported from FreeBSD/FreeNAS/ZFSGuru and reimported into Solaris.
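
A quick way to check this on the BSD side is gpart (da1 is only an assumed device name for the pool disk):

Code:
# run on FreeBSD; da1 is the assumed device name of the pool disk
gpart show da1
If it prints a GPT partition scheme, the disk is partitioned; if ZFS was given the raw disk, gpart should just complain that there is no such geom.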
 

Tim

Member
Nov 7, 2012
As far as I know, the disks were not formatted by FreeBSD at all.
After booting FreeBSD 9, the only commands given to the disks were:
Code:
zpool create tank1 /dev/da1
zfs create tank1/media
Then I shared it via NFSv4 from FreeBSD and started to copy media to/from it.

Only whole disks are used, not partitions.

Also:
New Oracle Solaris installations are no longer limited to the first 2 TiB of the disk on x86 platforms. Oracle Solaris now uses EFI (GPT) partitioning for new installations to enable all of the disk space on the boot device to be used.
Source: EFI (GPT) support in Solaris 11.1

I also think the problem might be that the disks are not in the default path, according to this info from the docs:
By default, the zpool import command only searches devices within the /dev/dsk directory.
Source: zpool import from alternate directories

The disks are in /devices/scsi_vhci/ (with symlinks in /dev/dsk/ made by the system).

Since the zpool import command fails with "invalid vdev configuration", I guess this path is part of the problem (if I'm right about the GPT part),
and that the solution is to assign an EFI disk label.
To use a whole disk, the disk must be named by using the /dev/dsk/cNtNdN naming convention.
Some third-party drivers use a different naming convention or place disks in a location other than the /dev/dsk directory.
To use these disks, you must manually label the disk and provide a slice to ZFS.
Source: Valid naming conventions
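
If manual labeling really were needed, I suppose the "provide a slice" form from the doc would look something like this (pool name and the s0 slice here are just hypothetical, using one of my unused disks):

Code:
# hypothetical: point ZFS at slice 0 of a manually labeled disk instead of the whole disk
zpool create testpool c0t5000C5005335A6C9d0s0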

But as far as I understand, the disks already have EFI labels; that's the c0t5....d0 naming in the format output in my first post.
And I was able to create a new zpool with a ZFS filesystem on it (on one of the unused disks) by using "zpool create tank3 c0t5000C5005335A6C9d0" and "zfs create tank3/doc".

BTW: Why do the disks on the LSISAS9211-8i (which is in passthrough mode) have these long "t" numbers? Is that the right way, or is something wrong?

So how can I import the "tank1" zpool by addressing the EFI label instead of the zpool name?
Trying with the -d option fails too.
Code:
zpool import -d /devices/scsi_vhci/disk@g5000c5004a236358 tank1
cannot import 'tank1': no such pool available
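
Reading the docs again, -d seems to expect a directory to search rather than a single device node, so I suppose the documented form is more like the following (though /dev/dsk is the default anyway, so I doubt it changes anything here):

Code:
# -d names a directory to scan for devices, not a single device path
zpool import -d /dev/dsk tank1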
Hope this makes sense.
I'm not used to Solaris yet and still learning, but I find it much better than FreeBSD for ZFS usage, and it's well documented, so I'm reading a lot these days while waiting for the new hardware.

BTW: Yes, I did zpool export tank1 (and the other zpools) in FreeBSD 9 before I tried to import them in Solaris 11.1.
That export should take care of the vdev paths changing when the disks move from FreeBSD's naming conventions to Solaris's.

Also, using zdb I can address c0t5...6C9d0 under /dev/dsk/ and list its info, but when I try the same for the tank1 disk (/dev/dsk/c0t5...358d0) I get:
"cannot open '/dev/rdsk/c0t5000C5004A236358d0': I/O error" (the same goes for the other FreeBSD zpool, on disk c0t5...720d0).

Another detail: zpool import looks for the pool on c0t5...358d0p0 - why the p0 at the end? The pool was created on the whole disk, not on a slice/partition.
I guess p0 means the whole disk? Either way, I can't find a way to force it to look somewhere else.

Hm, might there be something I/O-related with that disk as well? Or is that a false error?
The disks work fine in FreeBSD 9.

Just some final thoughts.
If this is not a real I/O error (I don't know why Solaris thinks it is) and there's no way of importing these zpools from FreeBSD 9 into Solaris 11.1,
one solution seems to be to EFI label a disk in Solaris, create a zpool on it in FreeBSD, copy over the data, export it, and import it in Solaris.
I don't have enough hardware to do this without multiple rounds of copying, and the same goes for a zfs send/receive, so that's really not an option.
And I don't know why, but "prtvtoc /dev/dsk/c0t5000C5004A236358d0" (the "tank1" disk) returns only "Invalid VTOC" (the same goes for the other FreeBSD zpool disk).

So, to sum it up: is this a "Solaris can't import a FreeBSD zpool at all" problem? Is it documented somewhere that it's not possible?
Or is there another workaround, given all the info I've gathered above?
 

gea

Well-Known Member
Dec 31, 2010
As I said,
only if you use GEOM on BSD can you import into Solaris.
You can also create the pool in Solaris to make it universal - BSD, Linux and Solaris can all import it.
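
If you create the pool on Solaris for that purpose, keep it at the last pool version the open-source implementations understand, for example (the disk name is just a placeholder for an empty disk):

Code:
# pool version 28 is the last one the BSD and Linux ZFS ports can import
zpool create -o version=28 newtank c0t5000C5005335A6C9d0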
 

Tim

Member
Nov 7, 2012
It's just that I'm not able to find any documentation on this, and you don't provide any either.
It's not that I don't trust you; I just like to have things documented, with sources for why things are the way they are.
And reading the Solaris docs, all I can find is evidence of Solaris supporting this (though with some trouble with disks over 2 TB, and these two disks are 3 TB, hence the EFI labeling).

Well, I'll revert to FreeBSD to extract my data, get a couple of new 3 TB disks to migrate the data onto, and then try the Solaris route again.

Thanks for the help.
You saved my data; I might have tried something that would have destroyed it if you hadn't told me that it just ain't possible.