solaris zfs partition question


dragonme

Active Member
So I had my napp-it ZFS boot VMFS datastore die on me the other day, and since I need to remedy this anyway... I decided to clean up this host and make some hardware changes.

part of that was changing the underlying hardware setup of a data pool from RDM passthroughs to the napp-it VM to passing through the actual SATA controller to napp-it...

before the change, napp-it saw the 3 RDM disk pool as 'whole disks'

after passing through the SATA controller, napp-it now sees one disk as attached to a partition, not the whole disk,
and throws a warning that that disk has a block structure larger than the pool -- not ideal

here is the difference in the partition structures of the 2 disks

one of the original 2 that were possibly built outside of napp-it -- these are WD 8TB drives

partition> print

Current partition table (original):
Total disk sectors available: 15628036717 + 16384 (reserved sectors)

Part      Tag    Flag     First Sector        Size        Last Sector
  0        usr    wm              2048      7.28TB        15628036095
  1 unassigned    wm                 0           0                  0
  2 unassigned    wm                 0           0                  0
  3 unassigned    wm                 0           0                  0
  4 unassigned    wm                 0           0                  0
  5 unassigned    wm                 0           0                  0
  6 unassigned    wm                 0           0                  0
  8   reserved    wm       15628036096      8.00MB        15628052479


this is the borked drive

partition> print

Current partition table (original):
Total disk sectors available: 15628036717 + 16384 (reserved sectors)

Part      Tag    Flag     First Sector        Size        Last Sector
  0     system    wm               256      0.88MB               2047
  1        usr    wm              2048      7.28TB        15628036750
  2 unassigned    wm                 0           0                  0
  3 unassigned    wm                 0           0                  0
  4 unassigned    wm                 0           0                  0
  5 unassigned    wm                 0           0                  0
  6 unassigned    wm                 0           0                  0
  8   reserved    wm       15628036751      8.00MB        15628053134


I assume the only way out of this is to destroy the pool, rebuild it, and restore from the backup array (3x5 disk raid-z1), which was backed up by filesystem and not recursively by pool? There's no way to use fdisk or format or any other tool to fix the partition layout, right?

questions

1 how can I ensure napp-it recreates the pool forcing the disks to be used as whole devices (dev) and not partitions ... i.e. how do I re-format the offending disk?

2 best way to restore from the backup pool, specifically:
a: can I do a send of the entire backup pool, recursive, one time, but keep all the snapshots/replication, so the new pool looks identical after the restore and the napp-it incremental backup jobs can resume?

so basically the actual commands, or the napp-it GUI sequence, to move the data from the backup pool to the new pool,
promote that restore to be the active pool with the old name,
and do so keeping all the existing replication snapshots that napp-it created during the filesystem backups (rough sketch of what I'm picturing below)
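
for reference, this is roughly the sequence I'm picturing on the command line -- 'tank', 'backup' and the c5t*d0 device names are just placeholders, raidz1 is just an example layout, and I'm not at all sure the flags are right, hence the question:

# destroy the broken data pool, then clear the stale label/partitions on the offending disk
zpool destroy tank
zpool labelclear -f c5t2d0

# recreate the pool under the old name on whole disks (bare device names, no slice suffix)
zpool create tank raidz1 c5t0d0 c5t1d0 c5t2d0

# one recursive send of the backup pool, keeping all snapshots, into the new pool
zfs snapshot -r backup@restore
zfs send -R backup@restore | zfs receive -F tank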


thanks!!!
 

gea

Well-Known Member
Solarish always uses and wants whole disks unless you partition a disk manually in advance; in that case Solaris uses the whole partition. This is different from other systems like FreeNAS, which always wants an additional swap partition. No action is needed in Solarish: just insert a disk and add it to a pool. If a disk has partitions (for whatever reason, e.g. from another OS), delete them prior to use unless you really want them (e.g. to use one disk for several Slog or L2ARC devices).
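
As a rough example (the device name is only a placeholder), removing old partitions/labels before reuse can look like this:

# clear any old ZFS label and partition table so the disk can be added as a whole disk
zpool labelclear -f c5t2d0

# a following zpool create/add with the bare device name lets ZFS write its own EFI label

If labelclear is not available on your release, relabeling the disk in format -e is another option.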

If you did not partition the disk manually, the problem must be related to your setup, e.g. your HBA setup.

Basically it's unclear whether you use pass-through (in this case ESXi has no control of the HBA and OmniOS uses it with its own drivers; this is the "golden method") or whether you use RDM. Prefer (or better, use only) LSI/Broadcom HBAs with 2008, 2308 or 3008 chipsets, either from Broadcom or the corresponding models from Dell, HP or IBM. You can use IR firmware; IT firmware is best.

RDM is a supported option with an (LSI) SAS HBA (not in pass-through mode, controlled by ESXi). This allows you to add single disks to a VM in the VM properties with Add Disk > Raw disk. RDM can also be used for SATA disks (SATA controller not in pass-through mode), but this is not supported and may or may not work. I would always avoid it; it is more likely to cause problems than success, especially with such old hardware.

To transfer a filesystem from one pool to another, use ZFS replication. Create a replication job with the desired source and target. Local transfers started manually are a napp-it Free option; network transfers require Pro or a free replication script. Replication uses its own snap versioning on the target server, independent of the autosnaps on the source system. Partitioning or ashift (pool optimized for 512B or 4K disks) can only be modified when you re-create a pool.
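
To check what a pool was built with (pool name is just an example), the ashift is visible in the on-disk config:

# ashift: 9 = pool laid out for 512B sectors, ashift: 12 = 4K sectors
zdb -C tank | grep ashift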