How do I migrate an OpenIndiana Hipster installation from a 512-byte-sector, MBR, legacy-boot 32 GB SSD to a 4K-sector, GPT, UEFI-boot 128 GB SSD?


jdrch

New Member
Foreword

I'm still very much learning OpenIndiana and don't have much deep knowledge of how its disk partitioning/setup works. So if you're wondering whether I know something, it's best to assume I don't.

Background

I have OpenIndiana Hipster with napp-it Free installed on this PC (detailed specs at link.)

If you're wondering how I wound up with a 512-byte-sector, MBR, legacy-boot installation on an SSD: I had a really hard time installing OI due to the lack of documentation, and these steps were the only ones I could figure out that actually worked. I wanted a UEFI 4K installation, but couldn't find any way to achieve it (sector size was never presented as an option, anyway).

Problem

The original 32 GB SSD on which I installed OpenIndiana has run out of space.

Goal

I'd like to move the installation to a larger 128 GB SSD using 4K sectors, GPT partitioning (I hope I'm using the correct term in this context), and UEFI boot.

What I've done so far

My current OS zpool is rpool; rpool2 is a zpool I created on the destination 128 GB SSD:

Bash:
# zfs list
NAME                                               USED  AVAIL  REFER  MOUNTPOINT
rpool                                             27.1G  1.49G  33.5K  /rpool
rpool/ROOT                                        17.4G  1.49G    24K  legacy
rpool/ROOT/openindiana                            15.2M  1.49G  6.09G  /
rpool/ROOT/openindiana-2019:12:01                  970M  1.49G  7.32G  /
rpool/ROOT/openindiana-2019:12:02                 48.0M  1.49G  7.32G  /
rpool/ROOT/openindiana-2019:12:10                  813K  1.49G  8.34G  /
rpool/ROOT/openindiana-2020:01:14                 15.7M  1.49G  7.88G  /
rpool/ROOT/openindiana-2020:02:12                  858K  1.49G  7.82G  /
rpool/ROOT/openindiana-2020:02:27                  650K  1.49G  7.92G  /
rpool/ROOT/openindiana-2020:03:10                  656K  1.49G  8.23G  /
rpool/ROOT/openindiana-2020:03:26                 16.3G  1.49G  8.85G  /
rpool/ROOT/pre_activate_18.12_1575387063           239K  1.49G  7.31G  /
rpool/ROOT/pre_activate_19.10.homeuse_1575387229   239K  1.49G  7.31G  /
rpool/ROOT/pre_download_19.10.homeuse_1575354576   223K  1.49G  7.29G  /
rpool/ROOT/pre_download_19.12.homeuse_1581739687     1K  1.49G  7.68G  /
rpool/ROOT/pre_napp-it-18.12                       273K  1.49G  6.63G  /
rpool/dump                                        3.95G  1.49G  3.95G  -
rpool/export                                       307M  1.49G    24K  /export
rpool/export/home                                  307M  1.49G    24K  /export/home
rpool/export/home/judah                            307M  1.49G   290M  /export/home/judah
rpool/swap                                        5.42G  5.69G  1.22G  -
rpool1                                            46.0G   403G  32.0K  /rpool1
rpool1/LANBackup                                  30.6K   403G  30.6K  /rpool1/LANBackup
rpool1/LocalBackup                                45.9G   403G  45.9G  /rpool1/LocalBackup
rpool2                                            3.78M   115G   124K  /rpool2
I created a recursive snapshot, rpool@upgrade, of rpool using:

Bash:
# zfs snapshot -r rpool@upgrade
Then I sent it to rpool2 using:

Bash:
# zfs send -R rpool@upgrade | zfs recv -F rpool2
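To sanity-check what landed on the target before going any further, I believe listing everything on rpool2 should show the received datasets and their snapshots:

Bash:
# list all datasets and snapshots that were received on the new pool
zfs list -r -t all rpool2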
What do I do next? I get the impression I'm supposed to use
Bash:
bootadm install-bootloader
or
Bash:
installboot
to set up the bootloader and then mark it (or something else?) as active?

Do I have that right? Can anyone advise?
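For concreteness, here is the rough sequence I'm imagining. The boot environment name is just an example from my zfs list, the device name c2t0d0s0 is a placeholder for wherever the new SSD actually sits, and I'm not sure whether bootadm or installboot is the right tool here:

Bash:
# point the new pool at the boot environment it should boot from
zpool set bootfs=rpool2/ROOT/openindiana-2020:03:26 rpool2

# install the loader, either via bootadm against the pool...
bootadm install-bootloader -P rpool2

# ...or via installboot against the raw device (placeholder device name)
installboot -m -f /boot/pmbr /boot/gptzfsboot /dev/rdsk/c2t0d0s0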
 

gea

Well-Known Member
There are three independent issues

1. Uefi
Normally you just install the OS, e.g. OI or OmniOS, and use what is supported. OmniOS has a newer installer, so if OI is not working you may switch to OmniOS.

It may be possible to mix the OmniOS installer with an OI OS, but this is not a supported option. (You may install OmniOS and send over the OpenIndiana boot environment.)

Without special reason and deeper knowledge, just install the OS as is with its defaults.

2. Ashift
Current Illumos supports an ashift option when a vdev is created, and the OmniOS installer offers this as a setup option. In general this does not matter for rpool: normally you use the default ashift that the SSD manufacturer reports. Forcing ashift is mainly needed for data pools, for different reasons.

Without special reason and deeper knowledge, just install the OS as is with its defaults.
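If you want to see which ashift an existing pool actually ended up with, zdb shows it in the cached pool config (ashift 9 means 512-byte sectors, ashift 12 means 4K):

Bash:
# print the cached config of the imported pool and pick out the ashift line(s)
zdb -C rpool | grep ashift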

3. Moving the current OS to the new disk
The easiest option is to replicate the current active (or wanted) boot environment (e.g. rpool/ROOT/openindiana-2020:03:26) to the data pool, do a default setup of your OS on the new boot disk, and replicate the BE back to rpool/ROOT. Then activate this BE and reboot into the former OS state. This may even work between OI and OmniOS, but I have never tried it.

Without special reason and deeper knowledge, just install the OS as is with its defaults and restore the last boot environment.
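A rough sketch of that sequence, with names taken from this thread; the snapshot name @move, the intermediate dataset BEcopy and the restored BE name openindiana-restored are just examples:

Bash:
# on the old install: snapshot the wanted BE and replicate it to the data pool
zfs snapshot rpool/ROOT/openindiana-2020:03:26@move
zfs send rpool/ROOT/openindiana-2020:03:26@move | zfs recv rpool1/BEcopy

# after the default install on the new boot disk
# (import the data pool rpool1 first if it is not imported automatically):
# replicate the BE back under rpool/ROOT
zfs send rpool1/BEcopy@move | zfs recv rpool/ROOT/openindiana-restored

# activate the restored BE for the next reboot
# (mountpoint/canmount properties of the received dataset may need adjusting first)
beadm activate openindiana-restored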

Basically you have to decide whether you want to improve your IT knowledge or whether you are looking for an "it just works" solution.
 

jdrch

New Member
There are three independent issues

1. Uefi
Normally you just install the OS, e.g. OI or OmniOS, and use what is supported. OmniOS has a newer installer, so if OI is not working you may switch to OmniOS.
I need a DE, and the lack of documentation on how to set that up on OmniOS isn't encouraging. As such, I'm hoping to stick with OI for now. I can live with legacy boot; I just wanted to take the opportunity to migrate to something more modern if possible. If that's not possible, or the path is undocumented and unproven, then I'll just stick with legacy boot.

It may be possible to mix the OmniOS installer with an OI OS, but this is not a supported option. (You may install OmniOS and send over the OpenIndiana boot environment.)
Thanks for the information. Doesn't sound worth the trouble.

Without special reason and deeper knowledge, just install the OS as is with its defaults.
Sounds reasonable.

3. Moving the current OS to the new disk
Currently the source 32 GB SSD is disconnected from the PC as I troubleshoot the target 128 GB SSD.

Do you think the following would work:

  1. Fresh install of OI on new SSD
  2. Boot into OI on new SSD
  3. Connect old SSD to PC
  4. Import old SSD rpool as rpool2:
    Code:
    # zpool import -f rpool rpool2
  5. Use
    Code:
    zfs send
    to send the pre-existing recursive snapshot rpool2@upgrade of rpool2 to rpool:
    Code:
    # zfs send -R rpool2@upgrade | zfs recv -F rpool
  6. Reboot
Does that sound reasonable?
 

gea

Well-Known Member
Currently the source 32 GB SSD is disconnected from the PC as I troubleshoot the target 128 GB SSD.

Do you think the following would work:

  1. Fresh install of OI on new SSD
  2. Boot into OI on new SSD
  3. Connect old SSD to PC
  4. Import old SSD rpool as rpool2:
    Code:
    # zpool import -f rpool rpool2
  5. Use
    Code:
    zfs send
    to send the pre-existing recursive snapshot rpool2@upgrade of rpool2 to rpool:
    Code:
    # zfs send -R rpool2@upgrade | zfs recv -F rpool
  6. Reboot
Does that sound reasonable?
No.
You cannot use an active/running OS as the source or target of a copy or replication because of open files. Also, if you replicate a snap of rpool2 to rpool you end up with rpool/rpool2. This is why you need to replicate a snap of a boot environment, not rpool itself, and then activate it for the next reboot. (In the end, a boot environment under rpool/ROOT is the OS itself, i.e. the content of rpool apart from dump, swap and export.)
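In other words, something along these lines, reusing your existing recursive @upgrade snapshot and the BE name from your zfs list above. The target BE name openindiana-old is just an example, and you may have to import the old pool by its numeric id instead of its name if the import complains:

Bash:
# with the fresh install booted from the new SSD and the old SSD attached:
# import the old pool under a different name
zpool import -f rpool rpool2

# replicate only the wanted boot environment, not the whole pool
zfs send rpool2/ROOT/openindiana-2020:03:26@upgrade | zfs recv rpool/ROOT/openindiana-old

# activate the restored BE so the next reboot starts the former OS state
beadm activate openindiana-old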