I added an old laptop as a second node to my existing Proxmox homelab. The goal was to experiment and learn, and I wanted to take advantage of live migration. However, I can't migrate VMs between the nodes, or transfer them via backups, and I'm not sure why. Looking for some help.
The first node already had Proxmox installed with a handful of VMs. I installed Proxmox on the second node, created a cluster on the first node, and added the second node to it. This part was easy.
On the first node, there is a single root drive (local). There is also a ZFS VM pool with 2 SSDs in a RAID 0 configuration (r0ssd400gb).
On the second node, there is a ZFS pool with 2 SSDs in RAID 0 (local). I've also configured a ZFS directory as r0ssd500gb.
I would like the local storage (root) on Node 1 to be used only for the OS, backups, templates, etc., and the SSD arrays on Nodes 1 and 2 to be used only for VMs. In some cases, migrating a VM from the SSD array on Node 2 lands it on the root drive of Node 1 instead of Node 1's SSD array. I can transfer VMs created on Node 2 to Node 1, but I cannot transfer VMs created on Node 1 to Node 2. I've also noticed that r0ssd400gb is listed as not active on the second node.
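For reference, here is roughly what I believe the relevant entry in /etc/pve/storage.cfg looks like for the Node 1 SSD pool. This is reconstructed from the GUI, not copied from the file, so treat the exact lines as my best guess:

Code:
# /etc/pve/storage.cfg (approximate, not verbatim)
zfspool: r0ssd400gb-zfs
        pool r0ssd400gb
        content images,rootdir
        # no "nodes" line, so I assume the cluster expects this
        # pool to exist on every node

If anyone needs the actual file contents or the matching entries for local and r0ssd500gb, I'm happy to post them.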
LXC From Node 1 to Node 2:
Code:
Aug 16 21:47:16 starting migration of CT 252 to node 'pve2' (10.0.1.15)
Aug 16 21:47:16 found local volume 'r0ssd400gb-zfs:subvol-252-disk-1' (in current VM config)
send from @ to r0ssd400gb/subvol-252-disk-1@__migration__ estimated size is 1.07G
total estimated size is 1.07G
TIME SENT SNAPSHOT
cannot open 'r0ssd400gb/subvol-252-disk-1': dataset does not exist
cannot receive new filesystem stream: dataset does not exist
warning: cannot send 'r0ssd400gb/subvol-252-disk-1@__migration__': Broken pipe
Aug 16 21:47:16 ERROR: command 'set -o pipefail && zfs send -Rpv r0ssd400gb/subvol-252-disk-1@__migration__ | ssh root@10.0.1.15 zfs recv r0ssd400gb/subvol-252-disk-1' failed: exit code 1
Aug 16 21:47:16 aborting phase 1 - cleanup resources
Aug 16 21:47:16 ERROR: found stale volume copy 'r0ssd400gb-zfs:subvol-252-disk-1' on node 'pve2'
Aug 16 21:47:16 start final cleanup
Aug 16 21:47:16 ERROR: migration aborted (duration 00:00:00): command 'set -o pipefail && zfs send -Rpv r0ssd400gb/subvol-252-disk-1@__migration__ | ssh root@10.0.1.15 zfs recv r0ssd400gb/subvol-252-disk-1' failed: exit code 1
TASK ERROR: migration aborted
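From the "dataset does not exist" lines, my guess is that the receiving node has no pool named r0ssd400gb for zfs recv to write into. These are the standard ZFS/Proxmox commands I plan to run on both nodes to compare (nothing below is from my actual output):

Code:
# list ZFS pools and datasets, and Proxmox storage status, on each node
zpool list
zfs list
pvesm status

# if r0ssd400gb only exists on pve1, I believe restricting the storage
# to that node in /etc/pve/storage.cfg should at least clear the
# "not active" state on pve2:
#   zfspool: r0ssd400gb-zfs
#           pool r0ssd400gb
#           nodes pve1

Am I on the right track, or is there a better way to handle per-node pools with different names?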
Here are some photos (Imgur album).