> From ZoL 0.6.5 release notes:
> Makes you want 0.6.5, although they are now on 0.6.5.3.

We need to have a subforum just for PVE discussion.... I tried their community forum... and... and... and...
> From ZoL 0.6.5 release notes:
> Makes you want 0.6.5, although they are now on 0.6.5.3.

Proxmox 4 has 0.6.5:

Code:
root@proxmox2:~# dpkg -l | grep zfs
ii libzfs2 0.6.5-pve4~jessie amd64 Native ZFS filesystem library for Linux
ii zfs-doc 0.6.3-3~wheezy amd64 Native OpenZFS filesystem documentation and examples.
ii zfs-initramfs 0.6.5-pve4~jessie amd64 Native ZFS root filesystem capabilities for Linux
ii zfsutils 0.6.5-pve4~jessie amd64 command-line tools to manage ZFS filesystems
> Can you run "blkid | grep zfs_member"?
> libblkid is already enabled/used by default since the 0.6.4 release.
> If you need by-id ... you have to recompile and disable libblkid.
> And Proxmox 4 is at the ZoL 0.6.4 level.

Thanks, here's the output:

Code:
root@proxmox2:~# blkid |grep zfs_member
/dev/sda1: LABEL="datastore" UUID="9346498843565422045" UUID_SUB="5670058742997786451" TYPE="zfs_member" PARTLABEL="zfs" PARTUUID="368ca7f3-b1d6-4544-95f7-f7bc3ac2cdb1"
/dev/sdb1: LABEL="datastore" UUID="9346498843565422045" UUID_SUB="9370770679080368229" TYPE="zfs_member" PARTLABEL="zfs" PARTUUID="3b03f728-898d-6046-af41-32bf5135cea9"
/dev/sdc1: LABEL="datastore" UUID="9346498843565422045" UUID_SUB="9797235632533975055" TYPE="zfs_member" PARTLABEL="zfs" PARTUUID="0686c1e1-e8cf-7646-802c-912c43967566"
/dev/sdd1: LABEL="datastore" UUID="9346498843565422045" UUID_SUB="4048307120490512505" TYPE="zfs_member" PARTLABEL="zfs" PARTUUID="006e90d4-7e62-11e5-8bcf-feff819cdc9f"
/dev/sde1: LABEL="datastore" UUID="9346498843565422045" UUID_SUB="15222255864448643" TYPE="zfs_member" PARTLABEL="zfs" PARTUUID="15c33c6e-7e62-11e5-8bcf-feff819cdc9f"
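As a quick sanity check on that output: every zfs_member partition belonging to one pool should report the same UUID (the pool GUID) but a unique UUID_SUB (the per-vdev GUID). A small, hypothetical shell snippet over the pasted lines above (the grep patterns assume blkid's key="value" formatting):

```shell
# Sanity check: all zfs_member partitions of one pool share a single UUID
# (pool GUID), while each carries a distinct UUID_SUB (vdev GUID).
# The data is pasted from the `blkid | grep zfs_member` output above.
blkid_out='/dev/sda1: LABEL="datastore" UUID="9346498843565422045" UUID_SUB="5670058742997786451" TYPE="zfs_member" PARTLABEL="zfs" PARTUUID="368ca7f3-b1d6-4544-95f7-f7bc3ac2cdb1"
/dev/sdb1: LABEL="datastore" UUID="9346498843565422045" UUID_SUB="9370770679080368229" TYPE="zfs_member" PARTLABEL="zfs" PARTUUID="3b03f728-898d-6046-af41-32bf5135cea9"
/dev/sdc1: LABEL="datastore" UUID="9346498843565422045" UUID_SUB="9797235632533975055" TYPE="zfs_member" PARTLABEL="zfs" PARTUUID="0686c1e1-e8cf-7646-802c-912c43967566"
/dev/sdd1: LABEL="datastore" UUID="9346498843565422045" UUID_SUB="4048307120490512505" TYPE="zfs_member" PARTLABEL="zfs" PARTUUID="006e90d4-7e62-11e5-8bcf-feff819cdc9f"
/dev/sde1: LABEL="datastore" UUID="9346498843565422045" UUID_SUB="15222255864448643" TYPE="zfs_member" PARTLABEL="zfs" PARTUUID="15c33c6e-7e62-11e5-8bcf-feff819cdc9f"'

# Count distinct pool GUIDs and distinct vdev GUIDs.
pools=$(printf '%s\n' "$blkid_out" | grep -o 'UUID="[0-9]*"'     | sort -u | wc -l | tr -d ' ')
vdevs=$(printf '%s\n' "$blkid_out" | grep -o 'UUID_SUB="[0-9]*"' | sort -u | wc -l | tr -d ' ')
echo "pool GUIDs: $pools, vdev GUIDs: $vdevs"   # prints: pool GUIDs: 1, vdev GUIDs: 5
```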
> Thanks, here's the output.

Looks good. And, as stated above, my server is on ZoL 0.6.5, so that's a good thing.
> looks good

Sounds good. It's been working great since I set up the pool in 3.4; it's just always been strange to me that it wasn't retaining the by-id names after reboots.
You have to accept that ZoL now uses libblkid, and let libblkid handle the sdX labeling.
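For anyone who does want the by-id names back anyway, the usual approach with stock zpool commands is to export the pool and re-import it while pointing at the by-id device directory. A sketch, not a guaranteed fix on newer ZoL (which, as noted above, may revert to sdX names via libblkid); "datastore" is the pool name from this thread, and nothing should be using the pool while you do this:

```shell
# Export the pool, then re-import it scanning /dev/disk/by-id instead of /dev.
# "datastore" is the pool name from this thread; adjust to your own pool.
zpool export datastore
zpool import -d /dev/disk/by-id datastore

# The vdevs should now be listed under their by-id names.
zpool status datastore
```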
> I did a fresh install and my pool does not have /dev/disk/by-id (a bit annoying).
>
> Code:
> root@pve:~# zpool status
>   pool: rpool
>  state: ONLINE
>   scan: none requested
> config:
>
>         NAME        STATE     READ WRITE CKSUM
>         rpool       ONLINE       0     0     0
>           mirror-0  ONLINE       0     0     0
>             sdc2    ONLINE       0     0     0
>             sdd2    ONLINE       0     0     0
>           mirror-1  ONLINE       0     0     0
>             sde     ONLINE       0     0     0
>             sdf     ONLINE       0     0     0
>         logs
>           sda       ONLINE       0     0     0
>         cache
>           sdb       ONLINE       0     0     0
>
> errors: No known data errors
>
> Disk /dev/sdd: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
> Units: sectors of 1 * 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 4096 bytes
> I/O size (minimum/optimal): 4096 bytes / 4096 bytes
> Disklabel type: gpt
> Disk identifier: 3C5BDD2B-A15D-41AC-AFD0-CE937D140FA3
>
> Device          Start         End     Sectors  Size Type
> /dev/sdd1          34        2047        2014 1007K BIOS boot
> /dev/sdd2        2048  7814020749  7814018702  3.7T Solaris /usr & Apple ZFS
> /dev/sdd9  7814020750  7814037134       16385    8M Solaris reserved 1
>
> Partition 2 does not start on physical sector boundary.
> Partition 10 does not start on physical sector boundary.
>
> However, the installer did not use the whole disk on sdc and sdd, and partitioned space for grub_bios.... I have no idea what /dev/sdd9 is used for...
> And PVE 4.0 uses the 0.6.5.3 package... I guess the installer script did not import the pool by disk-id.

Yap, no more by-id: ZoL newer than 0.6.3 uses libblkid, which handles the sdX naming.
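On the /dev/sdd9 question: that small 8 MB "Solaris reserved 1" partition (partition 9) is something ZFS on Linux creates itself whenever it is handed a whole disk, so its presence here is expected rather than an installer quirk. A hedged, read-only way to inspect the layout (the /dev/sdd device name is taken from the fdisk output above; adjust to your system):

```shell
# Read-only inspection of the partition layout; no changes are made.
lsblk -o NAME,SIZE,TYPE,PARTTYPE /dev/sdd
sgdisk -p /dev/sdd   # sgdisk is provided by the gdisk package
```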
> Sounds good. It's been working great since I setup the pool in 3.4, it's just always been strange to me that it wasn't retaining the by-id's after reboots.

Blame ZoL for using libblkid, hahaha.
> we need to have a subforum just for PVE discussion.... I tried their community forum... and... and... and...

Check the ZoL mailing list (or forum)....
> we need to have a subforum just for PVE discussion.... I tried their community forum... and... and... and...

I was thinking, maybe making a hyperconverged forum. OpenStack, Proxmox, Nutanix and others... thoughts?
> I was thinking, maybe making a hyperconverged forum. OpenStack, Proxmox, Nutanix and others... thoughts?

I like that idea.
> I was thinking, maybe making a hyperconverged forum. OpenStack, Proxmox, Nutanix and others... thoughts?

If you want, just create a new sub-forum for un*x flavors.