ZFS not showing the correct amount?


whitey

Moderator
Jun 30, 2014
2,766
868
113
41
What storage platform is that? From the 3rd image it looks like you are using four 1.8T disks in a striped-mirror RAID-10, so I'd imagine it should show up as roughly 3.4-3.6T available. Is that at the top pool level, or is there another dataset/volume eating into the pool space and you're only looking at the mounted vmdata dataset/volume availability?
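For reference, pool-wide numbers and per-dataset availability can be compared with something like this (<poolname> is a placeholder); four ~1.8T disks in two striped mirrors should land around 3.6T usable at the pool level:

Code:
zpool list <poolname>          # whole-pool SIZE/ALLOC/FREE as the vdev layer sees it
zfs list -o space <poolname>   # per-dataset used/available breakdown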
 

Albert Yang

Member
Oct 26, 2017
72
1
8
30
Thanks for the replies. I have the Proxmox OS on 2 disks of 500 gigs in RAID 1, and 4 disks of 2TB each in RAID 10. This is my list:

Code:
root@prometheus:~# zfs list
NAME                   USED  AVAIL  REFER  MOUNTPOINT
rpool                 14.9G   435G    96K  /rpool
rpool/ROOT            6.35G   435G    96K  /rpool/ROOT
rpool/ROOT/pve-1      6.35G   435G  6.35G  /
rpool/data              96K   435G    96K  /rpool/data
rpool/swap            8.50G   437G  5.79G  -
vmdata                 618G  1.59T    96K  /vmdata
vmdata/vm-100-disk-1   713M  1.59T   713M  -
vmdata/vm-101-disk-1  24.5G  1.59T  24.5G  -
vmdata/vm-101-disk-2   498G  1.59T   498G  -
vmdata/vm-102-disk-3  42.5G  1.59T  42.5G  -
vmdata/vm-102-disk-4  5.66G  1.59T  5.66G  -
vmdata/vm-103-disk-1  14.6G  1.59T  14.6G  -
vmdata/vm-104-disk-1  16.2G  1.59T  16.2G  -
vmdata/vm-105-disk-1  16.1G  1.59T  16.1G  -
Thank you
 

Albert Yang

Member
Oct 26, 2017
72
1
8
30
Thanks for the reply. Isn't it off by default?

Code:
root@prometheus:~# zpool get all | grep autoreplace
rpool   autoreplace                 off                         default
vmdata  autoreplace                 off                         default
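For what it's worth, autoreplace only controls whether a new disk in the same slot gets swapped in automatically; autoexpand is the property that decides whether the pool grows onto larger disks. A quick way to check both, for example:

Code:
zpool get autoexpand,autoreplace vmdata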
 

_alex

Active Member
Jan 28, 2016
866
97
28
Bavaria / Germany
zpool status should show some details about the exact topology.
Maybe by accident you set up something other than a stripe over two mirrors.
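For instance (a sketch; output columns differ a bit between ZFS versions), topology and per-vdev sizes can be checked with:

Code:
zpool status vmdata    # which disks sit in which mirror
zpool list -v vmdata   # per-vdev SIZE, which would expose an undersized mirror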

edit: Sorry, missed the links to the screenshots
 

Albert Yang

Member
Oct 26, 2017
72
1
8
30
Thanks for the reply. How can I check exactly what I set up? I could swear I set it up correctly, as I ran this:
Code:
zpool create -f -o ashift=12 vmdata mirror /dev/sdc /dev/sdd mirror /dev/sde /dev/sdf
Then I exported the pool and imported it again using the disks by ID.
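That export/re-import step is typically something along these lines (a sketch, not necessarily the exact commands that were used):

Code:
zpool export vmdata
zpool import -d /dev/disk/by-id vmdata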

After that, one of the disks died and I resilvered it; maybe somewhere along the line it broke?
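A disk replacement with resilver usually looks roughly like this (device names here are placeholders); note that if the new disk is larger than the old one and autoexpand is off, the vdev keeps its old size:

Code:
zpool replace vmdata <old-or-failed-disk-id> <new-disk-id>
zpool status vmdata   # shows resilver progress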

Thank you
 

Albert Yang

Member
Oct 26, 2017
72
1
8
30
Thanks for the reply. No worries, I should have posted the code rather than the pictures. But if the zpool status is fine, how come it only shows around three disks' worth of total space when I should be getting around 3.2TB?



Code:
  pool: vmdata
 state: ONLINE
  scan: resilvered 7.68G in 0h22m with 0 errors on Sun Jan  7 12:03:31 2018
config:

        NAME                               STATE     READ WRITE CKSUM
        vmdata                             ONLINE       0     0     0
          mirror-0                         ONLINE       0     0     0
            ata-TOSHIBA_HDWD120_675JZ8LAS  ONLINE       0     0     0
            ata-TOSHIBA_HDWD120_672RYH6AS  ONLINE       0     0     0
          mirror-1                         ONLINE       0     0     0
            ata-TOSHIBA_HDWD120_672RW9ZAS  ONLINE       0     0     0
            ata-TOSHIBA_HDWD120_672SBE7AS  ONLINE       0     0     0
 

Albert Yang

Member
Oct 26, 2017
72
1
8
30
Thanks for the reply. I see something about expand? 1.36TB?

Code:
root@prometheus:~# zpool list
NAME     SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
rpool    464G  12.2G   452G         -     2%     2%  1.00x  ONLINE  -
vmdata  2.27T   621G  1.66T     1.36T    14%    26%  1.00x  ONLINE  -
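For what it's worth, those numbers would line up with one full-size mirror and one undersized one: roughly 1.81T (a 2TB mirror) plus about 464G (the size of a 500GB disk, same as rpool) gives about 2.27T, and growing the small mirror to 1.81T would add the missing ~1.36T, bringing SIZE to roughly 3.6T.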
 

Albert Yang

Member
Oct 26, 2017
72
1
8
30
Thanks for the reply. So it seems I need to turn on autoexpand, and it seems I have some expandable size:


Code:
root@prometheus:~# zpool get expandsize vmdata
NAME    PROPERTY    VALUE     SOURCE
vmdata  expandsize  1.36T     -
The part I'm somewhat confused about is where the tutorial says:

"To make it use that space you need to do zpool online -e for all replaced devices:
#zpool online -e storage wwn-0x5000c500654c1adc
#zpool online -e storage wwn-0x5000c500652efbc2"

Would I have to do this on all the disks that are in the vmdata pool? Something like this:

First

Code:
zpool set autoexpand=on vmdata
Then

Code:
zpool online -e vmdata ata-TOSHIBA_HDWD120_675JZ8LAS
zpool online -e vmdata ata-TOSHIBA_HDWD120_672RYH6AS
zpool online -e vmdata ata-TOSHIBA_HDWD120_672RW9ZAS
zpool online -e vmdata ata-TOSHIBA_HDWD120_672SBE7AS
Do I need to reboot after? Do I have to turn off the VMs?

Thank you
 

whitey

Moderator
Jun 30, 2014
2,766
868
113
41
I'm still scratching my head wondering how you got into this situation to begin with. In 10+ years of ZFS use I've never managed to get this far down the rabbit hole. :-D

GL w/ gettin' on, we're rooting for you!
 

_alex

Active Member
Jan 28, 2016
866
97
28
Bavaria / Germany
I think replacing smaller HDDs with larger ones while autoexpand=off can do this.

@Albert Yang you should be fine doing this online. A reboot will not be necessary, and running VMs IMHO shouldn't be affected, besides maybe in terms of performance.
But just in case, I'd suggest either moving the VMs to another storage/pool and/or making a good backup.
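If those online -e commands are run, the result can be verified afterwards with something like:

Code:
zpool list vmdata            # SIZE should grow by ~1.36T and EXPANDSZ drop to -
zpool get expandsize vmdata  # should now read back as -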
 

Albert Yang

Member
Oct 26, 2017
72
1
8
30
@whitey Well, this happened because originally I installed Proxmox on the 1.8TB disks while trying to figure out why the ZFS RAID 1 wasn't working. By the time I finally figured out the issue was the NIC, it was too late: I hadn't realized I had mixed up the 500 gig disks with the 1.8TB ones. It was a disaster, moving them back and forth until I got it right. Then, as @_alex said, he's right. So running these steps from above should be good?

Thank you