ZFS not showing the correct amount?

Discussion in 'Linux Admins, Storage and Virtualization' started by Albert Yang, Jan 9, 2018.

  1. Albert Yang

    Albert Yang New Member

    #1
  2. whitey

    whitey Moderator

    What storage platform is that? From the 3rd image it looks like you're using four 1.8T disks in a striped-mirror raid-10, so I'd imagine it should show up as roughly 2 x 1.8T, i.e. 3.4-3.6T avail, at the top pool level. Is that what the pool itself shows, or is another dataset/volume eating into the pool space and you're only looking at the mounted vmdata dataset/volume availability?
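
    A quick way to sanity-check how the space breaks down per vdev (something like this, from memory):
    Code:
    zpool list -v vmdata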
     
    #2
  3. ttabbal

    ttabbal Active Member

    Try "zfs list".
     
    #3
  4. Albert Yang

    Albert Yang New Member

    Thanks for the replies. The OS is Proxmox on 2 disks of 500 GB in RAID 1, and the VM storage is 4 disks of 2 TB each in RAID 10. This is my list:

    Code:
    root@prometheus:~# zfs list
    NAME                   USED  AVAIL  REFER  MOUNTPOINT
    rpool                 14.9G   435G    96K  /rpool
    rpool/ROOT            6.35G   435G    96K  /rpool/ROOT
    rpool/ROOT/pve-1      6.35G   435G  6.35G  /
    rpool/data              96K   435G    96K  /rpool/data
    rpool/swap            8.50G   437G  5.79G  -
    vmdata                 618G  1.59T    96K  /vmdata
    vmdata/vm-100-disk-1   713M  1.59T   713M  -
    vmdata/vm-101-disk-1  24.5G  1.59T  24.5G  -
    vmdata/vm-101-disk-2   498G  1.59T   498G  -
    vmdata/vm-102-disk-3  42.5G  1.59T  42.5G  -
    vmdata/vm-102-disk-4  5.66G  1.59T  5.66G  -
    vmdata/vm-103-disk-1  14.6G  1.59T  14.6G  -
    vmdata/vm-104-disk-1  16.2G  1.59T  16.2G  -
    vmdata/vm-105-disk-1  16.1G  1.59T  16.1G  -
    
    Thank you
     
    #4
  5. ttabbal

    ttabbal Active Member

    Weird.. is autoexpand turned on?
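
    Something like this should show it for both pools (from memory, so double-check the syntax):
    Code:
    zpool get autoexpand rpool vmdata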
     
    #5
  6. Albert Yang

    Albert Yang New Member

    Thanks for the reply. Isn't it off by default?

    Code:
    root@prometheus:~# zpool get all | grep autoreplace
    rpool   autoreplace                 off                         default
    vmdata  autoreplace                 off                         default
    
     
    #6
  7. _alex

    _alex Active Member

    zpool status should show some details about the exact topology.
    Maybe by accident you set up something other than a stripe over two mirrors.

    edit: Sorry, missed the links to the screenshots
     
    #7
  8. Albert Yang

    Albert Yang New Member

    Thanks for the reply. How can I check exactly what I set up? I could swear I set it up correctly, since I ran this:
    Code:
    zpool create -f -o ashift=12 vmdata mirror /dev/sdc /dev/sdd mirror /dev/sde /dev/sdf
    Then I exported the pool and imported it again using the disk by-id names.
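
    If I remember right, that step was roughly:
    Code:
    zpool export vmdata
    zpool import -d /dev/disk/by-id vmdata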

    After that, one of the disks died and I resilvered a replacement. Maybe it broke somewhere along the way?
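
    The replace itself was something along these lines (the disk IDs here are just placeholders, not the exact ones):
    Code:
    zpool replace vmdata <old-disk-id> <new-disk-id>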

    Thank you
     
    #8
  9. _alex

    _alex Active Member

    Joined:
    Jan 28, 2016
    Messages:
    834
    Likes Received:
    83
    Usually with zpool status, which was in one of the images and looks good.
    zpool list should give a summary of all pools.
     
    #9
  10. Albert Yang

    Albert Yang New Member

    Thanks for the reply. No worries, I should have posted the code rather than the pics. But if the zpool status is fine, how come it only shows about three disks' worth of space when I should be getting around 3.2 TB?



    Code:
      pool: vmdata
     state: ONLINE
      scan: resilvered 7.68G in 0h22m with 0 errors on Sun Jan  7 12:03:31 2018
    config:
    
            NAME                               STATE     READ WRITE CKSUM
            vmdata                             ONLINE       0     0     0
              mirror-0                         ONLINE       0     0     0
                ata-TOSHIBA_HDWD120_675JZ8LAS  ONLINE       0     0     0
                ata-TOSHIBA_HDWD120_672RYH6AS  ONLINE       0     0     0
              mirror-1                         ONLINE       0     0     0
                ata-TOSHIBA_HDWD120_672RW9ZAS  ONLINE       0     0     0
                ata-TOSHIBA_HDWD120_672SBE7AS  ONLINE       0     0     0
    
     
    #10
  11. _alex

    _alex Active Member

    Joined:
    Jan 28, 2016
    Messages:
    834
    Likes Received:
    83
    How about the capacity shown in 'zpool list' ?
     
    #11
  12. Albert Yang

    Albert Yang New Member

    Joined:
    Oct 26, 2017
    Messages:
    26
    Likes Received:
    0
    Thanks for the reply. I do see something about expand there? 1.36 TB?

    Code:
    root@prometheus:~# zpool list
    NAME     SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
    rpool    464G  12.2G   452G         -     2%     2%  1.00x  ONLINE  -
    vmdata  2.27T   621G  1.66T     1.36T    14%    26%  1.00x  ONLINE  -
    
     
    #12
  13. ttabbal

    ttabbal Active Member

    Joined:
    Mar 10, 2016
    Messages:
    626
    Likes Received:
    175
    I know it's not exactly the same situation, but have a look here...

    Growing ZFS pool

    Perhaps one of those commands will help.
     
    #13
  14. Albert Yang

    Albert Yang New Member

    Joined:
    Oct 26, 2017
    Messages:
    26
    Likes Received:
    0
    Thanks for the reply. So it seems I need to turn on autoexpand, and it looks like I do have some expandable size:


    Code:
    root@prometheus:~# zpool get expandsize vmdata
    NAME    PROPERTY    VALUE     SOURCE
    vmdata  expandsize  1.36T     -
    
    The part I'm somewhat confused about is where he describes this step in the tutorial.

    Would I have to do this on all the disks that are in the vmdata pool? Something like this:

    First

    Code:
    zpool set autoexpand=on vmdata
    Then

    Code:
    zpool online -e vmdata ata-TOSHIBA_HDWD120_675JZ8LAS
    zpool online -e vmdata ata-TOSHIBA_HDWD120_672RYH6AS
    zpool online -e vmdata ata-TOSHIBA_HDWD120_672RW9ZAS
    zpool online -e vmdata ata-TOSHIBA_HDWD120_672SBE7AS
    Do I need to reboot after? Do I have to turn off the VMs?

    Thank you
     
    #14
  15. whitey

    whitey Moderator

    Joined:
    Jun 30, 2014
    Messages:
    2,684
    Likes Received:
    821
    I'm still scratching my head wondering how you got in this situation to begin with. In 10+ years of ZFS use I've never managed to get this far down the rabbit hole. :-D

    GL w/ gettin' on, we're rooting for you!
     
    #15
  16. _alex

    _alex Active Member

    Joined:
    Jan 28, 2016
    Messages:
    834
    Likes Received:
    83
    I think replacing smaller HDDs with larger ones while autoexpand=off can do this.

    @Albert Yang you should be fine doing this online. A reboot will not be necessary, and running VMs imho shouldn't be affected beyond maybe some performance impact.
    But just in case, I'd suggest either moving the VMs to another storage/pool and/or making a good backup first.
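
    Once it's done, something like this should confirm it worked (SIZE should jump to roughly 3.6T and expandsize should drop back to none):
    Code:
    zpool get size,expandsize,autoexpand vmdata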
     
    #16
  17. Albert Yang

    Albert Yang New Member

    Joined:
    Oct 26, 2017
    Messages:
    26
    Likes Received:
    0
    @whitey well, this happened because I originally installed Proxmox on the 1.8 TB disks while trying to work out why the ZFS RAID 1 wasn't working. By the time I figured out the NIC was the issue, it was too late and I hadn't realized I'd mixed up the 500 GB disks with the 1.8 TB ones. It was a disaster, moving them back and forth until I got it right. And as @_alex said, he's right, so running the steps from above should be good?

    Thank you
     
    #17