ZFS No Available Space


Weighted Cube

New Member
Jan 18, 2015
So recently my ESXi drive which held my VMs died, and stupid me didn't have a backup, so I lost the OmniOS VM that was handling my ZFS pool. To get everything up again, I figured I would try out ZFSonLinux and just import my pool. Upon importing the pool, though, I found that I couldn't write anything to it, and napp-it was saying I had 0 available space when I definitely had some prior. zpool list says I have free space, but zfs list and df -h say I have 0. I've tried importing back into a new OmniOS VM and I'm getting the same issue. I've looked into quotas/reservations, but those don't seem to be the issue here. Any ideas would be greatly appreciated.

Code:
root@storage:/root# zpool list
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
rpool  39.8G  9.59G  30.2G         -    14%    24%  1.00x  ONLINE  -
tank   13.6T  13.4T   275G         -     9%    98%  1.00x  ONLINE  -
root@storage:/root# zfs list
NAME                            USED  AVAIL  REFER  MOUNTPOINT
rpool                          11.1G  27.4G    26K  /rpool
rpool/ROOT                     3.05G  27.4G    23K  legacy
rpool/ROOT/omnios              3.05G  27.4G  2.46G  /
rpool/ROOT/omnios-backup-1      206K  27.4G  1.78G  /
rpool/ROOT/omniosvar             23K  27.4G    23K  legacy
rpool/ROOT/pre_napp-it-16.11f   116K  27.4G  1.76G  /
rpool/dump                     6.00G  27.4G  6.00G  -
rpool/export                     46K  27.4G    23K  /export
rpool/export/home                23K  27.4G    23K  /export/home
rpool/swap                     2.06G  28.9G   550M  -
tank                           10.7T      0   192K  /tank
tank/media                     6.80T      0  6.80T  /tank/media
tank/share                     3.87T      0  3.87T  /tank/share
root@storage:/root# df -h
Filesystem         Size  Used Avail Use% Mounted on
rpool/ROOT/omnios   30G  2.5G   28G   9% /
swap               6.2G  304K  6.2G   1% /etc/svc/volatile
swap               6.2G  112K  6.2G   1% /tmp
swap               6.2G   56K  6.2G   1% /var/run
rpool/export        28G   23K   28G   1% /export
rpool/export/home   28G   23K   28G   1% /export/home
rpool               28G   26K   28G   1% /rpool
tank               192K  192K     0 100% /tank
tank/media         6.8T  6.8T     0 100% /tank/media
tank/share         3.9T  3.9T     0 100% /tank/share
 
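One plausible explanation for the discrepancy above (zpool list showing 275G FREE while zfs list shows 0 AVAIL) is OpenZFS "slop space": the filesystem holds back roughly 1/32 of pool capacity as a reserve (the exact fraction and cap are implementation-dependent), so zfs list AVAIL hits 0 before zpool list FREE does; on raidz, zpool list FREE also includes parity overhead. A rough sketch of the arithmetic with the numbers from the paste (the GiB conversion is my approximation, not from the post):

```shell
# Hedged slop-space arithmetic, not a definitive diagnosis: OpenZFS
# reserves about 1/32 of pool capacity, so writable space can be 0
# even while `zpool list` reports raw FREE space.
pool_size_g=13926   # ~13.6T SIZE from `zpool list tank`, in GiB
free_g=275          # FREE column from `zpool list tank`
slop_g=$((pool_size_g / 32))                        # ~1/32 held back
avail_g=$((free_g > slop_g ? free_g - slop_g : 0))  # what zfs can hand out
echo "slop reserve: ${slop_g}G, writable: ${avail_g}G"
```

With 275G of raw free space against a ~435G reserve, a writable figure of 0 is consistent with what zfs list and df -h report, which would make the pool effectively full even though zpool list still shows FREE.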

gea

Well-Known Member
Dec 31, 2010
DE
If a ZFS pool is completely full, you have these options:

- delete snapshots if you have any (see napp-it menu Snapshots)
- remove a reservation (napp-it sets a reservation by default, see menu ZFS Filesystems > pool level)
- truncate a large file in place, ex. see ZFS Cant rm: No space left on device - SurlyJake Blog

example: cat /dev/null > mybigfile or echo 1> ./some_file
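The truncate-in-place trick from the last point can be demonstrated on an ordinary file (mybigfile in the post is a placeholder; a temp file stands in for it here). Redirecting /dev/null into a file releases its blocks without an rm, which matters because deleting a file on a 100%-full copy-on-write pool can itself fail for lack of space:

```shell
# Sketch of the truncate-in-place trick on a regular file.
bigfile=$(mktemp)
head -c 1048576 /dev/zero > "$bigfile"   # create a 1 MiB file
cat /dev/null > "$bigfile"               # truncate it to zero length
size=$(wc -c < "$bigfile")
echo "size after truncate: $size bytes"
rm -f "$bigfile"
```

Note that on ZFS this only frees space if no snapshot still references the file's old blocks, which is why deleting snapshots is listed first.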