How much ZFS is in Qnap ZFS ??

gea

Well-Known Member
Dec 31, 2010
3,478
1,363
113
DE
Short answer: not too much,
and what little there is, is different.


On my way from (Open)Solaris and its free forks to Open-ZFS on BSD and then Linux, I have accepted that every step reduces the ease and stability of a Sun Solaris: one ZFS version on one OS, managed with two commands, with OS/ZFS kernel-integrated iSCSI, NFS and SMB. Instead you get a bunch of distributions, each with a different Linux, a different ZFS or SAMBA release, different setup and update methods, and different bug states. Worst of all are the special ZFS APIs and databases that replace the simple and always up-to-date zfs and zpool commands.
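As a reminder, that two-command model looked like this (a minimal sketch; pool, disk and dataset names are illustrative, and sharesmb here means the Solaris kernel SMB server):

Code:
# one command for pools, one for filesystems, nothing else needed
zpool create tank raidz1 c0t1d0 c0t2d0 c0t3d0   # pool + raid in one step
zfs create tank/data                            # filesystem: no mkfs, no fstab
zfs set sharesmb=on tank/data                   # SMB share via the kernel server
zfs snapshot tank/data@before_update            # instant snapshot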

And now Qnap
When I tried to check whether a Qnap ZFS NAS can be a managed member of a napp-it cs ZFS server farm among other ZFS servers on Linux, Unix or Windows, I did not expect problems, as I thought Qnap is simply another Linux + Open-ZFS + web GUI appliance. It is not.

Setup of napp-it_cs on Qnap was easy. After installing Perl from the Qnap repo it worked, but only to show some ZFS stats. ZFS manipulation was not possible, nor were disks shown, as Qnap lacks lsblk. With the help of p., who has a spare Qnap, I tried to find out which Open-ZFS version is used (for compatibility). A zpool version only shows 5000 instead of, for example, Open-ZFS 2.2.x as expected. It seems Qnap wants to hide the ZFS release.
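For reference, this is how one would normally identify an Open-ZFS release (a sketch with stock Open-ZFS commands; on Qnap they return less than expected):

Code:
zpool version             # Open-ZFS 2.x prints userland + kernel module version
zpool get version zpool1  # "-" on any feature-flag pool
zpool upgrade -v          # lists every feature this platform supports
# pool version 5000 just means "feature flags era" and says nothing
# about which Open-ZFS release is actually running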

A zpool get all shows that it must be a quite old Open-ZFS without any of the newer ZFS features like zstd, allocation_classes, checkpoints, device_removal, raidz_expansion or draid, which are needed for compatibility (pool move, replication) with other Open-ZFS systems.
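A quick compatibility check is to list the feature flags the pool exposes and compare them with a stock Open-ZFS box (a sketch; zpool1 is the pool from the listing below, and the grep pattern covers the features named above):

Code:
zpool get all zpool1 | grep feature@     # what the Qnap pool knows
# on a stock Open-ZFS system, for comparison:
zpool upgrade -v | grep -E 'zstd|allocation_classes|checkpoint|device_removal|raidz_expansion|draid'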

My next thought was that this cannot be, as I remembered that Qnap proudly announced Raid-Z expansion a year ago, long before it became stable in Open-ZFS 2.2.3. I also remember that Qnap calls its Raid on ZFS Raid 5/6. At first I thought this was an error of someone translating the docs, but I checked again and it seems Qnap is indeed using ZFS on top of a Raid 5/6 layer, with a huge number of unique global and private ZFS properties. A pool move from/to Qnap is probably impossible, and you lose auto repair on checksum errors and safe atomic writes, so in the end it is just like ZFS on top of hardware raid.

In the Qnap docs I found: QNAP Flexible Storage Architecture | QuTS hero 5.0.x

This is in no way a ZFS layout; it is more like a modified mdadm setup with some extras. It is now clear why Qnap calls its feature raidz_expand while the Open-ZFS feature is named raidz_expansion: it is not the same feature but a transformation of a Raid 5/6 array. This repeats with many other Qnap-only features, while all of the newer Open-ZFS features are missing.
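For comparison, genuine Raid-Z expansion on upstream Open-ZFS is a single attach to a raidz vdev (a sketch; pool and device names are illustrative):

Code:
zpool attach tank raidz1-0 sdd           # reshape raidz1-0 from 3 to 4 disks
zpool status tank                        # shows the expansion progress
zpool get feature@raidz_expansion tank   # the upstream feature name
# Qnap's feature@raidz_expand (see the listing below) is a different
# mechanism that happens to carry a similar name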

I will not disqualify this outright. Synology did the same with btrfs on top of Linux mdadm raid, which therefore lacks modern filesystem features like self-healing or safe atomic writes via Copy on Write, as these require btrfs/ZFS raid instead of old-style Raid 1/5/6. Such a design eases the switch from an older ext4 GUI concept to btrfs or ZFS.

The question remains: who wants these half-baked solutions without the newer core ZFS advantages?
ZFS is printed on the box in huge letters, and you may expect the real ZFS to be inside.

The exact extent of the differences from the Open-ZFS philosophy and many internal technical details are not published, but is this ZFS?
Maybe in name only. It may be quite good, easy to manage and quite safe (but not as safe as regular ZFS configs).

Some stats, compare with yours!

Code:
  scan: none requested
prune: never
expand: none requested
config:

        NAME                                        STATE     READ WRITE CKSUM
        zpool1                                      ONLINE       0     0     0
          raidz1-0                                  ONLINE       0     0     0
            qzfs/enc_0/disk_0x1_5000C500E5D7D6AC_3  ONLINE       0     0     0
            qzfs/enc_0/disk_0x2_5000C500E5D7D701_3  ONLINE       0     0     0
            qzfs/enc_0/disk_0x3_5000C500E5CB2C85_3  ONLINE       0     0     0

errors: No known data errors



[admin@NAS66BC3A tmp]# zpool get all
NAME    PROPERTY                         VALUE                            SOURCE
zpool1  size                             54.4T                            -
zpool1  qsize                            38892364682836                   -
zpool1  orig_qsize                       39196211281920                   -
zpool1  capacity                         0%                               -
zpool1  altroot                          -                                default
zpool1  health                           ONLINE                           -
zpool1  guid                             1560259431169208497              default
zpool1  qguid                            LN3VG3CX                         default
zpool1  version                          -                                default
zpool1  bootfs                           -                                default
zpool1  delegation                       on                               default
zpool1  autoreplace                      off                              default
zpool1  cachefile                        -                                default
zpool1  failmode                         continue                         default
zpool1  listsnapshots                    on                               local
zpool1  autoexpand                       on                               local
zpool1  globalcache                      off                              default
zpool1  globalcache_notuser              off                              default
zpool1  l2rebuild                        off                              default
zpool1  dedupditto                       0                                default
zpool1  dedupratio                       1.02x                            -
zpool1  free                             54.4T                            -
zpool1  allocated                        318M                             -
zpool1  dspace                           35.4T                            -
zpool1  dedup-saving                     1204224                          -
zpool1  max_poolop                       60                               -
zpool1  readonly                         off                              -
zpool1  comment                          -                                default
zpool1  expandsize                       0                                -
zpool1  freeing                          0                                default
zpool1  qthresh                          80%                              local
zpool1  qthreshsize                      31006227903283                   local
zpool1  overqthresh                      no                               local
zpool1  qthreshavail                     7751556975820                    local
zpool1  qsnap                            7751556975820                    local
zpool1  usedbysnapshot                   37654336                         -
zpool1  owner                            0                                default
zpool1  upsecs                           0                                default
zpool1  upsecsupdate                     0                                default
zpool1  throttle                         on                               default
zpool1  raidz_layout                     layout_reorder                   default
zpool1  raidzshift                       24                               default
zpool1  qos_enable                       0                                default
zpool1  qos_max_4kbase                   0                                default
zpool1  qos_weight                       0                                default
zpool1  qos_throttle                     off                              default
zpool1  qos_reserved                     0                                default
zpool1  smartddt_loadratio               0                                default
zpool1  smartddt_txgdirty                0                                default
zpool1  smartddt_txgdirty_low            0                                default
zpool1  smartddt_times                   0                                default
zpool1  smartddt_entrydrops              0                                default
zpool1  resilver_ratio                   50                               local
zpool1  scrub_ratio                      50                               local
zpool1  smartddt_state                   0                                -
zpool1  smartddt                         on                               default
zpool1  l2cache_ioalign                  off                              default
zpool1  indirectlayout                   on                               local
zpool1  shadowminshift                   12                               default
zpool1  shadowashift                     12                               default
zpool1  shadowblockshift                 18                               default
zpool1  zib_size                         34730608794010                   -
zpool1  orig_zib_size                    38606387281920                   -
zpool1  zib_free                         34730553711002                   -
zpool1  zib_allocated                    55083008                         -
zpool1  zib_metasize                     1511828488192                    -
zpool1  zib_metafree                     1511809802240                    -
zpool1  zib_worstamp                     2                                -
zpool1  pool_overprovision               10                               local
zpool1  compdedup_count                  0                                -
zpool1  compdedup_maxcount               8388608                          default
zpool1  compdedup_minpshift              9                                default
zpool1  zib_falloc_size                  1048576                          -
zpool1  zib_falloc_txg                   4                                -
zpool1  ssd_overprovision                0                                default
zpool1  qlog_policy                      legacy                           default
zpool1  prune_goal                       10000000                         default
zpool1  ddt_entry_limit                  20000000                         default
zpool1  prune_slack_txg                  32                               default
zpool1  async_write_min_active           0                                default
zpool1  logvolume                        -                                default
zpool1  async_write_max_active           16                               local
zpool1  async_read_min_active            0                                default
zpool1  async_read_max_active            0                                default
zpool1  sync_write_min_active            0                                default
zpool1  sync_write_max_active            0                                default
zpool1  sync_read_min_active             0                                default
zpool1  sync_read_max_active             0                                default
zpool1  prune_goal_by_ram                on                               default
zpool1  prune_goal_deduced               18500000                         -
zpool1  deadlist metadspace              272800                           default
zpool1  deadlist datadspace              0                                default
zpool1  raidzshift_i                     24                               local
zpool1  aggrprefetch                     off                              default
zpool1  aggrprefetch_maxinit_sz          67108864                         default
zpool1  nomal_class_metadspace           1006901395456                    default
zpool1  nomal_class_datadspace           39818373365760                   default
zpool1  ddt_freq_on_disk                 off                              -
zpool1  ddt_prune_percentage             10                               -
zpool1  ddt_dec_freq_sec                 1209600                          -
zpool1  ddt_prune_min_time_ms            50                               -
zpool1  ssd_life_type                    off                              default
zpool1  shadow_refmap_shift              36                               default
zpool1  shadow_refmap_reserve_shift      28                               default
zpool1  ssdop_size                       0                                -
zpool1  reserved_size                    151397597184                     -
zpool1  asynccow                         on                               local
zpool1  resilver_pause                   off                              default
zpool1  scan_ignore_error                off                              default
zpool1  tag                              QNAPxxxxxxxxx                    local
zpool1  qsal_retention_data              0                                -
zpool1  qsal_retention_freed             0                                -
zpool1  qsal_retention_spill             0                                -
zpool1  vdev_aggregation_limit           0                                default
zpool1  zib_disable_ref_negative_verify  0                                default
zpool1  migrate_error_handle             continue                         default
zpool1  migrate_ratio                    0                                default
zpool1  spacelow_thresh                  8%                               default
zpool1  spacelow_threshsize              3100622790328                    local
zpool1  spacelow_overthresh              no                               local
zpool1  spacelow_threshavail             17179869184                      local
zpool1  spacelow_threshbyte              17179869184                      default
zpool1  feature@async_destroy            enabled                          local
zpool1  feature@empty_bpobj              active                           local
zpool1  feature@lz4_compress             active                           local
zpool1  feature@encryption               enabled                          local
zpool1  feature@raidz_layout             enabled                          local
zpool1  feature@raidz_shift              enabled                          local
zpool1  feature@extensible_dataset       enabled                          local
zpool1  feature@meg_blocksize            enabled                          local
zpool1  feature@sha512                   enabled                          local
zpool1  feature@skein                    enabled                          local
zpool1  feature@edonr                    enabled                          local
zpool1  feature@indirect_layout          active                           local
zpool1  feature@zib_async_destroy        enabled                          local
zpool1  feature@clog                     enabled                          local
zpool1  feature@ddt_prune                enabled                          local
zpool1  feature@raidz_shift_i            active                           local
zpool1  feature@deadlistv2               active                           local
zpool1  feature@zibddt_prune             enabled                          local
zpool1  feature@ssd_lifeop               active                           local
zpool1  feature@indirect_layout_bp_fill  active                           local
zpool1  feature@asynccow                 active                           local
zpool1  feature@large_dir                enabled                          local
zpool1  feature@raidz_expand             enabled                          local


zfs get all zpool1/zfs1
NAME         PROPERTY                       VALUE                                SOURCE
zpool1/zfs1  type                           filesystem                           -
zpool1/zfs1  creation                       Wed Jul 31 22:17 2024                -
zpool1/zfs1  used                           1.03G                                -
zpool1/zfs1  available                      989M                                 -
zpool1/zfs1  referenced                     35.2M                                -
zpool1/zfs1  compressratio                  1.81x                                -
zpool1/zfs1  mounted                        yes                                  -
zpool1/zfs1  quota                          none                                 default
zpool1/zfs1  reservation                    none                                 default
zpool1/zfs1  recordsize                     128K                                 local
zpool1/zfs1  mountpoint                     /share/ZFS1_DATA                     local
zpool1/zfs1  sharenfs                       off                                  default
zpool1/zfs1  checksum                       on                                   default
zpool1/zfs1  compression                    on                                   local
zpool1/zfs1  atime                          rel                                  default
zpool1/zfs1  devices                        on                                   default
zpool1/zfs1  exec                           on                                   default
zpool1/zfs1  setuid                         on                                   default
zpool1/zfs1  readonly                       off                                  default
zpool1/zfs1  zoned                          off                                  default
zpool1/zfs1  snapdir                        visible                              local
zpool1/zfs1  aclmode                        passthrough                          local
zpool1/zfs1  aclinherit                     passthrough                          local
zpool1/zfs1  canmount                       on                                   local
zpool1/zfs1  xattr                          on                                   default
zpool1/zfs1  copies                         1                                    default
zpool1/zfs1  version                        5                                    -
zpool1/zfs1  utf8only                       off                                  -
zpool1/zfs1  normalization                  none                                 -
zpool1/zfs1  casesensitivity                mixed                                -
zpool1/zfs1  vscan                          off                                  default
zpool1/zfs1  nbmand                         off                                  default
zpool1/zfs1  sharesmb                       off                                  default
zpool1/zfs1  refquota                       1G                                   local
zpool1/zfs1  refreservation                 1G                                   local
zpool1/zfs1  primarycache                   all                                  default
zpool1/zfs1  secondarycache                 none                                 local
zpool1/zfs1  usedbysnapshots                219K                                 -
zpool1/zfs1  usedbydataset                  35.2M                                -
zpool1/zfs1  usedbychildren                 0                                    -
zpool1/zfs1  usedbyrefreservation           1024M                                -
zpool1/zfs1  logbias                        latency                              default
zpool1/zfs1  dedup                          off                                  local
zpool1/zfs1  mlslabel                                                            -
zpool1/zfs1  sync                           standard                             local
zpool1/zfs1  refcompressratio               1.81x                                -
zpool1/zfs1  written                        82.7K                                -
zpool1/zfs1  encryption                     off                                  -
zpool1/zfs1  keysource                      none                                 default
zpool1/zfs1  keystatus                      none                                 local
zpool1/zfs1  physicalused                   34.7M                                -
zpool1/zfs1  physicalreferenced             34.6M                                -
zpool1/zfs1  logicalused                    63.1M                                -
zpool1/zfs1  logicalreferenced              62.9M                                -
zpool1/zfs1  txg                            0                                    -
zpool1/zfs1  qthresh                        50%                                  local
zpool1/zfs1  qthreshsize                    536870912                            -
zpool1/zfs1  overqthresh                    no                                   -
zpool1/zfs1  wormtype                       off                                  default
zpool1/zfs1  wormtrigger                    admin                                default
zpool1/zfs1  wormretention                  0y0m1d                               default
zpool1/zfs1  defaultuserquota               none                                 default
zpool1/zfs1  defaultgroupquota              none                                 default
zpool1/zfs1  datatrace                      off                                  default
zpool1/zfs1  wormguid                       8602461902382197696                  -
zpool1/zfs1  wormdiff                       0                                    default
zpool1/zfs1  snapcount                      2                                    local
zpool1/zfs1  redundant_metadata             most                                 default
zpool1/zfs1  fastcopy                       on                                   local
zpool1/zfs1  dedup_activated                off                                  -
zpool1/zfs1  cachewritten                   auto                                 default
zpool1/zfs1  qos_enable                     0                                    default
zpool1/zfs1  qos_max                        0                                    default
zpool1/zfs1  qos_min                        0                                    default
zpool1/zfs1  qos_weight                     0                                    default
zpool1/zfs1  qos_priority                   0                                    default
zpool1/zfs1  qos_burst                      0                                    default
zpool1/zfs1  qos_burst_time                 0                                    default
zpool1/zfs1  qos_allocated                  0                                    -
zpool1/zfs1  wormprivilege                  off                                  default
zpool1/zfs1  wormlog                        no                                   default
zpool1/zfs1  acltype                        richacl                              default
zpool1/zfs1  wormcommit                     none                                 default
zpool1/zfs1  aggrprefetch                   on                                   default
zpool1/zfs1  archives_counting              off                                  default
zpool1/zfs1  thin_shared_byte               32K                                  -
zpool1/zfs1  compact_byte                   0                                    -
zpool1/zfs1  snap_refreservation            0                                    default
zpool1/zfs1  usedbysnaprsrv                 0                                    -
zpool1/zfs1  overwrite_reservation          35.2M                                -
zpool1/zfs1  large_dir                      on                                   local
zpool1/zfs1  large_dir_threshold            9.77K                                default
zpool1/zfs1  clonescount                    0                                    local
zpool1/zfs1  qnap:zfs_volume_name           System                               local
zpool1/zfs1  qnap:zfs_flag                  257                                  local
zpool1/zfs1  qnap:zfs_threshold             50                                   local
zpool1/zfs1  qnap:zfs_guid                  1560259431169208497-1722457073-zfs1  local
zpool1/zfs1  qnap:zfs_edge_cache_type       0                                    local
zpool1/zfs1  qnap:share_path                System                               local
zpool1/zfs1  qnap:snapshot_reserved_bytes   0                                    inherited from zpool1
zpool1/zfs1  qnap:pool_flag                 1                                    inherited from zpool1
zpool1/zfs1  qnap:snapshot_reserved         20                                   inherited from zpool1
zpool1/zfs1  qnap:snapshot_reserved_enable  2                                    inherited from zpool1
 
  • Like
Reactions: TRACKER and pimposh

pimposh

hardware pimp
Nov 19, 2022
391
226
43
What could be disqualifying is the lack of compatibility between Q's so-called ZFS fork and OpenZFS-based systems. In the event of a QNAP hardware failure, it's a no-go to move disks and mount them on another piece of hardware.

While back-ups are always essential, a simple way to physically move a set of drives around should be a no-brainer.
Plus, the lack of support for special vdevs despite the hardware flexibility is also disappointing, especially in all these so-called enterprise solutions.
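For illustration, this is what such a disk move looks like between stock Open-ZFS systems (a sketch; the pool name is illustrative):

Code:
zpool export tank                 # on the old box, if it still runs
# move the disks, then on the replacement box:
zpool import                      # scans all disks, lists importable pools
zpool import tank                 # refused if the pool uses on-disk features
                                  # this Open-ZFS build does not know
zpool import -o readonly=on tank  # can still work if only read-only-compatible
                                  # features are active; with a vendor fork
                                  # there is no such guarantee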
 

gea

Well-Known Member
Dec 31, 2010
3,478
1,363
113
DE
When ZFS was developed by Sun on Solaris, it was meant to handle all possible causes of data loss besides hardware problems (lack of ECC RAM is a hardware problem), software bugs or human errors. This approach is not in Qnap ZFS, which is more ZINO (ZFS in name only) than ZFS, the real experience.
 
  • Haha
Reactions: mrpasc

gea

Well-Known Member
Dec 31, 2010
3,478
1,363
113
DE
I found the following from a Qnap support member:
https://www.reddit.com/r/qnap/comments/13yi3d5
It states clearly that the Qnap expand feature is not the Open-ZFS Raid-Z expansion but a Raid 5/6 expand.
Part of the unclear situation is/was that the terms Raid-5 and Raid-Z are often mixed up, while they technically describe completely different methods with different levels of data security and failure behaviour. They only share the property that one disk may fail.
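The core technical difference is the Raid-5 write hole, which Raid-Z avoids by never updating a live stripe. A toy sketch of the parity math in shell arithmetic (all values made up):

Code:
# toy Raid-5 stripe: three data chunks plus XOR parity
d0=0x11; d1=0x22; d2=0x33
p=$(( d0 ^ d1 ^ d2 ))            # parity covers the whole stripe
new=0x44                         # partial-stripe update of d1:
p=$(( p ^ d1 ^ new ))            # read old data + old parity first,
d1=$new                          # then write data AND parity back
echo $(( (d0 ^ d1 ^ d2) == p ))  # 1 = consistent again, BUT a crash between
                                 # the two writes leaves stale parity on disk
                                 # (the write hole); Raid-Z writes a fresh
                                 # stripe and commits it atomically instead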
 
  • Wow
Reactions: pimposh