ZFS Pool shows much less disk space - disproportionate to physical disk space

Discussion in 'Solaris, Nexenta, OpenIndiana, and napp-it' started by new2VM, Mar 19, 2020.

  1. new2VM

    new2VM New Member

    Joined:
    Oct 13, 2013
    Messages:
    15
    Likes Received:
    0
    I have built a NAS server using the following:

    • HPE Gen 10 Proliant Microserver,
    • VMware ESXi 6.7 update 3
    • Napp-it ZFS appliance with OmniOS (SunOS napp-it030 5.11 omnios-r151030-1b80ce3d31 i86pc i386 i86pc OmniOS v11 r151030j)
    I have 2x 4TB hard disks in the ZFS pool, configured as a mirror, so I expect close to 4TB (3.xx TB) of disk space to be available for use. Instead I am getting only 1.6TB. I have not made any configuration changes at all - I just deployed the Napp-it appliance as it comes. Can anyone please help clarify what the issue is? The attached image is a screenshot showing the pool and the pool status.
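
    For reference, the sizes in the screenshot can also be read from the OmniOS shell inside the Napp-it VM; a quick sketch ("tank" stands in for the real pool name):

        zpool list            # the SIZE column shows what ZFS sees for the whole pool
        zpool status tank     # shows the mirror vdev and its two member disks
        echo | format         # lists the disks OmniOS detects, with model and capacity, then exits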

    Thanks for your help

    gls8k.png
     
    #1
  2. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    2,366
    Likes Received:
    793
    This is expected behaviour only when 4TB disks sit on a controller that supports a maximum of 2TB per disk. In that case only 2TB is seen by the OS.

    Are the disks on an HBA in pass-through mode (the suggested method), or are they presented to the VM by ESXi in some other way?
     
    #2
  3. Spartacus

    Spartacus Active Member

    Joined:
    May 27, 2019
    Messages:
    387
    Likes Received:
    124
    Pretty sure that model has either an HPE Smart Array E208i-p SR Gen10 controller or a Marvell 88SE9230 PCIe-to-SATA 6Gb/s controller (assuming nothing was added in).

    Either of those is new enough that it shouldn't be plagued by the 2TB limit, and I know those units support at least 4TB drives natively.
     
    #3
  4. new2VM

    new2VM New Member

    Joined:
    Oct 13, 2013
    Messages:
    15
    Likes Received:
    0

    Hi Gea,

    I have not configured the disks for HBA pass-through in the BIOS or otherwise. I was not able to configure the disks as pass-through in ESXi - the option was grayed out or pass-through was reported as not supported. Instead, following a post I came across a few years back, I configured the disks as RDMs and assigned them to the guest (Napp-it).
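
    For reference, local SATA disks are typically mapped as RDMs from the ESXi shell with vmkfstools; a rough sketch with placeholder device and datastore paths (not necessarily the exact steps from that post):

        ls -l /vmfs/devices/disks                                                                      # find the t10.ATA_... identifier of each disk
        vmkfstools -z /vmfs/devices/disks/DeviceName /vmfs/volumes/datastore1/napp-it/disk1-rdm.vmdk   # physical compatibility RDM
        # use -r instead of -z to create a virtual compatibility RDM instead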

    Besides, since Napp-it reports the disk capacity as 4TB (please see the picture above), doesn't that mean the 2TB limit is not the issue here? Am I missing anything?

    Thanks for your response.
     
    #4
  5. new2VM

    new2VM New Member

    Joined:
    Oct 13, 2013
    Messages:
    15
    Likes Received:
    0
    Hello Spartacus,

    Yes, I think you are right.

    HPE does support 4TB disks, as per the HPE Gen 10 specification. I also checked the HPE Smart Array controller specification - there is no mention of any such limit.

    thanks
     
    #5
  6. new2VM

    new2VM New Member

    Joined:
    Oct 13, 2013
    Messages:
    15
    Likes Received:
    0
    Hi Gea,

    I am also attaching a screenshot from the ESXi client page that shows the guest (Napp-it) configuration. I have circled the storage: Hard disk 1 and Hard disk 2 show 4TB + 4TB, but ZFS shows only 1.4TB.

    thanks



    Gen10_NAS.png
     
    #6
    Last edited: Mar 22, 2020
  7. Spartacus

    Spartacus Active Member

    Joined:
    May 27, 2019
    Messages:
    387
    Likes Received:
    124
    I did find this while googling around, per one of the comments in this guide: How to passthrough SATA drives directly on VMWare ESXI 6.5 as RDMs

    Maybe try that?
     
    #7
  8. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    2,366
    Likes Received:
    793
    Raw Device Mapping (RDM) for SATA disks is not a supported method in ESXi. If it works, fine, but it may not. If you run into problems you have to fall back to a method supported by VMware.

    1. HBA passthrough
    The typical All-in-One (AiO) config is a small SATA SSD to put ESXi and the storage VM onto. Then you need a second controller in pass-through mode. The gold standard for such a controller is an HBA with an LSI 2008, 2308 or 3008 chipset with IT or IR firmware. You can get them used for cheap.

    2. RDM disk passthrough
    This is supported when you use a SAS HBA (the same models as you would use for controller pass-through).

    Summary
    ZFS shows 4 TB because that is what the disk reports. You can only use 2 TB, because that is what is possible with the unsupported SATA RDM in your case.
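
    To see what ESXi itself reports for the disks, something like this from the host shell (a sketch; exact fields can vary between ESXi builds):

        esxcli storage core device list      # one block per device, including a "Size:" line (in MB) and an "Is RDM Capable" flag
        esxcli storage core adapter list     # lists the storage adapters (vmhba#) and the drivers behind them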
     
    #8
  9. new2VM

    new2VM New Member

    Joined:
    Oct 13, 2013
    Messages:
    15
    Likes Received:
    0

    Hello Spartacus,

    Thanks for the inputs. I have already done most of these steps. I also checked the Advanced settings, and RdmFilter.HbaIsShared is already set to TRUE. A couple of differences are:
    • I am using *rdm* in the name - although I did not get any error, and
    • Compatibility mode is Physical instead of Virtual
    I will try to change this - but I have close to a TB of data already and need to back it up before I make any changes... I do not know whether this change is destructive and would lose the data on the disks.

    I will update once I make these two changes.
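
    One way to double-check whether an existing RDM mapping file is physical or virtual is to query it from the ESXi shell; a sketch with a placeholder vmdk path:

        vmkfstools -q /vmfs/volumes/datastore1/napp-it/disk1-rdm.vmdk    # reports the disk type and, for an RDM, the mapped raw device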

    Gen10_NAS_disk_details.png
     
    #9
  10. new2VM

    new2VM New Member

    Joined:
    Oct 13, 2013
    Messages:
    15
    Likes Received:
    0
    Hello Gea,

    Thank you for your inputs. I looked into the controller hardware on eBay - they are indeed cheap. If the config changes I am making do not help, I will buy one and try it. Thanks for your help.
     
    #10
  11. new2VM

    new2VM New Member

    Joined:
    Oct 13, 2013
    Messages:
    15
    Likes Received:
    0
    Hello Gea, Spartacus,

    I just noticed the output of ls -l /vmfs/devices/disks on the host console. The screenshot is below. You can see there are two lines for each of the 4TB hard disks. What does the WD2DWCC7K1YPE7LN:1 line mean?

    Gen10_NAS_disk_details2.png
     
    #11
  12. Spartacus

    Spartacus Active Member

    Joined:
    May 27, 2019
    Messages:
    387
    Likes Received:
    124
    Pretty sure that's just part of the drive identity/name derived from the model number (it starts with t10.).
    The partitions on a given drive carry the :1, :2, :3 etc. suffix, so it looks like only about 45% of your 4TB drive is allocated to that partition.
    This is outside my knowledge space, so I have no further recommendations, sorry!
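
    If it helps, the partition table that ESXi sees on a disk can be dumped from the host shell with partedUtil (DeviceName is a placeholder for the full t10... path under /vmfs/devices/disks):

        partedUtil getptbl "/vmfs/devices/disks/DeviceName"    # prints the label type, geometry, and each partition's number with start/end sectors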
     
    #12
  13. new2VM

    new2VM New Member

    Joined:
    Oct 13, 2013
    Messages:
    15
    Likes Received:
    0
    Can I delete the partition** - WD2DWCC7K1YPE7LN:1 - using the command below?

    partedUtil delete "/vmfs/devices/disks/DeviceName" PartitionNumber

    If I did, would the disk space increase automatically?

    **I have not created any partition on purpose, but I may have accidentally created one.
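
    As a side note on the "increase automatically" part: even when a disk gains usable space, ZFS only grows the pool if autoexpand is enabled or the device is expanded by hand; a sketch with placeholder pool/disk names:

        zpool set autoexpand=on poolname     # grow vdevs automatically when their underlying devices get bigger
        zpool online -e poolname c0t1d0      # or expand a single device manually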
     
    #13
  14. Spartacus

    Spartacus Active Member

    Joined:
    May 27, 2019
    Messages:
    387
    Likes Received:
    124
    Not without risking the data. The safest thing you might be able to do would be to pull the drive, boot from a GParted USB stick/disc, and use it to extend the partition. That doesn't solve figuring out why it happened in the first place, though.
     
    #14
  15. new2VM

    new2VM New Member

    Joined:
    Oct 13, 2013
    Messages:
    15
    Likes Received:
    0
    Hello Spartacus,

    Many thanks for your support. The link above, in combination with deleting a partition I had accidentally created, helped resolve this. I am now able to see close to 4TB of space in a ZFS pool with 2x 4TB disks in a mirror.

    I had to blow everything away after taking a backup.

    Thanks again to you and also to Gea for all advice and help!!

    Gen10_NAS_disk_details3.png
     
    #15
  16. Spartacus

    Spartacus Active Member

    Joined:
    May 27, 2019
    Messages:
    387
    Likes Received:
    124
    sweet glad ya got it sorted
     
    #16
  17. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    2,366
    Likes Received:
    793
    About the 3.6T:

    ZFS uses T (tebibytes, TiB) and not TB (terabytes) like disk manufacturers do.
    4 TB = around 3.6T
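    To spell out the arithmetic: 4 TB = 4 × 10^12 bytes, and 4 × 10^12 / 2^40 ≈ 3.64 TiB, which is the ~3.6T that ZFS reports per disk (and per two-disk mirror).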

    Tebibyte - Wikipedia
     
    #17
  18. new2VM

    new2VM New Member

    Joined:
    Oct 13, 2013
    Messages:
    15
    Likes Received:
    0
    Thanks for the clarification, Gea!!
     
    #18