ZFS Pool shows much less disk space - disproportionate to physical disk space


new2VM

New Member
Oct 13, 2013
16
0
1
I have built a NAS server using the following:

  • HPE Gen 10 Proliant Microserver,
  • VMware ESXi 6.7 update 3
  • Napp-it ZFS appliance with OmniOS (SunOS napp-it030 5.11 omnios-r151030-1b80ce3d31 i86pc i386 i86pc OmniOS v11 r151030j)
I have two 4TB hard disks in the ZFS pool, configured as a mirror, so I expect to have 3.xx TB of disk space available for use, close to 4TB. Instead I am getting only 1.6TB. I have not made any configuration changes at all - I just deployed the Napp-it appliance as it comes. Can anyone please help clarify what the issue is? The attached image is a screenshot showing the pool and the pool status.
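(For the expected number, my rough math: a 4TB drive holds 4 × 10^12 bytes, which is about 4 × 10^12 / 2^40 ≈ 3.6 TiB, so a two-disk mirror should show roughly 3.6T of usable space before any overhead.)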

Thanks for your help

gls8k.png
 

gea

Well-Known Member
Dec 31, 2010
3,141
1,184
113
DE
This is expected behaviour only with 4TB disks on a controller that supports a maximum of 2TB per disk. In that case only 2TB is seen by the OS.

Are the disks on an HBA in pass-through mode (the suggested method), or are they provided to the VM in some other way by ESXi?
 

Spartacus

Well-Known Member
May 27, 2019
788
328
63
Austin, TX
Pretty sure that model has either an HPE Smart Array E208i-p SR Gen10 controller or a Marvell 88SE9230 PCIe to SATA 6Gb/s controller (assuming nothing was added in).

Either of those is new enough that it shouldn't be plagued by the 2TB limit, and I know those units support at least 4TB drives natively.
 

new2VM

New Member
Oct 13, 2013
16
0
1
This is expected behaviour only with 4TB disks on a controller that supports a maximum of 2TB per disk. In that case only 2TB is seen by the OS.

Are the disks on an HBA in pass-through mode (the suggested method), or are they provided to the VM in some other way by ESXi?

Hi Gea,

I have not configured the disks for HBA pass-through in the BIOS or otherwise. I was not able to configure the disks as pass-through in ESXi - the option was greyed out or pass-through was not supported. Instead, I followed a post from a few years back on configuring the disks as RDMs and assigning them to the guest (Napp-it).

Besides, since Napp-it reports the disk capacity as 4TB (please see the picture above), doesn't that mean the 2TB limit is not the issue here? Am I missing anything?

Thanks for your response.
 

new2VM

New Member
Oct 13, 2013
16
0
1
Pretty sure that model has either an HPE Smart Array E208i-p SR Gen10 controller or a Marvell 88SE9230 PCIe to SATA 6Gb/s controller (assuming nothing was added in).

Either of those is new enough that it shouldn't be plagued by the 2TB limit, and I know those units support at least 4TB drives natively.
Hello Spartacus,

Yes, I think you are right.

HPE does support 4TB disks, per the Gen10 specification. I also checked the HPE Smart Array controller specification - there is no mention of any such limit.

thanks
 

new2VM

New Member
Oct 13, 2013
16
0
1
Hi Gea,

I am also attaching a screenshot from the ESXi client page that shows the guest (Napp-it) configuration. I have circled the storage: Hard disk 1 and Hard disk 2 show 4TB + 4TB, but ZFS shows 1.4 TB.

thanks



Gen10_NAS.png
 

Spartacus

Well-Known Member
May 27, 2019
788
328
63
Austin, TX
I did find this statement while googling around, in one of the comments on this guide: How to passthrough SATA drives directly on VMWare ESXI 6.5 as RDMs

Notes if people have disks larger than 2TB or cannot access the full disk, and are using it as a standalone install and not with vCenter:

  1. Set RdmFilter.HbaIsShared to TRUE in Advanced Configuration
  2. On the virtual machine, add a SATA controller and attach the vmdk to it. You will lose the SMART data though, I believe.
Maybe try that?
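If memory serves, the mapping file in that guide is created from the ESXi shell with vmkfstools, roughly like this (the disk ID and datastore path below are placeholders, not taken from your setup):

# create a virtual-compatibility RDM pointer (-r); use -z instead for physical compatibility
vmkfstools -r /vmfs/devices/disks/t10.ATA_____<your_disk_id> /vmfs/volumes/datastore1/napp-it/disk1-rdm.vmdk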
 

gea

Well-Known Member
Dec 31, 2010
3,141
1,184
113
DE
Raw Device Mapping for SATA disks is not a supported method in ESXi. If it works, fine, but it may not. If you run into problems you must use a method supported by VMware.

1. HBA passthrough
The typical AiO config is a small SATA SSD to put ESXi and the storage VM onto. Then you need a second controller in pass-through mode. The gold standard for such a controller is an HBA with an LSI 2008, 2308 or 3008 chipset running IT or IR firmware. You can get them used for cheap.

2. RDM disk passthrough
This is supported when you use a SAS HBA (the same models as you would use for controller passthrough).

Summary
ZFS shows 4 TB because that is what the disk reports. You can only use 2 TB, as that is what is possible with unsupported SATA RDM in your case.
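To verify what the OS actually gets, you can compare the raw disk size with the pool size from the OmniOS console. These are just the standard illumos tools; the pool name below is a placeholder:

# list the disks and the capacity OmniOS sees for each, then exit
format < /dev/null
# show the pool and per-vdev sizes
zpool list -v tank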
 

new2VM

New Member
Oct 13, 2013
16
0
1
I did find this statement while googling around, in one of the comments on this guide: How to passthrough SATA drives directly on VMWare ESXI 6.5 as RDMs

Maybe try that?

Hello Spartacus,

Thanks for the inputs. I have already done these steps to a large extent. I also checked the Advanced settings, and RdmFilter.HbaIsShared is already set to TRUE. A couple of differences are:
  • I am using *rdm* in the name, although I did not get any error; and
  • The compatibility mode is Physical instead of Virtual.
I will try to change this, but I already have close to a TB of data, so I need to back it up before I make any changes... I do not know whether this change is destructive and whether I would lose the data on the disk.

I will update once I make these two changes.

Gen10_NAS_disk_details.png
 

new2VM

New Member
Oct 13, 2013
16
0
1
Raw Device Mapping for SATA disks is not a supported method in ESXi. If it works, fine, but it may not. If you run into problems you must use a method supported by VMware.

1. HBA passthrough
The typical AiO config is a small SATA SSD to put ESXi and the storage VM onto. Then you need a second controller in pass-through mode. The gold standard for such a controller is an HBA with an LSI 2008, 2308 or 3008 chipset running IT or IR firmware. You can get them used for cheap.

2. RDM disk passthrough
This is supported when you use a SAS HBA (the same models as you would use for controller passthrough).

Summary
ZFS shows 4 TB because that is what the disk reports. You can only use 2 TB, as that is what is possible with unsupported SATA RDM in your case.
Hello Gea,

Thank you for your inputs. I looked at the controller hardware on eBay; they are indeed cheap. If the config changes I am making do not help, I will buy one and try it. Thanks for your help.
 

new2VM

New Member
Oct 13, 2013
16
0
1
Hello Gea, Spartacus,

I just noticed the output of ls -l /vmfs/devices/disks on the host console. The screenshot is below. You can see there are two lines for each of the 4TB hard disks. What does the WD2DWCC7K1YPE7LN:1 line mean?

Gen10_NAS_disk_details2.png
 

Spartacus

Well-Known Member
May 27, 2019
788
328
63
Austin, TX
Pretty sure that's just part of the drive's identity/name that ESXi derives from the model and serial number (it starts with t10.).
Partitions on a given drive show up with :1, :2, :3, etc., so it looks like only about 45% of each 4TB drive is allocated to that partition.
I have no recommendations beyond that, it's outside my knowledge space, sorry!
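In other words, the two entries per disk probably look roughly like this (model portion shortened to a placeholder, not your exact output):

t10.ATA_____<model>_____WD2DWCC7K1YPE7LN      <- the whole disk
t10.ATA_____<model>_____WD2DWCC7K1YPE7LN:1    <- partition 1 on that disk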
 

new2VM

New Member
Oct 13, 2013
16
0
1
Pretty sure that's just part of the drive's identity/name that ESXi derives from the model and serial number (it starts with t10.).
Partitions on a given drive show up with :1, :2, :3, etc., so it looks like only about 45% of each 4TB drive is allocated to that partition.
I have no recommendations beyond that, it's outside my knowledge space, sorry!
Can I delete the partition** WD2DWCC7K1YPE7LN:1 using the command:

partedUtil delete "/vmfs/devices/disks/DeviceName" PartitionNumber

If I did, would the disk space increase automatically?

**I have not created any partition on purpose, but I may have accidentally created one.
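For reference, I understand the current layout can first be checked non-destructively (same DeviceName placeholder as above):

partedUtil getptbl "/vmfs/devices/disks/DeviceName"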
 

Spartacus

Well-Known Member
May 27, 2019
788
328
63
Austin, TX
Not without risking the data. The safest thing you 'might' be able to do would be to pull the drive, boot to a GParted USB/disc, and use it to extend the partition. That doesn't solve figuring out why it happened in the first place, though.
 

new2VM

New Member
Oct 13, 2013
16
0
1
I did find this statement while googling around, in one of the comments on this guide: How to passthrough SATA drives directly on VMWare ESXI 6.5 as RDMs



Maybe try that?
Hello Spartacus,

Many thanks for your support. The above link, in combination with deleting a partition I had accidentally created, resolved this. I can now see close to 4TB of space in a ZFS pool with two 4TB disks in a mirror.

I had to blow everything away after taking a backup.

Thanks again to you, and also to Gea, for all the advice and help!!

Gen10_NAS_disk_details3.png