Many thanks for your support. The above link, in combination with deleting a partition I had accidentally created, helped resolve this. I am now able to see close to 4TB of space in a ZFS pool with two 4TB disks in a mirror.
I had to blow everything away after taking a backup.
Can I delete the partition** WD2DWCC7K1YPE7LN:1 using the command:
partedUtil delete "/vmfs/devices/disks/DeviceName" PartitionNumber
If I do, will the available disk space increase automatically?
**I have not created any partition on purpose, but I may have accidentally created one.
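In case it helps anyone else following along, the sequence I was considering looks roughly like this; it is a sketch, the device name is the one from my listing, and the partition number should be confirmed with getptbl before deleting anything:

```shell
# Sketch -- inspect the partition table before deleting anything.
# Substitute the actual device name from "ls -l /vmfs/devices/disks".
DISK="/vmfs/devices/disks/WD2DWCC7K1YPE7LN"

# Print the partition table: label type, geometry, then one line per partition
partedUtil getptbl "$DISK"

# Delete partition 1 only after confirming it is the accidental one
partedUtil delete "$DISK" 1
```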
Hello Gea, Spartacus,
I just noticed the output of ls -l /vmfs/devices/disks on the host console. The screenshot is below. You can see there are two lines for each of the 4TB hard disks. What does the WD2DWCC7K1YPE7LN:1 line mean?
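For anyone else puzzled by this: as I understand it, the entry without a suffix is the whole disk, and a ":N" suffix is partition N on that disk, so WD2DWCC7K1YPE7LN:1 is partition 1. A quick way to see what that partition actually is (sketch, from the ESXi shell):

```shell
# Sketch: ":1" entries under /vmfs/devices/disks are partitions, not disks.
ls -l /vmfs/devices/disks

# Dump the partition table of the whole-disk device to identify partition 1
partedUtil getptbl "/vmfs/devices/disks/WD2DWCC7K1YPE7LN"
```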
Thanks for the inputs. I have already done these steps to a large extent. I also checked the Advanced settings, and RdmFilter.HbaIsShared is already set to TRUE. A couple of differences are:
I am using *rdm* in the name, although I did not get any error. And
I am also attaching the screenshot from the ESXi Client page that shows the guest (napp-it) configuration. I have circled the storage, Hard disks 1 and 2, which show 4TB + 4TB, but ZFS shows only 1.4TB.
Yes, I think you are right.
HPE does support 4TB disks, as per the HPE Gen 10 specification. I also checked the HPE Smart Array controller specification; there is no mention of any capacity limit.
I have not configured the disks as HBA pass-through in the BIOS or otherwise. I was not able to configure the disks as pass-through in ESXi: the option was grayed out, or pass-through was not supported. Instead, I came across a post from a few years back on configuring disks as RDMs and assigning...
I have built a NAS server using following:
HPE Gen 10 Proliant Microserver,
VMware ESXi 6.7 update 3
Napp-it ZFS appliance with OmniOS (SunOS napp-it030 5.11 omnios-r151030-1b80ce3d31 i86pc i386 i86pc OmniOS v11 r151030j)
I have two 4TB hard disks in the ZFS pool, configured as a mirror, so I expect...
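With a two-way mirror, the pool's usable capacity is that of a single disk, so roughly 4TB. On the OmniOS side the pool creation amounts to something like this; the pool name and device names are illustrative (check format or the napp-it menus for yours):

```shell
# Sketch: two-disk ZFS mirror -- usable space equals one disk (~4TB raw).
# "tank" and the device names are illustrative; find yours with "format".
zpool create tank mirror c2t1d0 c2t2d0

# Verify the reported size; SIZE should be about 3.6T for a 4TB mirror
zpool list tank
```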
I am able to boot now. My server is up and ZFS is running.
- The main issue seems to be the initial configuration of the datastore as VMFS3, which prevented me from adding a 3TB RDM. This threw me into all sorts of experiments, all of them unnecessary, until it led me to setting...
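For context on why VMFS3 was the blocker: VMFS3 limits a single virtual disk or RDM pointer to just under 2TB, while VMFS5/6 allow much larger ones. Checking which filesystem version a datastore uses is one command (sketch; "datastore1" is the default name, substitute yours):

```shell
# Sketch: report the VMFS version of a datastore from the ESXi shell.
# "datastore1" is the default datastore name -- substitute your own.
vmkfstools -Ph /vmfs/volumes/datastore1
# The first line names the filesystem, e.g. "VMFS-6.82 file system ..."
```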
I thought I found the issue... but still struggling... I made some progress though.
I have two disks in my system: one 250GB disk that came with the server, and a second 3TB disk that I added.
Initially, I had set up my datastore on the 250GB disk. When I added the datastore, ESXi by default assigned...
Thanks for all the replies. Since I posted this question yesterday, I have made some headway...
I figured out that passthrough is not supported, so I looked around for help and leaned heavily on this article:
Creating RDMs on SATA drives
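For anyone landing here later, the core of that article's approach is a vmkfstools mapping file. This is a sketch from memory, and both the vml device id and the datastore path are placeholders for the ones in your own setup:

```shell
# Sketch: create an RDM pointer .vmdk for a local SATA disk (ESXi shell).
# Both paths are placeholders -- take the vml id from
# "ls -l /vmfs/devices/disks" and point VMDIR at your VM's folder.
DISK="/vmfs/devices/disks/vml.0100000000XXXXXXXX"
VMDIR="/vmfs/volumes/datastore1/napp-it"

# -z creates a physical-mode (pass-through) RDM; -r would create virtual mode
vmkfstools -z "$DISK" "$VMDIR/napp-it_rdm1.vmdk"
```

The resulting .vmdk is then added to the VM as an existing hard disk from the host client.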
I am planning to use this server I am...