Fetch Capacity failed efi_alloc_and_init failed.


yu130960

Member
Sep 4, 2013
Canada
I upgraded to the newest napp-it AIO ESXi 6.7 OVA and imported my ZFS pools.

I'm getting this weird error message, and I am not sure what it relates to.

Fetch Capacity failed efi_alloc_and_init failed. Fetch Capacity failed efi_alloc_and_init failed.

I am in the process of resilvering a drive, so I am not sure if it is related. Any insight would be appreciated.

[Attachment: Screenshot 2024-01-06 02.11.34.png]
 

gea

Well-Known Member
Dec 31, 2010
DE
I have never seen such errors.
It may be that this is not ZFS related but related to the ESXi VMFS filesystem onto which you put rpool.
What happens if you start a scrub on rpool to check validity?
Which ESXi? Does the error persist after a reboot?

Can you redeploy the template, e.g. to a "test vm"?
Do you get the same message again?
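
For reference, a minimal sketch of the scrub check gea suggests, run from the OmniOS shell of the storage VM (assuming the root pool is named rpool, the OmniOS default):

# start a scrub of the root pool, then watch its progress and any errors
zpool scrub rpool
zpool status -v rpool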
 

yu130960

Member
Sep 4, 2013
Canada
I left the resilver of a 3 TB drive to run overnight; it was at 30% complete when I went to bed, but when I woke up it was only at 15% complete with 7 hours to go. Weird behavior. Can I stop the resilver, reboot and test, or should I let it run for the remaining 7 hours (current estimate)?

The rpool is on a VMFS 6 datastore, not sure if that's it.
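
For reference, a minimal sketch of how the resilver progress and time estimate can be rechecked from the OmniOS shell (the pool name "tank" is a placeholder):

# shows scan progress, speed and the current time estimate
zpool status -v tank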
 

gea

Well-Known Member
Dec 31, 2010
DE
I would wait for the resilver to finish.

Resilver time depends on IOPS. A near-full pool takes longer. You can also check iostat for %wait and %busy conditions. High values without an "offline, too many errors" message may indicate disk problems (e.g. bad sectors with a high retry rate).
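
A minimal sketch of such a check from the OmniOS shell (the %w and %b columns are the wait and busy figures gea refers to; the 5-second interval is arbitrary):

# extended per-device statistics every 5 seconds
iostat -xn 5
# cumulative soft/hard/transport error counters per device
iostat -En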
 

yu130960

Member
Sep 4, 2013
Canada
Through trial and error, I could repeat the error on a fresh napp-it install by pulling faulted drives while the system was running (hot swapping). The system would go crazy and even renamed one of my pools when I exported and reimported it to the same name as another pool (that was scary!). Once I realized that it was no longer hot swappable, I shut down every time I wanted to take out a drive, and it completed the resilver job. I am currently running a scrub to double-check everything is OK.

As set out below, my system was hot swappable prior to the software refresh (I finally upgraded to ESXi 6.7U3 from ESXi 6.0 and deployed the new napp-it OVA). Is there a setting in OmniOS to enable hot-swap mode?

Machine:
1 x Supermicro SuperChassis 846TQ-R1200B / MBD-X8DTH-iF-O / 2 x Intel Xeon L5639 / 48 GB ECC RAM
3 x IBM M1015 Flashed to 9211-8i IT Mode (P20.00.07.00 Firmware) [All ESXi Pass-Through]
12 x TOSHIBA DT01ACA300 in a 2 x 6 disk raidz2 vdev pool
6 x HGST 8TB in a 1 x 6 disk raidz2 vdev pool
3 x Seagate 500 GB in a raidz1 pool
 

gea

Well-Known Member
Dec 31, 2010
DE
Hot-swap capability depends on the disk controllers. LSI SAS HBAs are hot-swap capable; SATA controllers mostly are, when you enable hot-swap capability in the mainboard BIOS. With an IBM M1015 in passthrough mode, hot swap should work.

In OmniOS, SATA hot swap can be enabled via
set sata:sata_auto_online=1

but this is enabled by default.
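
For reference, a minimal sketch of how such a tunable is usually made persistent on OmniOS, by adding it to /etc/system and rebooting (as noted, it should already default to 1):

# /etc/system (takes effect after a reboot)
set sata:sata_auto_online=1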

btw
I am not aware of problems in ESXi 6.7.
Is your IBM M1015 firmware 20.00.07.00? Earlier v20 releases are buggy.

A zpool import reads all disk labels.
If you "just remove" active pool members and reinsert them, their pool settings remain visible in a pool import, with the other pool member disks shown in offline states. Usually you should not "just remove" faulted disks; instead, insert a new one and start a disk > replace, or remove such hot-removed disks prior to a pool import action. But besides the old pools being visible in pool import, there is no other side effect. You can also simply ignore those pools in the unavail state.
 

yu130960

Member
Sep 4, 2013
Canada
It's been a while since I have flashed P20 to the cards, so I will have to check it. Thanks Gea!


Edit: I just checked the firmware on the cards and they were all already running 20.00.07.00, so I'm at a loss as to why hot swapping is making the system go crazy. Everything else has been super stable.
 

yu130960

Member
Sep 4, 2013
Canada
Update: I checked the firmware, and the cards were not the issue, as they were already on the latest 20.00.07.00.
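
For reference, a minimal sketch of how the HBA firmware version can be verified, assuming LSI's sas2flash utility is available inside the OmniOS VM (the version is otherwise visible in the HBA's boot banner):

# list all attached SAS2 HBAs with their firmware and BIOS versions
sas2flash -listall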
 

gea

Well-Known Member
Dec 31, 2010
DE
I have never seen or heard about problems with hotplug on these LSI HBAs, in barebone or passthrough mode, so currently I have no explanation.