after a more in-depth review of what went down.. I still can't put my finger on it.
originally, when esxi was failing to boot through and start up napp-it and the other vms.. I figured that the consumer-class ssd I was using for the native vmfs datastore for napp-it had gone bad.. I was wrong, I think
I took that ssd and, with some tools and trickery, mounted it on an ubuntu 16.04 desktop machine and opened it up.. all the VMs were there.. both napp-it and observium, which I use to host apcupsd and collect metrics.. all seemed fine..
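(the "tools and trickery" was basically the vmfs-tools package, which can mount a VMFS volume read-only over fuse.. rough sketch only, the device name and mount point are placeholders for my setup, and as far as I know it only handles VMFS5 and older)

    sudo apt-get install vmfs-tools
    sudo mkdir -p /mnt/vmfs
    sudo vmfs-fuse /dev/sdb1 /mnt/vmfs
    ls /mnt/vmfs    # the vm folders with their .vmx / .vmdk files show up here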
in fact I scp'd the observium vm back over to the running esxi box.. added it back to inventory and everything was fine... huh...
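for anyone wanting to do the same from the esxi shell, it's roughly this (datastore and folder names are just examples):

    scp -r /mnt/vmfs/observium root@esxi-host:/vmfs/volumes/datastore1/
    vim-cmd solo/registervm /vmfs/volumes/datastore1/observium/observium.vmx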
I am now thinking perhaps that it just ran out of space.. the observium vm was a 20gb thin disk and napp-it is, what, a 40gb thin disk in the OVA... so perhaps the datastore just ran out of free space.. esxi does weird things when a datastore fills.
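if I'd checked the datastore at the time it probably would have been obvious from the esxi shell.. something like this (datastore / vm names are placeholders):

    df -h                                                  # free space per VMFS datastore
    ls -lh /vmfs/volumes/datastore1/observium/*.vmdk       # provisioned size
    du -h /vmfs/volumes/datastore1/observium/*-flat.vmdk   # blocks actually allocated to the thin disk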
I am no esxi ninja and log analysis is really not my thing
as for the RDM passthrough pool issue, where one disk now shows as attached via a partition... I did more research and this looks like a bug in napp-it
the original pool was indeed created on a mac running openzfs. it was a stripe of 2x 8TB reds, attached as disk1 and disk2. it was created with ashift=12 and both devs were ashift=12
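for reference, the original create on the mac would have been something along these lines (pool name and disk numbers are placeholders; OpenZFS takes the ashift as a -o option at create time):

    sudo zpool create -o ashift=12 tank /dev/disk2 /dev/disk3   # stripe of the 2x 8TB reds, 4k sectors forced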
the third disk, the one in question, was added via the napp-it gui.. after the pool was moved off the OS X based server and re-imported into napp-it on esxi with the disks as RDM pass-through.
I passed napp-it the RDM disk3 as a blank drive, just as I had the other 2, and used the napp-it gui to attach it to the pool. zpool history shows that it was attached as a whole disk.. c12d1c0 or something to that effect. also, at the time of the disk add, napp-it added it as an ashift=9 dev.. into a pool that was already ashift=12.. very strange.. I guess as an RDM it didn't see the 512e / 4k-native sector tags properly?
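the mixed ashift is easy to confirm after the fact.. something like this (pool name is a placeholder) shows exactly what command was run and the ashift of each top-level vdev:

    zpool history tank | tail
    zdb -C tank | grep ashift    # in my case one vdev reports ashift: 9, the others ashift: 12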
after rebuilding the server from the crash and now passing napp-it the ICH10 controller, ditching the RDM passthrough mappings, that drive clearly shows up as attached via a partition, and in zpool list etc. no drive serial or other information shows up. in the napp-it smart data, the whole drive's smart info looks like a disk not attached to a zfs pool at all.. and a placeholder of sorts with no smart info is listed as the 3rd device in that pool on the smart page..
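you can see the same thing from the command line on the storage VM.. a whole-disk vdev shows up in zpool status as a plain cXtYdZ device, while this one now shows with a slice/partition suffix, and iostat -En shows the vendor/model/serial the OS itself reports for each disk, which is a quick way to check whether the serial is even visible to the storage VM (pool name is a placeholder):

    zpool status tank
    iostat -En | grep -i serial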
weirdness for sure.. so when napp-it added it to the pool while it was an RDM, it attached it as a whole disk, and all the napp-it disk info, smart screens etc. worked as expected and showed the 3 disks in the pool as whole drives.. including from the command line
after changing to controller pass-through.. that is no longer the case, and one drive appears to be attached as a partition in every napp-it menu and also on the command line.
TLDR.. if you are working with RDM disks and napp-it, proceed with caution..
pool construction and pool portability may get affected.