For instance, one way is to install TrueNAS (or UnRaid) as a VM and assign the HDD RAID array to TrueNAS. But that means no passthrough of the drives, and also TN throwing all sorts of fits about seeing one (logical) drive with no actual serial number.
You've really answered it yourself, and I think you are leaning toward "don't do it that way," but...
Do you guys have any suggestions that would solve my problem with minimal overhead and hassle? If you need additional info do let me know. Many thanks in advance.
I don't know what is or isn't minimal overhead and hassle to you.
And I have no idea what else is installed in your ML350 and I am NOT an expert on HPE gear (especially modern gear) so no idea whether this is even feasible...
I think that if you want to do this and run TrueNAS to present your "bulk storage," whether to ESXi or LAN clients, then you are looking at adding another HBA: an IT mode HBA.
Stepping away from your question for a moment:
When I build an ESXi system (prod, home prod, semi-prod), I use an HBA capable of at least RAID 1 or 10, sized for boot plus a datastore fast and large enough for the critical always-on VMs, with some room for growth (say 400GB to 1TB). If I will deploy a software RAID NAS OS as a guest (like TN), then I'll add an IT mode HBA. Running a NAS VM guest is done all the time and works well so long as you adhere to the rules of your NAS OS.
8 SSDs in RAID 6, and 8 12TB HDDs in RAID 6, both managed via a P816i-a RAID controller.
The ESXi portion you already have in your current configuration.
And back to your request for suggestions.
The big challenge, or what I think you are really asking, is: can I use what I have and get what I want, in a way that is both reliable and data safe? I think the answer is YES or NO, and it all depends on whether you consider the HW RAID controller sufficient to protect your data, and on defining that pesky overhead-and-hassle conundrum.
YES
If you feel the HW RAID controller is "good enough," then build a nice Linux guest VM with any of the various available web GUI management frameworks and go to town with a big honking virtual disk (or multiple virtual disks if you can logically break up your data), and serve it out via NFS. You're probably getting 80% of what you'd get with a purpose-built software distribution, albeit with a bit more effort on your part.
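As a sketch of that YES path, assuming a Debian/Ubuntu guest and a hypothetical big virtual disk already mounted at /srv/bulk (path, subnet, and package name are placeholders for your setup), the NFS side is only a few lines:

```shell
# Install the NFS server (Debian/Ubuntu package name; differs on other distros)
sudo apt install nfs-kernel-server

# Export the bulk disk to the LAN -- 10.0.0.0/24 is a placeholder subnet.
# rw = read-write, sync = safer on power loss, no_subtree_check = standard
# for whole-filesystem exports.
echo '/srv/bulk 10.0.0.0/24(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports

# Re-read /etc/exports and confirm what is being served
sudo exportfs -ra
sudo exportfs -v
```

A web GUI framework on top (Cockpit, Webmin, etc.) is mostly managing that same exports file for you.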
NO
A quick check of the HPE ML350 makes me think you have two LFF 8-bay cages installed. Whether a SAS expander is in play there I do not know. Assuming that your -16i is direct-wired to the backplanes (a logical guess), my suggestion is to make sure your SSDs are all in the same cage and to connect your RAID controller to that one cage with the SSDs.
Add an IT mode HBA, maybe something simple like an LSI 3008-based card (does HPE make one?), and wire it up to the other 8-bay cage with your large-cap spinners.
Build your NAS OS VM. If UNR, you'll have to pass through a USB root hub or USB drive; if TN, build an appropriately sized virtual disk for the boot pool (no point in two virtual disks for a ZFS boot mirror since you're already RAIDing your SSDs). Then pass through the IT mode HBA with the attached spinners and follow your NAS OS best practices for the volume.
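Before passing the card through, it's worth confirming the HBA is actually running IT (initiator-target) firmware and that ESXi sees it. A rough sketch, assuming a Broadcom/LSI 3008-based card and ESXi 7.0U1 or later (the PCI address below is a placeholder; on older ESXi releases you toggle passthrough in the host client UI instead):

```shell
# From the ESXi shell: find the HBA's PCI address
esxcli hardware pci list | grep -B2 -A10 -i lsi

# Mark that device for passthrough (0000:5e:00.0 is a placeholder address)
esxcli hardware pci pcipassthru set -d 0000:5e:00.0 -e true

# Separately, Broadcom's sas3flash utility (bootable EFI/DOS or Linux) reports
# the firmware type -- you want "IT" in the firmware product line, not "IR"
sas3flash -list
```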
Note on growth.
Assuming growth will be large-cap spinners: if you have the slots in your system to add another IT mode HBA, then an -8i is probably sufficient, as you can always add another. If not, then you probably want a -16i (9305-16i, 9400-16i) at the outset, because you'll either add that third LFF 8-bay cage or you already have it.
Serving a guest VM's storage back to ESXi.
The simplest way is to just use the network, making sure you don't create a resource deadlock for any VMs or resources you store on the guest VM's storage. This assumes that the ESXi host's network bandwidth is sufficient for your needs.
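Wiring the NAS guest's NFS export back in as a datastore is then one command from the ESXi shell (hostname, share path, and datastore name below are all placeholders):

```shell
# Mount the NAS guest's NFS export as an ESXi datastore (NFS v3)
esxcli storage nfs add -H truenas.lan -s /mnt/tank/vmstore -v nfs-bulk

# Verify the mount
esxcli storage nfs list
```

The deadlock warning above is the key part: don't put any VM on that datastore that has to be running before the NAS guest itself can boot.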
There is another way, though I consider this advanced for most homelabbers. At a high level it involves a vSwitch with no physical NIC attached, a VMkernel NIC for ESXi, and a virtual NIC on the guest, with everything in the same subnet. This configuration worked under 6.5, 6.7, and 7; I have not recreated it under 8, but I imagine it will work there too. This technique has been talked about here a couple of times - I think I may have even written it up and posted a description once upon a time; I'd just need to find the thread. But we can go there if that seems like the path you want to go down.
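The internal-only setup sketches out roughly like this (switch name, port group, vmk number, and addressing are all placeholders; this mirrors what worked for me under 6.5-7):

```shell
# Create a vSwitch with no physical uplink -- traffic never leaves the host
esxcli network vswitch standard add -v vSwitch-storage
esxcli network vswitch standard portgroup add -p StorageNet -v vSwitch-storage

# Give ESXi itself a VMkernel NIC on that port group, in a private subnet
esxcli network ip interface add -i vmk1 -p StorageNet
esxcli network ip interface ipv4 set -i vmk1 -I 172.16.0.1 -N 255.255.255.0 -t static

# Then attach the NAS guest's second virtual NIC to StorageNet, address it
# e.g. 172.16.0.2/24, and NFS traffic flows over the vSwitch at memory speed
```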