Home · ewwhite/zfs-ha Wiki is the site I was referencing. vSphere and Proxmox both assume shared storage, I believe. AFAIK, there is no way to do zero-downtime HA storage without some kind of shared storage like this. The ewwhite design uses a dual-head HBA/JBOD setup so that host #2 can fence host #1, forcibly import the pool, and export the NFS share(s). The only Solarish HA storage I've seen is a commercial (and very $$$) product, and a very outdated (and likely no longer available) setup using heartbeat.
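Roughly, the takeover on host #2 works like the sketch below. This is just the idea, not the zfs-ha project's actual scripts (Pacemaker/corosync and a real STONITH agent drive it there), and the pool name is a made-up placeholder:

```python
#!/usr/bin/env python3
# Sketch of the dual-head takeover on host #2: fence the peer first,
# then force-import the pool and re-publish the NFS shares.
# NOT the zfs-ha project's code; names below are placeholders.
import subprocess

POOL = "tank"  # hypothetical pool living on the dual-ported JBOD

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def fence_peer():
    # Placeholder: a real cluster calls a STONITH/fence agent here
    # (IPMI, switched PDU, etc.) and must not return until host #1
    # is confirmed powered off.
    raise NotImplementedError("fence host #1 before touching the pool")

def take_over():
    fence_peer()
    # -f is only safe because the peer was just fenced and can no
    # longer write to the shared disks.
    run(["zpool", "import", "-f", POOL])
    # Re-share every dataset that has sharenfs set.
    run(["zfs", "share", "-a"])

if __name__ == "__main__":
    take_over()
```

The dual-path disks are what make that forced import possible from either head in the first place.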
Yeah totes, that's the kind of thing Gea recommended for OmniOS in a PDF doc he has on the napp-it.com site. <shrugs> Even the fancy vSphere "regular" HA quotes about a minute of downtime or more for the nodes to replicate the VM in its last known state and spin it up on the surviving node. Somehow the dual-path HDDs never really interested me that much, probably because a cursory search for them on eBay showed how expensive they are.
There's "Fault Tolerant" which runs two copies of the same thing simultaneously, I think that's kind of like CARP or HAST. But none of that deals with shared storage. (wait, nvm HAST IS shared storage, my bad. But who wants to deal with HAST).
So in vSphere, it's stuff like vSAN, StarWind vSAN, etc. Or you can do the old OCFS2, GFS, AFS, LizardFS, Gluster, DRBD, blah, buzzword, acronym. If I really wanted to be wedded to ZFS, I saw a how-to on putting ZFS on top of DRBD, but I did some trials with EXT4 on DRBD a while back and found it irritating to set up, immediately had split-brain issues, and lost interest.
I think to keep things simple and power-conscious I'm just going to run two nodes and do daily ZFS replication. If it goes down, it goes down; I'm not going to worry about it until I put together the Kubernetes cluster I'm working on specifically for experimenting with cluster stuff, so the "real" equipment I actually use is left out of it entirely.
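By "daily ZFS replication" I mean something like the sketch below: snapshot, then an incremental send to the second node. The hostname, dataset names, and snapshot scheme are all placeholders, and in practice I'd probably just use syncoid/sanoid or znapzend instead of rolling my own:

```python
#!/usr/bin/env python3
# Minimal daily ZFS replication sketch (this node -> the second node over SSH).
# Dataset names, hostname, and snapshot scheme are placeholders; a real setup
# would likely use syncoid/sanoid or znapzend and handle pruning/retries.
import datetime
import subprocess

DATASET = "tank/data"           # hypothetical source dataset
REMOTE = "root@node2"           # hypothetical receiving node
REMOTE_DATASET = "backup/data"  # hypothetical destination dataset

def latest_snapshot():
    """Most recent snapshot of DATASET (sorted oldest to newest), or None."""
    out = subprocess.run(
        ["zfs", "list", "-H", "-t", "snapshot", "-o", "name",
         "-s", "creation", "-d", "1", DATASET],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    return out[-1] if out else None

def replicate():
    prev = latest_snapshot()
    snap = f"{DATASET}@daily-{datetime.date.today().isoformat()}"
    subprocess.run(["zfs", "snapshot", snap], check=True)

    # Incremental send if an earlier snapshot exists, otherwise a full send.
    send = ["zfs", "send", "-i", prev, snap] if prev else ["zfs", "send", snap]
    recv = ["ssh", REMOTE, "zfs", "recv", "-F", REMOTE_DATASET]

    sender = subprocess.Popen(send, stdout=subprocess.PIPE)
    subprocess.run(recv, stdin=sender.stdout, check=True)
    sender.stdout.close()
    if sender.wait() != 0:
        raise RuntimeError("zfs send failed")

if __name__ == "__main__":
    replicate()
```

Stick that in a daily cron job on the primary and the second node stays at most a day behind, which is good enough for what I care about.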
That's a fun project. I bought a stack of Dell 7050 Micro motherboards for like $50 last year, and have slowly been gathering CPUs and memory for each one, duct-taping and stapling them together at a snail's pace (twine, Elmer's glue, a little glitter-covered macaroni threaded along with colored marshmallows).
In case you're interested in building a mini-cluster, the Dell Micros happen to fit perfectly sideways inside a 4U rack using just the bottom of the case to secure the motherboard (there's no CPU retention otherwise). I got a 3U grate for $20 from Sweetwater to attach three 120mm fans to, which should deal with their awful stock cooling nicely; for the time being I was just going to stick the case bottoms + mobos in there with some Velcro top and bottom.
Some of the models have vPro, so you can do AMT-based IPMI-style management with them, and if you're REALLY lucky you can find the 3-retention-screw, 12V-fan-header variants that can accept REAL non-T processors (those are bad af). Hint: the typical 5V header on the 4-screw heatsink model is exactly what I'm working on circumventing with my 3x 120mm fan plate, if you get my drift.
Interesting, I hadn't seen your message, but I randomly came across this post on ServerFault: "Just installed LSI 9211; no drives showing up to Linux", where ewwhite had helped the poster with their SAS controller.
Re: fencing, I can finally get this crazy UPS thing running now that an electrician has wired me a new outlet, though I haven't even hooked it up yet. It's an APC SMX3000RMLV2U I got for $150 on OfferUp; the dude was practically begging me to take the enormous thing tf out of his place. It has a network card and everything. I've had it around for a year and a half without using it because it requires a 30A socket (the kind of thing for plugging in a dryer or an electric stove).
I should probably try it out. I was reading about it, and apparently it supports hardware fencing, which I wasn't even aware was a thing until I was poking through the Proxmox docs and saw that "real" HA required hardware fencing with a UPS (that was written about 6 years ago, so not sure if it's still true)...
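From what I can tell, the "hardware fencing" requirement just means the surviving node needs a way to definitively power off its peer before taking anything over. Conceptually a fence agent boils down to something like the sketch below; the PowerSwitch class is a pure placeholder, not the actual APC network-card interface (real setups use existing agents like fence_apc_snmp or fence_ipmilan):

```python
#!/usr/bin/env python3
# Conceptual sketch of what power fencing has to guarantee before takeover.
# The PowerSwitch class is a placeholder, not the real APC network-card
# interface; real clusters use agents like fence_apc_snmp or fence_ipmilan.
import time

class PowerSwitch:
    """Stand-in for whatever actually controls the peer node's power feed."""
    def turn_off(self, node: str) -> None: ...
    def is_off(self, node: str) -> bool: ...

def fence(node: str, switch: PowerSwitch, timeout: float = 30.0) -> bool:
    """Return True only once the peer is verifiably powered off.

    The cluster must not import the pool or start services until this
    succeeds, otherwise both nodes could write to the shared storage.
    """
    switch.turn_off(node)
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if switch.is_off(node):
            return True
        time.sleep(1)
    return False
```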
Damn, I am rambling; I should probably go to bed, but I'm trying to fumble my way through my first iSER setup in LIO/targetcli to connect to vSphere/vCenter for datastores... I've got a macOS VM I was running locally on an NVMe drive, took an image of it in Macrium Reflect, and restored it onto an iSCSI-shared zvol; now I'm trying to get moar powar and turn it up to 11. Fun times.
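For my own reference, the LIO side I'm fumbling through boils down to roughly the sketch below. Every name (zvol path, IQNs, portal address) is a placeholder, and I believe the iSER switch is the portal's enable_iser setting in targetcli-fb, though I haven't verified that end to end yet:

```python
#!/usr/bin/env python3
# Rough sketch: expose a zvol over iSCSI/iSER with LIO via targetcli.
# All names are placeholders; enable_iser needs RDMA-capable NICs on both ends.
import subprocess

ZVOL = "/dev/zvol/tank/macvm"                             # hypothetical zvol
TARGET_IQN = "iqn.2003-01.org.linux-iscsi.storage:macvm"  # hypothetical target IQN
ESXI_IQN = "iqn.1998-01.com.vmware:esxi01-1234abcd"       # hypothetical ESXi initiator IQN
PORTAL_IP, PORTAL_PORT = "192.168.10.10", "3260"          # hypothetical RDMA-capable interface

def tcli(*args):
    subprocess.run(["targetcli", *args], check=True)

# Block backstore on the zvol, then the target, LUN, initiator ACL, and portal.
tcli("/backstores/block", "create", "name=macvm", f"dev={ZVOL}")
tcli("/iscsi", "create", TARGET_IQN)
tcli(f"/iscsi/{TARGET_IQN}/tpg1/luns", "create", "/backstores/block/macvm")
tcli(f"/iscsi/{TARGET_IQN}/tpg1/acls", "create", ESXI_IQN)
# Recent targetcli-fb auto-creates a 0.0.0.0:3260 portal; rebind it to the
# RDMA interface and flip it to iSER.
tcli(f"/iscsi/{TARGET_IQN}/tpg1/portals", "delete", "0.0.0.0", "3260")
tcli(f"/iscsi/{TARGET_IQN}/tpg1/portals", "create", PORTAL_IP, PORTAL_PORT)
tcli(f"/iscsi/{TARGET_IQN}/tpg1/portals/{PORTAL_IP}:{PORTAL_PORT}", "enable_iser", "true")
tcli("saveconfig")
```

Then on the ESXi side it should just show up as another iSCSI target to add to the software adapter and format as a datastore, assuming the RDMA part cooperates.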