My two cents (well, this wall of text is probably more like 2 euros):
I think how involved the configuration needs to be really depends on your use case: the more advanced or complex your needs, the more involved the configuration ends up being.
That said, if your usage isn't extreme, any 'bad' configuration will still work fine. I've run redundant storage on consumer SSDs for years, and out of ~40 4TB Samsung and WD SSDs, none have failed in ~6 years of use (I keep having to recalculate that number, as time seems to move faster every year). These are small office locations that mostly just don't want to wait for file listings, and search has to be fast. They might write a few TB per day per location, and file-level sync causes additional writes all around. This is on both MDADM+LVM and ZFS.
This doesn't mean it works for everyone (we're talking about SMB setups for SMBs, with maybe 4 VMs per storage pool on top of the NAS usage, used by maybe 25 users concurrently), but modern hardware usually outlasts the average user's needs pretty comfortably.
If you're doing things like hosting many database servers you'd probably get into some trouble, but since a ton of data is just read over and over (especially operating system and application binaries), the load isn't as heavy as you'd think. For NAS use we generally see about 17TB written vs. 40TB read per drive per year on that SMB setup. And since writes are spread across drives in almost all scenarios, it tends to be a little less wearing.
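If you put that write rate against a drive's rated endurance, the math is reassuring. A back-of-the-envelope sketch (the TBW rating here is an assumption; substitute your drive's spec sheet value):

```python
# Years until a drive hits its rated endurance at the write rate above.
TBW_RATING_TB = 2400       # assumption: e.g. a 4TB consumer SSD rated 2400 TBW
WRITES_TB_PER_YEAR = 17    # the per-drive NAS figure from above

print(f"~{TBW_RATING_TB / WRITES_TB_PER_YEAR:.0f} years to rated wear-out")  # ~141 years
```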
Back to the topic at hand: for VM storage, ZFS zvols are pretty neat, but since you'd be storing big opaque chunks of data (from ZFS's perspective) with filesystems inside them (from the VM's perspective), making do with file-backed disks on plain MD RAID10 and LVM isn't all that crazy. You'd lose the integrity features, but the in-VM filesystem really should be taking care of that already. If you think about it: anything important should probably be on ZFS, but not ZFS-in-ZFS (i.e. the host doing ZFS and the guest doing ZFS as well), and anything else shouldn't be on ZFS at all, since it's a waste of space and compute. I do run ext4 on zvols, compression on but dedup off, and it doesn't kill the disks as fast as you'd think.
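To make that concrete, here's a minimal sketch of how such a zvol could be provisioned; the pool/volume names, the size, and picking lz4 specifically are illustrative assumptions, not my exact setup:

```python
# Sketch: create a zvol for a VM disk with compression on and dedup off.
# Pool/volume names and size are hypothetical examples.
import subprocess

def create_vm_zvol(pool: str, name: str, size: str = "32G") -> None:
    """Create a fixed-size ZFS zvol; the guest formats it with ext4."""
    subprocess.run(
        ["zfs", "create",
         "-V", size,                # fixed-size zvol
         "-o", "compression=lz4",   # compression on
         "-o", "dedup=off",         # dedup off: the RAM cost rarely pays off
         f"{pool}/{name}"],
        check=True,
    )

create_vm_zvol("tank", "vm-101-disk-0")
```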
I have a template for single-node appliance compute (using Proxmox and ZFS) with almost no local data persistence (that's all done on a NAS elsewhere); only the OS, applications, and hot/cached data live locally. It's a 1TB pool (just a simple mirror) with 2 devices (Samsung SM863a, a basic enterprise SATA SSD):
power on hours: 47217
wear-out: 3%
LBA written: 282097715172
LBA read: 25842077196
NAND writes: 857246527488
Those nodes host 5 VMs, all Linux, all ext4, 3 of them with a 4GB swap disk. There's some constant-activity stuff: a K3s admin node & orchestrator, a network VM, a hardware management thing (mostly RS485 management and data transceiving), a local data pre-processing Spark VM, and a general K3s worker node.
If I calculated it correctly, that means (282097715172 * 512) / (1024^4) ≈ 131 TiB written, which isn't a lot. But it's only ~12 TiB read! So not-optimal configurations like those really do make a tenfold difference in 'eating disks'. On the other hand, it would need to be an order of magnitude higher before I'd spend time and money doing anything about it. The hardware will age out before it's worn out... as long as you don't host a NAS on ext4-on-zvol.
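For anyone redoing that math, the conversion is just the SMART LBA counters (512-byte sectors on these drives) scaled to TiB:

```python
# The arithmetic above, spelled out: SMART LBA counters here are
# 512-byte sectors.
lba_written = 282_097_715_172
lba_read = 25_842_077_196

def tib(lbas: int) -> float:
    return lbas * 512 / 1024**4

print(f"written: {tib(lba_written):.0f} TiB, read: {tib(lba_read):.0f} TiB")
# written: 131 TiB, read: 12 TiB
```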
Personally, on single-node setups I still use ZFS and just accept that you're not going to get all the endurance and performance an NVMe drive has to offer. I'd happily throw 50% of the capacity, performance, and endurance in the trash if it means I can detect and fix data integrity issues with ease, especially considering the extreme performance and capacity we get with current-gen hardware. Not too long ago, DDR2 FB-DIMMs and RAID10 HDDs were considered 'good enough', even for 2 or 3 VMs on the same host.
The only thing I vary is the disk-per-vdev count: for NVMe I tend to make wider vdevs (up to 12 disks) than for HDDs (up to 8), because the increased bandwidth makes resilvers less scary. For HDDs (usually 6-disk raidz2 per vdev), having more, narrower vdevs gives you more performance, which still matters for non-SSD storage.
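The capacity side of that trade-off is easy to put into numbers. A quick sketch of the usable fraction for different raidz2 widths (mirrors would be a flat 50%); this ignores padding and metadata overhead:

```python
# Usable fraction of raw capacity for raidz2 vdevs (2 parity disks per vdev).
def raidz2_usable_fraction(width: int) -> float:
    return (width - 2) / width

for width in (6, 8, 12):
    print(f"{width}-wide raidz2: {raidz2_usable_fraction(width):.0%} usable")
# 6-wide: 67%, 8-wide: 75%, 12-wide: 83%
```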
For dual-node setups I used to run a cluster FS on top of zvols, and for anything bigger, Ceph is pretty much the next step. (I don't do dual-node anymore, though: either the workload is important enough for real redundancy, or it isn't, and then you just get one active node at a time, with complete-disk VM migration if required instead of shared storage.)
Do suboptimal configurations sometimes make me sad because we leave performance (and drive life) on the table? Yes. But when it's about making money, it matters a whole lot less than I'd like.