@DavidWJohnston I was looking for redundancy/integrity/availability for the storage; if it's hot storage vs. cold storage there would be something to be said for either solution.
As for what would make sense for the entire setup, I'd say the classic single-box-ZFS-hypervisor would be the most useful. While virtualisation at the office might certainly be an interesting factor, I'm not smelling any of the classic signs of 'need it to level up' ;-) I'd say that knowing the concepts of virtualisation, block storage and networking gets you 75% of the way there, and the last 25% will be specific to any on-prem, cloud or managed colocation setup. Around here, on-prem is not very popular; generally we're seeing the dismissal of one VMware/Hyper-V engineer per quarter, with the target of keeping 2 around to run an archive for 2 to 5 years (cold VM storage for stuff that was migrated away). They are essentially going the same way as the DBA (but not all DBAs are the same, I'm talking about the RDBMS herders, not the application managers ;-) ).
Spending money on disks, CPU and memory would be my priority here, depending on how the data is handled (i.e. whether there is backup, since block-level redundancy, regardless of the implementation, is not backup). I'd go for either one (soft)RAID1 for the hypervisor and passing all disks through to a ZFS VM, or for Proxmox-on-ZFS with no (soft)RAID at all, only raidz2/raidz3 depending on the vdev size.
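To make the second option concrete, here's a rough sketch of what a Proxmox-on-ZFS layout could look like, with no (soft)RAID underneath, just raidz2 on a single vdev. Disk paths, pool name and the backup host are placeholders, not a recommendation for any specific hardware:

```shell
# Hypothetical 6-disk raidz2 vdev: survives any two disk failures.
# ashift=12 assumes 4K-sector disks; check your drives first.
zpool create -o ashift=12 tank raidz2 \
  /dev/disk/by-id/ata-disk0 /dev/disk/by-id/ata-disk1 \
  /dev/disk/by-id/ata-disk2 /dev/disk/by-id/ata-disk3 \
  /dev/disk/by-id/ata-disk4 /dev/disk/by-id/ata-disk5

# Wider vdevs (say 8+ disks) would be where raidz3's third
# parity disk starts to pay off instead:
# zpool create -o ashift=12 tank raidz3 <8+ disks>

# And since redundancy is not backup: snapshots replicated to
# another box are still needed on top of this.
zfs snapshot tank@nightly
zfs send tank@nightly | ssh backuphost zfs recv backup/tank
```

The by-id paths matter in practice: /dev/sdX names can shuffle between boots, and ZFS identifying disks by their stable IDs is part of what makes ditching the RAID controller painless.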
As for hardware RAID in general, I've found it to be practically pointless, save for two scenarios:
1. You are using Windows and no hypervisor, on a single node, with no external storage (DAS, NAS, SAN) for some reason
2. You are using some sort of certified/packaged/externally required stack that doesn't allow you to nuke the raid controllers
In pretty much all other scenarios you'll either have a SAN, or you'll use software storage management. The SAN itself might of course internally do RAID-y things, be it controller redundancy with LUN virtualisation (instead of full multipath), FCoE emulation, or some other shady stuff where some work is being done to abstract the disks away. I'm generally treating it as a black box (and since most of them are delivered and contracted that way... it's not really a choice either) that uses internal magic to do the block replication/redundancy/availability for me. Technically, we could probably argue that a COTS SAN has at least some classic RAID component to it.