I think it's more than just OK for lab or home use, especially since the OP mentioned in another post that there is a business element to this. But let's assume, given the forum we're on, that whatever the hardware or budget, the idea is to do the best with it (crazy-complexity/crazy-failure home labbing excepted). The minimal, but non-zero, downside of adding an attack vector against the hypervisor is probably considerably smaller than having no failover whatsoever. And failover doesn't just cover hardware failures or LBFO, but potentially security vulnerabilities too.
If this is a single box running bare metal, then any kernel reinstall, module change, or significant library update requiring a reboot means a choice between accepting the risk (data loss/DoS/penetration) and breaking a transfer (including archive jobs), a tunnel, or remote sessions. Both are service failures from the users' point of view. Now that could be avoided with "in-service" virtualization (VRRP, CARP) and rolling patches, and since the service we care about is moving packets, I'd rather have a packet-routing-aware failover method than general VM clustering, but I'll take both.
But a virtual gateway IP, which also means clients aren't left waiting to fail their traffic over to the next router IP in their DHCP lease (if they ever do), on a single box means VMs.
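To make that concrete, here's a minimal sketch of a VRRP virtual gateway in VyOS (this is the classic 1.1-era CLI; exact syntax varies by version, and the interface name, VRID, and addresses are assumptions, not anything from the OP's setup):

```
# Router VM A - mirror on VM B with a lower priority (e.g. 100)
set interfaces ethernet eth1 vrrp vrrp-group 10 virtual-address 192.168.1.1/24
set interfaces ethernet eth1 vrrp vrrp-group 10 priority 200
set interfaces ethernet eth1 vrrp vrrp-group 10 preempt true
commit
```

Clients only ever see 192.168.1.1 as their gateway; whichever VM currently holds the VRRP master role answers for it, so a reboot of one router VM never requires touching DHCP or waiting on client-side failover.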
(And if there are two in-band egress gateway IPs, why aren't they presented as the same inside IP anyway? Two IPs likely means two boxes, so what to do with a single box is moot.)
Also, if they are doing VRRP load balancing: the reason I went VyOS over pfSense, aside from experience with EdgeOS and a preference for a more "serious" heritage and smaller footprint, was pfSense/pf's single-threadedness, though I understand that has changed this year. Prior to that, the only way to leverage a multicore CPU (which is everything these days) for the primary and often latency-sensitive job of routing would have been load-balancing across single-vCPU VMs. But I guess that's historical now, assuming the "new" pf is well tested.
Of course, better still is two boxes, and then there really is a debate between "two bare-metal installs doing VRRP" and "two special-purpose type-1 hypervisors with VRRP-enabled routers." Note that I'm not talking about failing over to internal VM host clusters, or running other services on them, though if it's the only way to also run an IDS, I think that's a positive trade-off. All of that is certainly doable; Amazon doesn't have special racks for the virtual routers providing multitenancy for cloud compute (that would sort of defeat the purpose), let alone Google, which runs user sessions in containers, not even VMs. But that is "factory"-level economy of scale, with teams upon teams of people managing an unending perimeter.
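For either two-box flavor, the rolling-patch dance is just VRRP priority plus preempt: drain the master, patch it, let it take back over. A sketch in the same version-dependent VyOS CLI, with all numbers assumed rather than taken from anyone's real config:

```
# On the current master, force a failover by dropping its priority
# below the peer's (peer is assumed to run priority 100):
set interfaces ethernet eth1 vrrp vrrp-group 10 priority 50
commit
# ...patch and reboot this box; traffic stays on the peer the whole time...
# Afterwards, restore the higher priority so preempt moves the
# virtual IP back to the freshly patched box:
set interfaces ethernet eth1 vrrp vrrp-group 10 priority 200
commit
```

The point is that the reboot window never intersects the service window, which is exactly the gap the single bare-metal install can't close.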
And that ties in nicely with my last thought. Consider all the baked-in ASIC appliances we trust to do L2 VLAN segmentation or L3 routing and firewalling, built to the lowest cost with fixed, SMD-soldered-down RAM and storage, on a custom platform that only one team at one specific company can support. Between the virtual switch code in ESXi, KVM, Xen, or Hyper-V, and the maintenance-phase code (with the equivalent staff reduction on the dev team) for a years-old Broadcom embedded device like my 10G Dells, which do you expect to cover the next Heartbleed quicker, or be less likely to overflow something, lose 802.1Q tags, and start putting BYOD wifi (or server) traffic through untagged on a trunk port?

Everyone has tablets, whether the company bought them or not, and especially if a device is personal property, good luck getting OS-level restriction control over it. Even if you aren't Amazon, the perimeter, the malware, and the fuzzers are already inside. Which patch release schedule do you want to be on: the x64 vswitch, or the old custom silicon? I'm only talking about the security element here (patch frequency, vendor interest, and likelihood of a fix), not suggesting replacing switches.
While double-checking things and grabbing a handful of links, I came (back) across CloudRouter® | Router Distribution for the Cloud. I totally have to play with it, especially as it's now out of beta, which it wasn't when I first heard about it. More so now that I have a bunch of Docker VMs. Not really the type of software/use case we're talking about here, but interesting.