Home network: OPNsense physical, virtual or clustered?


crembz

Member
May 21, 2023
35
0
6
Trying to decide which direction to take with my home network and lab.

Currently I have a very basic home network with a bunch of Omada gear: PoE switch, 10GbE switch, aggregation switch, router and APs.

I need a lab environment consisting of ESXi, Nutanix, OpenStack, K8s and Hyper-V. I'll need network services such as DNS and IPAM. I'll also be running a self-service controller exposed to the WAN.

I also need a bunch of home services ... NAS, Jellyfin, media stack, Omada controller.

I'm looking at using OPNsense to replace the Omada router. What I can't decide on is how to deploy it. I was planning on a Lenovo M920q with a 10GbE NIC giving 10Gb access to the NAS, main workstation and home services. Originally I thought I'd keep it physical and virtualise everything else on a single cluster.

Then I started thinking: why not virtualise all the network services onto one Proxmox server (the M920q), so I can shut down the main cluster when not in use?

Then I thought: why not just throw everything, including OPNsense, into a Proxmox cluster and have Proxmox manage HA for OPNsense?

So the question really is: if virtualising, is it worth having a standalone host running all your network services, or is clustering everything into a single cluster a better way forward?
 

ketiljo

New Member
Sep 7, 2022
16
7
3
If you share the internet connection with someone else, e.g. family, it's risky to have the router virtualized in a cluster. I have pfSense running bare metal on a fanless mini PC for that reason.
 
  • Like
Reactions: zunder1990

crembz

Member
May 21, 2023
35
0
6
If you share the internet connection with someone else, e.g. family, it's risky to have the router virtualized in a cluster. I have pfSense running bare metal on a fanless mini PC for that reason.
Thanks for that. What risk is present, assuming you're configuring VLANs appropriately? I see most people recommend PCI passthrough of the NICs, which would work for a single virtualised host but breaks migrations between hosts.
 

mach3.2

Active Member
Feb 7, 2022
133
87
28
I see most people recommend PCI passthrough of the NICs, which would work for a single virtualised host but breaks migrations between hosts.
If you're going to do PCIe passthrough, might as well run it bare metal on a dedicated box because you're not reaping the full benefits of virtualisation in a cluster as you pointed out.

I'd be comfortable running it virtualised in a cluster with paravirtual NICs; your family shouldn't notice downtime unless your entire cluster blows up.
How likely that is, only you can answer.
 

ketiljo

New Member
Sep 7, 2022
16
7
3
Thanks for that. What risk is present, assuming you're configuring VLANs appropriately? I see most people recommend PCI passthrough of the NICs, which would work for a single virtualised host but breaks migrations between hosts.
Just in case the cluster goes down or you take it down for some reason. The problem with having the router virtualized is that the internet connection depends on a lot of things working rather than a single box. Imagine you're away with friends for a weekend, something fails, and your wife calls to get Netflix up again because "something" in the basement ain't working.
 
  • Like
Reactions: zunder1990

DavidWJohnston

Active Member
Sep 30, 2020
242
191
43
If other people rely on the network, what I would recommend is: keep your ISP's modem/router default out-of-box setup as your home network. This way family members can call your ISP's tech support, reset the modem, and get back up and running on the ISP's network and WiFi without you.

Then build up a lab network that daisy-chains off your ISP's out-of-box LAN/WiFi with a static route (if possible) or port forwarding.

With this setup, you can virtualize/cluster anything you want, and if it all breaks, at least internet browsing and printing will still work for your household.
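For example (made-up addresses; adjust to whatever your ISP box hands out):

```
ISP modem/router LAN   192.168.1.0/24, gateway 192.168.1.1   <- family stays here
Lab router WAN         192.168.1.2 (static/reserved lease on the ISP box)
Lab networks           10.10.0.0/16 behind the lab router

If the ISP router supports static routes, add roughly:
    destination 10.10.0.0/16  via 192.168.1.2
so home devices can reach lab hosts without double NAT.
If it only does port forwarding, forward just what must be reachable, e.g.:
    WAN TCP 443 -> 192.168.1.2:443   (the controller exposed to the WAN)
```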

I live alone most of the time, and I virtualize everything except OOB management. I run pfSense in a VM, and I can migrate the VM between hosts to do hardware changes or shut down some hosts to save power without any downtime. This has been fantastic.

Even though I live alone, my ISP router/modem still serves my home automation, security, and Ring cameras. It always works!
 
  • Like
Reactions: crembz

crembz

Member
May 21, 2023
35
0
6
Some really good points, thank you.

I'm really struggling to decide. I was leaning toward bare metal to KISS.

Then I thought: why not have a standalone Proxmox box serving just the pfSense VM and other network services (Omada controller and AdGuard)? Not a lot of added complexity, but it would consolidate the tools needed for network management into one place.

I'd run everything else on a separate Proxmox cluster.

Seems workable, any thoughts?
 

mach3.2

Active Member
Feb 7, 2022
133
87
28
Some really good points, thank you.

I'm really struggling to decide. I was leaning toward bare metal to KISS.

Then I thought: why not have a standalone Proxmox box serving just the pfSense VM and other network services (Omada controller and AdGuard)? Not a lot of added complexity, but it would consolidate the tools needed for network management into one place.

I'd run everything else on a separate Proxmox cluster.

Seems workable, any thoughts?
I'm in the virtualisation camp so I'd say go for it, but make sure you set it up so you can still manage the proxmox box without a functioning router and DHCP server.
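Something like this on the Proxmox side keeps the web UI reachable with everything else down (example addresses):

```
# /etc/network/interfaces -- static management IP, no DHCP dependency
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24   # reachable even with the router VM off
    gateway 192.168.1.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
```

Then give your workstation a static IP in the same subnet so you can still browse to https://192.168.1.10:8006 when DHCP and DNS are offline.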
 

crembz

Member
May 21, 2023
35
0
6
I'm in the virtualisation camp so I'd say go for it, but make sure you set it up so you can still manage the proxmox box without a functioning router and DHCP server.
Sure, I would usually just have my main workstation with one interface on the same subnet as the Proxmox management interface. So basically my native VLAN 0 holds my physical and virtual network device management, including pfSense lan0, PVE management and the main workstation. All other systems and clusters are on other VLANs.
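On Proxmox that layout maps neatly onto a single VLAN-aware bridge; roughly like this (interface names and VLAN IDs are examples):

```
# /etc/network/interfaces
auto vmbr0
iface vmbr0 inet static
    address 192.168.0.10/24   # untagged/native management subnet
    gateway 192.168.0.1       # pfSense LAN IP
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094        # tagged VLANs for the other systems/clusters
```

VMs on other VLANs then just get a tag on their virtual NIC, e.g. `net0: virtio,bridge=vmbr0,tag=20`.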
 

mattventura

Active Member
Nov 9, 2022
448
217
43
I use a virtualized router myself, with some interfaces passed through directly and others partially passed through with SR-IOV. It's certainly viable, and in some ways it can improve uptime: for example, rebooting the router takes 10 seconds because it's a VM and doesn't have to go through a real hardware boot process.

That being said, there are a lot of issues with trying to cluster your primary router:
1. If the cluster encounters an issue, and you need a working internet connection to fix said issue, good luck.
2. You need things like your NAT table and DHCP leases shared across the cluster.
3. Many ISPs/modems really don't want to see multiple MAC addresses behind the modem - they expect one single router.

Even if you don't go the cluster route: even if you have snapshots of the router and can thus always get back to a working state, will you be able to get into your VM host if the router is hosed? You have to carefully ensure you can still get into the host without depending on the router for routing, DHCP, DNS, etc.

#2 and #3 pose a challenge even for physical hardware using VRRP or similar. Consumer ISPs just aren't set up for a use case of multiple routers acting redundantly. You potentially end up having to set up a third router to handle the connection to the modem, so you've still got a single point of failure, but with a lot of extra complexity.
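For what it's worth, the failover mechanism itself is the easy part. A minimal keepalived sketch of two boxes sharing a LAN gateway IP (hypothetical IPs and interfaces; pfSense/OPNsense would use CARP instead, same idea):

```
# /etc/keepalived/keepalived.conf on the primary
# (state BACKUP and a lower priority on the second box)
vrrp_instance LAN_GW {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 150
    advert_int 1
    virtual_ipaddress {
        192.168.1.1/24    # shared gateway IP clients point at
    }
}
```

Note this replicates neither NAT state nor DHCP leases (#2), and both boxes still need a path to the modem (#3).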
 

DavidWJohnston

Active Member
Sep 30, 2020
242
191
43
Lots of good points. A little more detail about what's working for me:

I run a 100G L3 switch which does inter-VLAN routing. My pfSense does internet routing and lives inside ESXi, has only one trunked NIC, and can be migrated to any host, including low-power mini PC hosts.

To save power, I can turn off the 100G switch and all but one low-power host; a routing-only second pfSense VM then starts up automatically (via script), set to the same IPs as the 100G switch, and takes over routing. A watchdog script handles everything necessary for the transition in both directions. It's not 100% seamless; there is an outage of about 30 seconds during the switch-over.
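A stripped-down sketch of what that kind of watchdog logic can look like (placeholder IP and VM id, and the state handling here is naive; a real script tracks more than this):

```shell
#!/bin/sh
# Watchdog sketch: ping the primary router; if it disappears, power on a
# routing-only backup VM on an ESXi host with vim-cmd. Placeholder values.

PRIMARY_IP="${PRIMARY_IP:-192.168.1.1}"   # primary pfSense LAN IP
BACKUP_VMID="${BACKUP_VMID:-42}"          # see `vim-cmd vmsvc/getallvms`

primary_alive() {
    ping -c 3 -W 2 "$PRIMARY_IP" >/dev/null 2>&1
}

# Pure decision step: maps the health-check result to an action name.
decide() {
    case "$1" in
        up)   echo "stop-backup"  ;;   # primary is back, retire the spare
        down) echo "start-backup" ;;   # primary gone, bring up the spare
        *)    echo "noop"         ;;
    esac
}

run_action() {
    # Naive: reissues the action every cycle; a real script tracks state.
    case "$1" in
        start-backup) vim-cmd vmsvc/power.on  "$BACKUP_VMID" ;;
        stop-backup)  vim-cmd vmsvc/power.off "$BACKUP_VMID" ;;
    esac
}

watch() {
    while true; do
        if primary_alive; then run_action "$(decide up)"
        else run_action "$(decide down)"; fi
        sleep 10
    done
}
# watch   # started from rc.local/cron on the host; not invoked here
```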

This setup has worked amazingly well.