Virtualization of pfSense and others, pre-check


gigatexal

I'm here to learn
Nov 25, 2012
2,913
607
113
Portland, Oregon
alexandarnarayan.com
My goal is to consolidate a few boxes onto one -- none are mission critical, really, so if the VM host I migrate them to goes down and takes them with it, that's OK.

I have a dedicated franken-Plex box (which is just janky and needs to be decommissioned), a few development servers that could really just be VMs, and a need for a proper router/firewall.

So the hope is to consolidate this onto my new eBay find:

an old OEM box with a 250W PSU, a few PCIe slots, 8GB of RAM, a 120GB SSD, and an i5-3470 (AES-NI, VT-d, etc.).

I've since become a bit enamored with the UI of Red Hat's Cockpit (An introduction to Cockpit, a browser-based administration tool for Linux), but I'm partial to ZFS, so Proxmox comes into the picture. One benefit of a VM host is that it could act as a sort of poor man's out-of-band management: I could put one of the unused NICs on a static private /24 (e.g. 192.168.200.1/24) and connect to it from my laptop if the network goes haywire, and provision things that way, since none of my hardware has anything like IPMI.
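Roughly what I have in mind for that management NIC (a sketch only, untested; the interface name eno2 is assumed, and the tooling depends on whether I land on Proxmox/ifupdown or CentOS/NetworkManager):

Code:
# Proxmox / ifupdown style, in /etc/network/interfaces (interface name assumed)
auto eno2
iface eno2 inet static
    address 192.168.200.1/24
    # no gateway on purpose: this NIC is only for direct laptop access

# or, on CentOS with NetworkManager:
# nmcli con add type ethernet ifname eno2 con-name mgmt ipv4.method manual ipv4.addresses 192.168.200.1/24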

So, choice #1: CentOS 8/Stream or Proxmox.

Step #2

Next I have a tricky migration ahead. I currently have an ISP modem that the ISP can remotely set to bridge mode (the process takes a few weeks for them to get back to me), but until then I'm OK with double NAT (should that occur) and will try the following:

Set the modem/router to 192.168.0.1 with DHCP off?
Set the DMZ to forward all packets to 192.168.0.2 -- which would be the pfSense/OPNsense VM.
Will pfSense/OPNsense be OK with the WAN NIC having a 192.168.x IP?
When the modem/router gets set into bridge mode, just restart the VM?

I'm not confident in the above. I mean it should work. Any suggestions here would be nice.
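As a sanity check on the double-NAT state, I figure something like this from a LAN client would tell me where I stand (untested sketch; ifconfig.me is just one what's-my-IP service):

Code:
# compare the WAN address pfSense reports with the address the internet sees
curl -s https://ifconfig.me
# if pfSense's WAN shows 192.168.0.2 but the IP above is the real public IP,
# I'm double-NATed (expected until the modem goes into bridge mode)
traceroute -n 8.8.8.8 | head -5   # two private-address hops = two NAT layers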

Step/Question #3

What's the best/easiest way to pass through hardware, and is it even necessary these days? Does the additional passthrough complexity translate into faster performance in the VM? Would it matter, considering I only have a residential gigabit network and my WAN pipe is only 400/20?

If I have a VM doing DHCP, routing, firewall, etc., how does the host securely access the web for things like NTP and updates? I assume I'd point resolv.conf at the VM, but how does that work when the VM only starts when the host does? i.e. could the host time out at boot waiting for NTP or something if the pfSense/OPNsense VM isn't running yet?
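If I go the Proxmox route, I'm assuming something like this handles the ordering (a sketch; the VM ID 100 and the firewall's LAN IP are made up):

Code:
# start the firewall VM automatically and first, then give it a head start
qm set 100 --onboot 1 --startup order=1,up=30
# host DNS points at the firewall VM's LAN IP once it's up
echo "nameserver 192.168.1.1" > /etc/resolv.conf
# chrony/systemd-timesyncd just keep retrying NTP, so a late-starting
# firewall VM shouldn't hang the host at boot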

Anything else I missed?
 

tubs-ffm

Active Member
Sep 1, 2013
171
57
28
I can't completely follow your explanation, but I would not virtualize the firewall if this is your door to the world. Keep it on dedicated hardware.


Especially in the community around pfSense you get this recommendation on security grounds. A long time ago I migrated my virtualized firewall from ESXi back to bare metal, but for another reason: security was not my concern. My family members weren't happy at all when the internet was down because I was making changes on the VM host. Now, any change on the VM host does not affect internet or WLAN in my home network.

Think about how important network uptime will be in your case.
 

newabc

Active Member
Jan 20, 2019
465
243
43
For a typical pfSense bare-metal box on gigabit cable internet, including IPS, I'd prefer a Qotom (usually ~$200 for a barebones unit with shipping and tax) or a thin-client box like an HP T730 or Wyse 5070 Extended, to save some on the electric bill.
 
  • Like
Reactions: gigatexal

j_h_o

Active Member
Apr 21, 2015
644
179
43
California, US
FWIW, I have pfSense and OPNsense virtualized on Hyper-V, so I guess what I say isn't directly applicable to your setup. But:
  • I'm running on X11SCH-F with E-2146G and with the standard 40Gbps Mellanox NICs, I get 1G/1G with no problems.
  • I live-migrate VMs around before doing maintenance on hosts, and this has been great. Any disk swaps, UPS replacement/power supplies, etc. -- no worries of outages.
  • At remote sites (Bell Canada) where I use PPPoE, I also spin up additional VMs when I'm doing work on the "production" pfSense to make sure I'm not locked out. Also great if I'm messing with VLANs and screw up the ICX6450's dual-mode stuff with interfaces...
  • The ability to snapshot and roll back to previous configurations has also been nice in case of emergencies. I accidentally fat-fingered UDP port forwards once (1024-65535, for example), which immediately crashed pfSense to the point where I wasn't able to roll back to a previous configuration in the GUI -- it was faster to just roll back to a previous disk image and boot again (see the sketch after this list).
  • Edit: I also consolidated everything onto 2 physical servers to reduce power usage. And I got bit by the Atom C2000 bug with pfSense; having it virtualized on a smaller number of hosts, with IPMI, etc., was my "solution" to prevent future recurrences. I guess I just wanted to say, "Virtualized doesn't necessarily mean lower uptime" and "You can definitely do 1Gbps/1Gbps with pfSense virtualized."
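The rollback workflow is basically this on Hyper-V (a rough sketch; the VM and checkpoint names are placeholders, not my actual setup):

Code:
# take a checkpoint before touching the firewall config
Checkpoint-VM -Name "pfSense" -SnapshotName "pre-change"
# if the change goes sideways, revert the whole VM to that point
Restore-VMSnapshot -VMName "pfSense" -Name "pre-change" -Confirm:$false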
Questions:
  1. Yes, pfSense will be fine with a 192.168.x.x address on "WAN", as long as your internal network is a different range entirely (i.e. doesn't overlap the modem's 192.168.0.0/24); you'll also probably need to uncheck "Block private networks" on the WAN interface.
  2. Yep, you can restart the VM as a last resort :)
  3. I used Hyper-V virtual NICs without issue. At these speeds I don't think passing hardware through is useful -- and it means you can't live-migrate.
  4. Yes, the DHCP/NTP syncs when the pfSense comes up. Most of the time, ntpd just retries; I don't have issues with time sync.
 
Last edited:
  • Like
Reactions: edge and gigatexal

Vesalius

Active Member
Nov 25, 2019
252
190
43
Doing this on Proxmox with pfSense. I'd recommend Proxmox over CentOS Stream (if you want a stable/reliable Red Hat clone, which is what CentOS 8 was supposed to remain, look into AlmaLinux). Agree with the above on questions 1-4. I have used passthrough as well as SR-IOV for the WAN and LAN connections to pfSense in the past, but currently use a Proxmox Linux bridge and VirtIO. I can saturate my 1 gig symmetrical connection either way.

Using passthrough or SR-IOV will depend a lot on the motherboard BIOS and whether it has the required settings. It's not required, though.
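The Linux bridge + VirtIO arrangement looks roughly like this on the Proxmox side (a sketch; the interface names, addresses, and VM ID are assumptions, not my actual config):

Code:
# /etc/network/interfaces on the Proxmox host
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports eno1      # LAN NIC; pfSense's LAN virtio NIC attaches here
    bridge-stp off
    bridge-fd 0

auto vmbr1
iface vmbr1 inet manual    # no host IP: this bridge just carries WAN to the firewall VM
    bridge-ports eno2      # NIC cabled to the ISP modem
    bridge-stp off
    bridge-fd 0

# attach virtio NICs to the pfSense VM (VM ID assumed)
# qm set 101 --net0 virtio,bridge=vmbr0 --net1 virtio,bridge=vmbr1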
 

kapone

Well-Known Member
May 23, 2015
1,095
642
113
For a typical pfSense bare-metal box on gigabit cable internet, including IPS, I'd prefer a Qotom (usually ~$200 for a barebones unit with shipping and tax) or a thin-client box like an HP T730 or Wyse 5070 Extended, to save some on the electric bill.
I'm somewhat partial to a more DIY-type solution in a scenario like this. As an example, I bought this a month or two ago (for a different purpose).

[Attached screenshot: Screen Shot 2021-02-11 at 10.55.39 AM.png]

This included an ITX board, a CPU, and 8GB of RAM...all for less than $50 shipped. This one had a CPU without AES-NI, but I've bought similar combos with different CPUs at roughly the same price. Add a 2- or 4-port gigabit NIC, or a 10G NIC, and this will demolish any of the SFF PCs most people consider for pfSense. Granted, you still need a case/PSU, but I'm sure we all have a parts bin... :) Oh, and it takes 12V/19V directly in and idles at ~10W. You could power it from a Dell laptop PSU (same receptacle).

Just another option.
 
  • Like
Reactions: gigatexal

gigatexal

I'm here to learn
Nov 25, 2012
2,913
607
113
Portland, Oregon
alexandarnarayan.com
I'm somewhat partial to a more DIY-type solution in a scenario like this. As an example, I bought this a month or two ago (for a different purpose).

View attachment 17504

This included an ITX board, a CPU, and 8GB of RAM...all for less than $50 shipped. This one had a CPU without AES-NI, but I've bought similar combos with different CPUs at roughly the same price. Add a 2- or 4-port gigabit NIC, or a 10G NIC, and this will demolish any of the SFF PCs most people consider for pfSense. Granted, you still need a case/PSU, but I'm sure we all have a parts bin... :) Oh, and it takes 12V/19V directly in and idles at ~10W. You could power it from a Dell laptop PSU (same receptacle).

Just another option.
Amazing find.

With the Linux bridge option and Proxmox, how do you ensure the host is protected by pfSense -- or do you? @Vesalius
 
Last edited:

kapone

Well-Known Member
May 23, 2015
1,095
642
113
Amazing find.

With the Linux bridge option and Proxmox, how do you ensure the host is protected by pfSense -- or do you?
I don't. A firewall...:) should really run on its own hardware. At ~10W, I can't imagine the decision to virtualize it is based on power consumption. So, physical it is!
 
  • Like
Reactions: abq and gigatexal

sovking

Member
Jun 2, 2011
84
9
8
Just to make things more complicated... what is needed if, instead of virtualizing pfSense on a single host, you are using a cluster of servers and want to set up a couple of pfSense instances in HA? These pfSense VMs, like any other VMs, can be moved/migrated from one host to another.

Let's think about a vSphere environment. What is needed? A distributed vSwitch? How many?
Do you have to physically connect the public/Internet NIC and the local-network NIC on every host, or can you connect them to a switch that all the hosts share? What is your suggested setup?
 

kapone

Well-Known Member
May 23, 2015
1,095
642
113
Just to make things more complicated... what is needed if, instead of virtualizing pfSense on a single host, you are using a cluster of servers and want to set up a couple of pfSense instances in HA? These pfSense VMs, like any other VMs, can be moved/migrated from one host to another.

Let's think about a vSphere environment. What is needed? A distributed vSwitch? How many?
Do you have to physically connect the public/Internet NIC and the local-network NIC on every host, or can you connect them to a switch that all the hosts share? What is your suggested setup?
Umm...you never connect your "public" NICs to ANY hosts in a half-serious setup. They are almost always terminated at your switch(es). So, in your scenario, here's what I'd do:

- Get the networking/DHCP/DNS set up correctly by terminating your WAN at the switch level (see the switch-config sketch after this list).
- Set up a bare-metal firewall first and get it working with the WAN terminated at the switch level.
- Get your virtualization cluster up and running.
- Get your firewall VM running on the cluster without (pfSense) HA first (turning off the bare-metal firewall).
- Configure the pfSense VM(s) for HA and test.
- Decommission the bare-metal firewall.
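"Terminated at the switch level" just means the ISP handoff lives in its own isolated VLAN that only the firewall uplinks sit in -- something like this (a rough sketch in Cisco-style syntax; the VLAN number and port names are made up):

Code:
vlan 99
 name WAN
! port facing the ISP modem/ONT
interface GigabitEthernet0/1
 switchport mode access
 switchport access vlan 99
! port facing each virtualization host's WAN uplink
interface GigabitEthernet0/2
 switchport mode access
 switchport access vlan 99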
 
  • Like
Reactions: gigatexal

sovking

Member
Jun 2, 2011
84
9
8
Umm...you never connect your "public" NICs to ANY hosts in a half-serious setup. They are almost always terminated at your switch(es). So, in your scenario, here's what I'd do:

- Get the networking/DHCP/DNS set up correctly by terminating your WAN at the switch level.
Yes, of course. And I agree with the remaining steps.

So your scheme would be like the following, where the "public" WAN enters the switch and from there connects to the cluster of servers:

[Attached diagram: Cattura.PNG]

But now the two firewall VMs can be on any server in this cluster, and the same goes for the rest of the VMs.
The firewall VMs receive internet traffic on a NIC connected to the switch, and every server has at least one such NIC.
Do those NICs have to be linked to a distributed virtual switch, or is that not needed?

And does the internal side of the firewall have to be linked to a distributed virtual switch?
 

kapone

Well-Known Member
May 23, 2015
1,095
642
113
Like I was saying, you're jumping about 10 steps ahead.

Try spinning up a virtualization cluster first and just configuring local network access for the different VMs in the cluster. The networking/vSwitch part will become self-explanatory.
 

gigatexal

I'm here to learn
Nov 25, 2012
2,913
607
113
Portland, Oregon
alexandarnarayan.com
I'm an idiot. During the migration I destroyed the power connector on one of the drives in my ZFS mirror. I'll try to import the pool degraded, but I'm assuming the drive is lost. That means I think I'll just go with a dedicated low-power box for the firewall and play with an HA virtualized setup later. It also seems I could benefit from a managed switch where I could do VLAN tagging.
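For the degraded import, the plan is roughly this (a sketch; the pool and device names are assumed):

Code:
zpool import                 # see what pools the system can find
zpool import -f tank         # a mirror with one good disk imports DEGRADED
zpool status tank            # confirm which member is missing
# once a replacement disk is in:
# zpool replace tank <old-device> /dev/disk/by-id/<new-disk>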