Virtualized pfSense OpenVPN performance tweaking


lte

Member
Apr 13, 2020
Welcome to the $1,000,000 question topic :D

My OpenVPN install residing on an ESXi 7 host (E5-2695v2, 4C allocated, 8GB RAM) is able to push barely 200Mbps through OpenVPN with Hardware-based VT and IOMMU enabled for the VM. Algorithm is currently AES-128-GCM. Two VMXNET3 NICs are configured.

Given that a decently specced Intel NUC can nowadays achieve nearly a gigabit of OpenVPN throughput, is there anything I'm missing here, or is the CPU/platform just too old by now? (I'm thinking of crypto accelerators like Intel QuickAssist here; the platform is not a dedicated VPN server.)

Happy to hear your thoughts on whether my speed is reasonable for the platform, and which settings you would always configure (as expected, switching between AES-NI and BSD crypto in pfSense made no performance difference).
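One quick sanity check before tuning anything: benchmark the crypto path inside the guest to confirm the VM actually exposes AES-NI. This is a generic OpenSSL benchmark sketch, not anything pfSense-specific:

```shell
# Rough single-core crypto ceiling: benchmark AES-128-GCM via OpenSSL.
# With AES-NI visible to the VM, an E5 v2 core should report on the
# order of GB/s; only a few hundred MB/s suggests the guest isn't
# actually using the instruction.
openssl speed -evp aes-128-gcm -seconds 1

# On FreeBSD/pfSense you can also confirm the CPU flag reached the guest:
# dmesg | grep AESNI
```

If the OpenSSL number is already far above your tunnel throughput, the bottleneck is packet processing, not the cipher.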
 

kapone

Well-Known Member
May 23, 2015
This may not address your question directly...

Your firewall (pfSense, OPNsense, or the like) is one of those things that is a prime candidate to NOT be virtualized. Don't go all ga-ga over virtualization and think... wait, I have xxx CPUs in ESXi!! I can put anything on it! :)

Your firewall (and to a large degree your main storage) should stay on their own hardware. They are designed to maximize their performance that way. Can you make it work if you virtualize them? Sure. Is it optimal? No.
 

PigLover

Moderator
Jan 26, 2011
Your bottleneck is likely the virtualized NIC drivers in the VM. Virtualized NIC drivers just can't push packets fast enough for high-performance network I/O.

Easiest fix would be to use passthrough to hand the NICs to the VM so that pfSense has direct control over the hardware. The better solution (since you likely want to share at least the LAN NIC with other VMs) would be to segment the NIC using SR-IOV and then pass one VF (virtual function) of the NIC through to pfSense.

Short of that - don't virtualize your firewall/router. As @kapone already noted, there are lots of other reasons not to do this besides NIC performance.
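For reference, enabling SR-IOV on an ESXi host is roughly this shape. This is a sketch only: `ixgbe` is an assumption for an Intel 10 GbE NIC, and the module name, `max_vfs` syntax, and VF counts depend on your hardware and driver.

```shell
# Run via SSH on the ESXi host. List NICs whose drivers support SR-IOV:
esxcli network sriovnic list

# Ask the ixgbe driver to create 8 virtual functions per port
# (a host reboot is required before the VFs appear):
esxcli system module parameters set -m ixgbe -p "max_vfs=8,8"

# After the reboot, add one VF to the pfSense VM as a PCI device in the
# vSphere UI; the guest then needs a driver for the VF, which is where
# FreeBSD support can get thin.
```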
 

hlhjedsfg

Member
Feb 2, 2018
With Proxmox and a low-end CPU (E5-2403 v2), I'm able to push 500 Mbit/s with AES-256-CBC / SHA256 (OPNsense 20.7), using the VM CPU type "host" and NIC type "virtio", with the AES-NI box checked in the guest (don't test without it).
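For anyone wanting to replicate that setup, the relevant settings end up as lines like these in the Proxmox VM config (a sketch; the VM ID, MAC addresses, and bridge names are placeholders):

```
# /etc/pve/qemu-server/<vmid>.conf (excerpt)
# "host" CPU type exposes AES-NI to the guest; virtio NICs avoid
# the overhead of emulating an e1000.
cpu: host
net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0   # WAN
net1: virtio=AA:BB:CC:DD:EE:F0,bridge=vmbr1   # LAN
```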

But I agree with kapone: in many cases, a hardware firewall is preferable!
 

zer0sum

Well-Known Member
Mar 8, 2013
Virtualized is absolutely fine... but it's definitely preferable to have the NICs passed through and dedicated, either directly or using SR-IOV.

I'd also recommend switching to WireGuard, as it's already faster than OpenVPN for me, and it's only going to get quicker once kernel-mode support arrives :D
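For comparison, a minimal WireGuard peer config is only a handful of lines (a sketch with placeholder keys, addresses, and endpoint):

```
# wg0.conf -- minimal client-side sketch
[Interface]
PrivateKey = <client-private-key>
Address = 10.0.8.2/24

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
AllowedIPs = 0.0.0.0/0
PersistentKeepalive = 25
```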
 

svtkobra7

Active Member
Jan 2, 2017
Virtualized is absolutely fine... but it's definitely preferable to have the NICs passed through and dedicated, either directly or using SR-IOV.
I've never been able to pull off SR-IOV with FreeBSD :(

My OpenVPN install residing on an ESXi 7 host (E5-2695v2, 4C allocated, 8GB RAM) is able to push barely 200Mbps through OpenVPN with Hardware-based VT and IOMMU enabled for the VM. Algorithm is currently AES-128-GCM. Two VMXNET3 NICs are configured.
I run a similar setup and can offer you a bit of a hack to up your speeds, but as per this comment ...

I'd also recommend switching to WireGuard, as it's already faster than OpenVPN for me, and it's only going to get quicker once kernel-mode support arrives :D
... it shouldn't be necessary for too long.

Bare-metal baseline = 942 / 937 Mbps

virt pfsense config:
  • E5-2690 v2 x 8 vCPUs
  • 8 GB RAM
  • 2x Intel X540-AT2 @ 1Gb
  • 2x VMXNET3 network adapters
  • distributed vSwitch / LAG
  • ovpn cipher = AES-128-GCM
  • line speed = 1GbE
In reconciling my 400 Mbps vs your 200 Mbps, some of the gap may be attributable to the 400 MHz edge the E5-2690 v2 (3.6 GHz max turbo) has over the E5-2695 v2 (3.2 GHz max turbo), but that alone seems disproportionate. I'm betting the rest can be tuned away. See second spoiler below.
  • Examples (System > Advanced > Networking): Hardware Checksum Offloading (Disable) = unchecked / Hardware TCP Segmentation Offloading (Disable) = checked / Hardware Large Receive Offloading (Disable) = checked.
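On the pfSense shell those GUI checkboxes roughly correspond to toggles like these (a sketch; `vmx0` is a placeholder VMXNET3 interface name, and the exact mapping can differ between pfSense versions):

```shell
# Disable TCP segmentation offload system-wide (the TSO checkbox):
sysctl net.inet.tcp.tso=0

# Disable TSO/LRO per interface; checksum offload (-txcsum/-rxcsum)
# stays enabled, matching the settings quoted above:
ifconfig vmx0 -tso -lro
```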
Aside from minor tweaks like those, you can get a nice boost by configuring multiple OpenVPN clients, assigning each an interface, and grouping them into a gateway group. Instead of 200 Mbps, with 3 OpenVPN clients you would see ~600 Mbps (in my case: 400 Mbps with one client, ~800 Mbps with 3). Of course this adds some complexity to your config, and it really won't be that relevant with WireGuard support around the corner. See third spoiler below.
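Stripped of the pfSense GUI, the multi-client idea looks like this (a sketch; the config file names and tun devices are placeholders - in pfSense itself you'd create three OpenVPN clients, assign each an interface, and put all three gateways in one gateway group on the same tier):

```shell
# Three parallel tunnels to the same provider, each on its own tun device:
openvpn --config client1.ovpn --dev tun1 --daemon
openvpn --config client2.ovpn --dev tun2 --daemon
openvpn --config client3.ovpn --dev tun3 --daemon

# Per-connection load balancing then spreads flows across tun1..tun3,
# so aggregate throughput scales while any single flow stays at the
# single-tunnel speed.
```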

Some reading on the topic: OpenVPN and Multi-WAN
Hope this helps.
 