VLANs, Bridges, Proxmox & pfSense issues


Jon Massey

Active Member
Nov 11, 2015
339
82
28
37
I've been having fun reconfiguring my home network the past few days and have hit a few issues. The basic setup is:
  • vlan1 - all trusted "client" traffic for the time being 192.168.0.0/24
  • vlan10 - separate network for proxmox corosync/clustering 192.168.1.0/24
  • vlan30 - traffic that needs to go out via a VPN 192.168.30.0/24
  • vlan50 - guest wifi 192.168.50.0/24
  • vlan200 - NFS storage between Proxmox hosts and FreeNAS guests 192.168.200.0/24
  • vlan99 - means of connecting the cable modem WAN to the pfSense VM
  • netcat - Proxmox VE 5 host connected to the switch via a 1 Gb bond and a 10 Gb bond
  • magnificat - PVE 5 host connected via a 1 Gb bond
See the attached diagram for the desired final configuration.

As it stands, DHCP for vlan1 and routing to the outside world are still being done by my Vigor 2925 router, which has its WAN port directly connected to the cable modem and receives its WAN address via DHCP. I'm testing DHCP and routing on the pfSense VM in vlan30 as a proof of concept, which won't disturb any important currently-running services.

What works:
  • the PVE hosts (netcat and magnificat) can see each other fine in vlan1, vlan10 and vlan30, and Proxmox clustering is happy
  • other devices in vlan1 can connect to both PVE hosts
  • both PVE hosts can connect to the FreeNAS VM via vlan200 or vlan1 and have mounted NFS shares
  • devices in vlan1 can connect to FreeNAS
  • the kitekat container on magnificat can connect to the outside world via vlan1, and devices in vlan1 can connect to it (SSH, Plex)
  • devices in vlan1 can connect to the pfSense VM via its LAN interface (SSH, web)
What doesn't work:
  • neither magnificat (PVE host) nor its containers can ping the pfSense OPT1 vlan30 address (but netcat and the switch can)
  • pfSense is running a DHCP server on the OPT1 vlan30 interface, but dhclient in kitekat gets nothing (so I've set its address statically for now)
  • the ARP table on the pfSense box contains entries for the switch, itself, both PVE hosts (all pingable), and kitekat (not pingable)
  • kitekat cannot ping the switch in vlan30 (but can reach both PVE hosts in vlan30)

All of which makes me think there's a problem with the bridging setup on the PVE hosts. As I understand it, the bridges are essentially L2 devices, so I shouldn't need to set up any routes on the PVE hosts. If anyone can point me towards how I might further debug my current issues, I'd be extremely grateful.

Host bridging, vlan and bonding set up as per: Network Model - Proxmox VE
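
For reference, a few generic L2 checks run from the PVE host should at least show where frames stop (these are stock brctl/iproute2/tcpdump commands, with interface names as per the configs below):
Code:
# confirm the bridge really contains the expected VLAN interface
brctl show vmbr30          # or: ip -d link show vmbr30

# watch the bridge's MAC learning table while pinging from kitekat
bridge fdb show br vmbr30

# capture on the raw VLAN interface to see whether ARP requests and
# replies actually make it onto (and back off) the wire
tcpdump -eni vlan30 arp or icmp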

magnificat interfaces file:
Code:
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage part of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

iface enp1s0f0 inet manual
iface enp1s0f1 inet manual

auto bond0
iface bond0 inet manual
    slaves enp1s0f0 enp1s0f1
    bond_miimon 100
    bond_mode 802.3ad

#auto enp1s0f1
#iface enp1s0f1 inet static
#    address  192.168.1.1
#    netmask  255.255.255.0
##Clusternet

auto vmbr0
iface vmbr0 inet static
    address  192.168.0.117
    netmask  255.255.255.0
    gateway  192.168.0.1
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0

auto vlan10
iface vlan10 inet manual
    vlan_raw_device bond0

auto vmbr2
iface vmbr2 inet static
    address 192.168.1.1
    netmask 255.255.255.0
    network 192.168.1.0
    bridge_ports vlan10
    bridge_stp off
    bridge_fd 0
    #post-up ip route add table vlan10 default via 192.168.1.254 dev vmbr2
    #post-up ip rule add from 192.168.1.0/24 table vlan10
    #post-down ip route del table vlan10 default via 192.168.1.254 dev vmbr2
    #post-down ip rule del from 192.168.1.0/24 table vlan10
#clusternet

auto vlan200
iface vlan200 inet manual
    vlan_raw_device bond0

auto vmbr1
iface vmbr1 inet static
    address  192.168.200.254
    netmask  255.255.255.0
    bridge_ports vlan200
    bridge_stp off
    bridge_fd 0
    #post-up ip route add table vlan200 default via 192.168.200.1 dev vmbr2
    #post-up ip rule add from 192.168.200.0/24 table vlan200
    #post-down ip route del table vlan200 default via 192.168.200.1 dev vmbr2
    #post-down ip rule del from 192.168.200.0/24 table vlan200
#storagenet

auto vlan30
iface vlan30 inet manual
    vlan_raw_device bond0

auto vmbr30
iface vmbr30 inet static
    address  192.168.30.1
    netmask  255.255.255.0
    network 192.168.30.0
    bridge_ports vlan30
    bridge_stp off
    bridge_fd 0
    #post-up ip route add table vlan30 default via 192.168.30.254 dev vmbr30
    #post-up ip rule add from 192.168.30.0/24 table vlan30
    #post-down ip route del table vlan30 default via 192.168.30.254 dev vmbr30
    #post-down ip rule del from 192.168.30.0/24 table vlan30
#VPN
netcat's interfaces file:
Code:
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage part of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

iface enp3s0 inet manual

#allow-hotplug enp1s0f0

auto enp4s0
iface enp4s0 inet static
    address  192.168.1.2
    netmask  255.255.255.0
#clusternet

auto enp1s0f2
iface enp1s0f2 inet manual

auto enp1s0f3
iface enp1s0f3 inet manual

auto bond1
iface bond1 inet manual
    slaves enp1s0f2 enp1s0f3
    bond_miimon 100
    bond_mode 802.3ad

auto vmbr1
iface vmbr1 inet static
    address 192.168.0.119
    netmask 255.255.255.0
    gateway 192.168.0.1
    bridge_ports bond1
    bridge_stp off
    bridge_fd 0
#un-vlan 10g bond

auto vlan99
iface vlan99 inet manual
    vlan_raw_device bond1
#WANvlan

auto vlan10
iface vlan10 inet manual
    vlan_raw_device bond1
#CLUSTERNETvlan

auto vlan30
iface vlan30 inet manual
    vlan_raw_device bond1
#VPNvlan

auto vlan50
iface vlan50 inet manual
    vlan_raw_device bond1
#PUBLICvlan

auto vlan200
iface vlan200 inet manual
    vlan_raw_device bond1
#STORAGENETvlan

auto vmbr0
iface vmbr0 inet static
    address  192.168.0.118
    netmask  255.255.255.0
    gateway  192.168.0.1
    bridge_ports enp3s0
    bridge_stp off
    bridge_fd 0
 


BlueLineSwinger

Active Member
Mar 11, 2013
177
68
28
So... many... VMBRs...

When I set up my server I used Open vSwitch instead. I only need one VMBR, and the VLAN(s) used by each guest are configured on its NIC(s) instead of being determined by which VMBR it's connected to. I'm running about half a dozen VLANs through a single VMBR.

Installation is a single package, and setup is simple. These are the notes I took when configuring it via the GUI:
  • OVS Bond (two Gb ports)
    • name: bond0
    • mode: lacp balance-slb (MAC)
    • OVS bridge: vmbr0
    • slaves: eno1, eno2
  • OVS Bridge
    • name: vmbr0
    • bridge ports: bond0, vlan99
    • autostart
  • OVS IntPort (for management)
    • name: vlan99
    • OVS bridge: vmbr0
    • VLAN tag: 99

The resulting '/etc/network/interfaces':
Code:
auto lo
iface lo inet loopback
iface eno1 inet manual
iface eno2 inet manual

allow-vmbr0 bond0
iface bond0 inet manual
        ovs_bonds eno1 eno2
        ovs_type OVSBond
        ovs_bridge vmbr0
        ovs_options lacp=active bond_mode=balance-slb

auto vmbr0
iface vmbr0 inet manual
        ovs_type OVSBridge
        ovs_ports bond0 vlan99

allow-vmbr0 vlan99
iface vlan99 inet static
        address  172.16.99.10
        netmask  255.255.255.0
        gateway  172.16.99.1
        ovs_type OVSIntPort
        ovs_bridge vmbr0
        ovs_options tag=99

Other than that, the only thing I can think of is to either eliminate the 1 Gb bonded link on 'netcat', or make it a dedicated management link. Run everything else through the 10 Gb link.
 

PigLover

Moderator
Jan 26, 2011
3,186
1,545
113
Another vote for OVS. Makes your setup much easier to manage. I can't explain the particular problems you are having without really going in depth on all your configs - but I know that "easier to manage" is also "easier to debug".
 
  • Like
Reactions: Jon Massey

Jon Massey

Active Member
Nov 11, 2015
339
82
28
37
OVS sounds good; I used bridges because that's what the PVE docs recommended. I'll give it a try and post the results!

Sent from my A0001 using Tapatalk
 

Jon Massey

Active Member
Nov 11, 2015
339
82
28
37
By Jove, chaps, that's bloody done it! What a joy to configure compared to a billion bridges - thanks a million!
 

BlueLineSwinger

Active Member
Mar 11, 2013
177
68
28
Cool, glad it works.

Unfortunately Proxmox seems to lag on updating a lot of their docs, and often doesn't put newer, better solutions such as OVS up front to make them easier to discover. Unless maybe there's some performance penalty I'm not aware of, OVS really should be the standard networking setup for new installs.
 

Kybber

Active Member
May 27, 2016
138
43
28
48
Cool. Is there anything one should be concerned with when switching from Proxmox's default setup to OVS on a running system? In my case it's a single home server, so I am not worried about some downtime for services.
 

Jon Massey

Active Member
Nov 11, 2015
339
82
28
37
The instructions on the wiki (Open vSwitch - Proxmox VE) and the config provided by BlueLineSwinger should hopefully be enough to get you started. Just MAKE SURE YOU INSTALL OPENVSWITCH before you set the config and restart the networking service, otherwise you'll lose connection to the host. But who would be silly enough to do such a thing... :oops:
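
For anyone else doing the conversion, a rough sketch of a sensible order (assuming you have console/IPMI access to the host in case networking doesn't come back):
Code:
# 1. install OVS *first*, while the existing bridge setup still works
apt update && apt install openvswitch-switch

# 2. keep a copy of the old config, then rewrite /etc/network/interfaces
#    along the lines of BlueLineSwinger's example above
cp /etc/network/interfaces /etc/network/interfaces.bak

# 3. apply it; a reboot is the least error-prone way, since restarting
#    networking over SSH is exactly how you cut yourself off
reboot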

I successfully converted both of my hosts over, and now all containers and VMs are behaving as I expect from a networking standpoint.
 

Kybber

Active Member
May 27, 2016
138
43
28
48
Thanks! I'm not too worried about losing connection to the host since it's in my basement. As long as I can easily get existing VMs and containers up and running after the switch, I am happy :)
 

PigLover

Moderator
Jan 26, 2011
3,186
1,545
113
Cool. Is there anything one should be concerned with when switching from Proxmox's default setup to OVS on a running system? In my case it's a single home server, so I am not worried about some downtime for services.
The only real concern would be a small performance hit in packet processing compared to a Linux bridge. You'd likely only notice it above a million pps (i.e., 10GbE links with small-packet applications). For home use it's nothing to worry about.
 
  • Like
Reactions: Kybber

I_D

Member
Aug 3, 2017
83
20
8
113
Just MAKE SURE YOU INSTALL OPENVSWITCH before you set the config and restart the networking service, otherwise you'll lose connection to the host. But who would be silly enough to do such a thing... :oops:
Don't ask me how, but I once managed to delete the management interface on an ESXi host, locking myself out.
Since I'd been doing some cable management earlier, the IPMI on that box wasn't connected either. :confused:
At least it was only a test box in my basement and not a hosted box somewhere, but yeah, it made me facepalm pretty hard. :D
 
  • Like
Reactions: Jon Massey