New Proxmox server - need recommendation on setting up network


Jay Quin

New Member
Jul 26, 2018
Hello, I'm setting up a new Proxmox server. I've been designing the network and was hoping to get a review from someone with more experience. The server has six physical network ports, and I plan to use Ansible to maintain the VMs/LXCs and the OVS configuration. My goals are:
  • As fast a connection as possible between VMs/LXCs.
  • The simplest configuration to maintain.
  • The fastest possible connection to my PC for ZFS send/receive backups (see the sketch after this list).
  • Secure separation between the user-facing VMs/LXCs and the ones used for management (e.g. Zabbix).
  • As little traffic as possible between VMs/LXCs leaving the PVE node (i.e. avoiding inter-VLAN traffic that has to go through the main router).
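For the backup goal, here is roughly what I have in mind over the direct jumbo-frame link. The dataset, snapshot, and host names are just placeholders:

Code:
# Initial full send of a snapshot to the backup PC over the direct link
zfs snapshot rpool/data/vm-100-disk-0@backup1
zfs send rpool/data/vm-100-disk-0@backup1 | \
  ssh backup@192.168.100.20 zfs receive -u tank/backups/vm-100-disk-0

# Subsequent runs only send the delta between snapshots
zfs snapshot rpool/data/vm-100-disk-0@backup2
zfs send -i @backup1 rpool/data/vm-100-disk-0@backup2 | \
  ssh backup@192.168.100.20 zfs receive -u tank/backups/vm-100-disk-0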
I have a pfSense router running on a mini PC that is way overpowered for the job. I'm planning to run HAProxy, Zabbix proxy, and syslog-ng as a forwarder on it, so no client devices besides my management PC will connect directly to the VMs/LXCs on the management VLAN. The HAProxy part would look something like the sketch below.
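This is only a rough sketch of the HAProxy idea; the backend names and addresses are placeholders for the user-facing VMs on VLAN 50:

Code:
# Hypothetical HAProxy frontend on pfSense, passing TLS straight
# through to the user-facing VMs
frontend www
    bind *:443
    mode tcp
    default_backend web_vms

backend web_vms
    mode tcp
    balance roundrobin
    server web1 172.20.50.11:443 check
    server web2 172.20.50.12:443 check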

A graphic of my network plan is here. I know the firewall between the OVS bridges is probably overkill. I could easily put both VLANs on one bridge and use a few iptables or OpenFlow rules for traffic between VLANs (roughly like the sketch below), but I figure the firewall container can also double as an SSH bastion host. I also don't really need a separate physical bonded connection for the management VLAN, but I figured why not, since I have the ports available.
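If I did collapse it onto one bridge, the inter-VLAN policy might look roughly like this. The management PC address is a placeholder, and the interface names assume the internal ports from the config below:

Code:
# Allow the management PC into the user-facing VLAN
iptables -A FORWARD -i vlan10 -o vlan50 -s 172.20.10.50 -j ACCEPT
# Allow return traffic for established connections
iptables -A FORWARD -i vlan50 -o vlan10 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# Drop everything else crossing between the VLANs
iptables -A FORWARD -i vlan50 -o vlan10 -j DROP
iptables -A FORWARD -i vlan10 -o vlan50 -j DROP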

Below is my /etc/network/interfaces. It includes a cable directly connected to my management PC so I can use jumbo frames. Please let me know if there are any mistakes or anything else I should consider. Also, since I simplified the network into two bridges, one per VLAN, I know I could get away with plain Linux bridges; my understanding of the equivalent config is sketched below. What is the CPU/memory overhead of OVS compared to Linux bridges?
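This is a rough, untested sketch of what the main bridge would look like with a classic ifenslave bond and a Linux bridge instead of OVS (VMs would then attach to vmbr50 untagged):

Code:
# Bond eth2 and eth3 with kernel bonding instead of OVSBond
auto bond50
iface bond50 inet manual
  bond-slaves eth2 eth3
  bond-mode 802.3ad
  bond-miimon 100
  bond-lacp-rate 1

# Linux bridge on the VLAN 50 subinterface of the bond
auto vmbr50
iface vmbr50 inet static
  address 172.20.50.5
  netmask 255.255.255.0
  gateway 172.20.50.1
  bridge_ports bond50.50
  bridge_stp off
  bridge_fd 0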

Thanks for the help.

Code:
# Loopback interface
auto lo
iface lo inet loopback

# Not used; defined but left down
iface eth0 inet manual

# Direct connection between the backup PC and the host (jumbo frames),
# also a fallback connection if there is an issue with Open vSwitch.
# No gateway here: this is a point-to-point link, and only one default
# gateway is allowed in this file (it lives on vlan50 below).
auto eth1
iface eth1 inet static
  address 192.168.100.10
  netmask 255.255.255.0
  mtu 8996

# Bond eth2 and eth3 together for vmbr50
allow-vmbr50 bond50
iface bond50 inet manual
  ovs_bridge vmbr50
  ovs_type OVSBond
  ovs_bonds eth2 eth3
  ovs_options bond_mode=balance-tcp lacp=active other_config:lacp-time=fast

# Main ovs_bridge ("allow-ovs" only: "auto" can bring the bridge up
# before the Open vSwitch service has started)
allow-ovs vmbr50
iface vmbr50 inet manual
  ovs_type OVSBridge
  ovs_ports bond50 vlan50

# Virtual interface for VLAN 50 (carries the single default gateway;
# note the type is "OVSIntPort", which is case-sensitive)
allow-vmbr50 vlan50
iface vlan50 inet static
  ovs_type OVSIntPort
  ovs_bridge vmbr50
  ovs_options tag=50
  ovs_extra set interface ${IFACE} external-ids:iface-id=$(hostname -s)-${IFACE}-vif
  address 172.20.50.5
  netmask 255.255.255.0
  gateway 172.20.50.1

# Bond eth4 and eth5 together for vmbr10
allow-vmbr10 bond10
iface bond10 inet manual
  ovs_bridge vmbr10
  ovs_type OVSBond
  ovs_bonds eth4 eth5
  ovs_options bond_mode=balance-tcp lacp=active other_config:lacp-time=fast

# Management ovs_bridge ("allow-ovs" only, as above)
allow-ovs vmbr10
iface vmbr10 inet manual
  ovs_type OVSBridge
  ovs_ports bond10 vlan10

# Virtual interface for VLAN 10 (no gateway: the default route
# already goes via vlan50)
allow-vmbr10 vlan10
iface vlan10 inet static
  ovs_type OVSIntPort
  ovs_bridge vmbr10
  ovs_options tag=10
  ovs_extra set interface ${IFACE} external-ids:iface-id=$(hostname -s)-${IFACE}-vif
  address 172.20.10.5
  netmask 255.255.255.0
 

MikeWebb

Member
Jan 28, 2018
How did it work out for you? I should really look at using the OVS support built into PVE.