Proxmox Network Model with multiple interfaces?


cookiesowns

Active Member
Cross posting from Proxmox Forums...

Hi,

I'm setting up a new lab environment for PVE to test Ceph and Proxmox's clustering capability for a future setup.

We have the following nodes:

7x E3 v2, 32GB RAM, with 2x1GbE LACP bonded down to a multi-chassis switch

4x E5 v1, with 8x 400GB SSD & 2x 1TB 7.2K HDD, with 2x1GbE LACP bonded down to the multi-chassis switch and 4x10GbE for the Ceph network

4x E5 v3 with 2x1GbE LACP bonded down, 2x10GbE for Ceph, and 1x10GbE for VM traffic.

I've been looking at the documentation for the network model, but I wasn't able to determine whether this kind of granular control is possible without resorting to an elaborate configuration.

Really what I want is:

The 4x E5 v3 nodes should have the 2x10GbE dedicated purely to Ceph storage traffic; these nodes will strictly act as RBD clients, not servers. The 2x1GbE LACP bond will be used for management and cluster network traffic. The remaining 10GbE will carry 2 different VLANs strictly for VM traffic.

The 7x E3 v2 nodes will have multiple VLANs trunked down: cluster network, management/Proxmox network, VM network, and a way to reach the Ceph RBDs (these will host very light VMs).

The 4x E5 v1 nodes will be dedicated to Ceph, so the network configuration there is a bit easier: 2x1GbE LACP for cluster/management, and 4x10GbE strictly for Ceph.

I already have 4 subnets allocated for this.

My main question: is it necessary to create a bridge for each VLAN, or is a bridge only needed on VLANs where VMs will actually live?
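To make it concrete, here's a rough /etc/network/interfaces sketch of what I have in mind for one of the E5 v3 nodes (interface names, VLAN IDs and addresses are placeholders, not our real values). My assumption is that the Ceph bond just needs an IP and no bridge, and only the VM VLANs get bridges. Does that look sane?

# Hypothetical sketch for one E5 v3 node; names, VLAN IDs and subnets are made up
auto bond0
iface bond0 inet manual
    slaves eth0 eth1
    bond_mode 802.3ad
    bond_miimon 100

# Management + cluster traffic on the 1GbE LACP bond, bridged so the host has its IP here
auto vmbr0
iface vmbr0 inet static
    address 10.0.10.11
    netmask 255.255.255.0
    gateway 10.0.10.1
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0

# Ceph client traffic on the 2x10GbE bond: just an IP, no bridge, since no VM lives here
auto bond1
iface bond1 inet static
    address 10.0.20.11
    netmask 255.255.255.0
    slaves eth2 eth3
    bond_mode 802.3ad
    bond_miimon 100

# VM traffic on the remaining 10GbE: one bridge per VM VLAN
auto vmbr100
iface vmbr100 inet manual
    bridge_ports eth4.100
    bridge_stp off
    bridge_fd 0

auto vmbr200
iface vmbr200 inet manual
    bridge_ports eth4.200
    bridge_stp off
    bridge_fd 0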

EDIT: I've just read that by default VM migration happens over the defined cluster network. Is there a way to change this behavior, say if we want migrations to go over the single 10GbE interface used for VM traffic?
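If I'm reading the docs right, newer PVE versions can pin migration traffic to a specific network via /etc/pve/datacenter.cfg, something along these lines (the subnet is a placeholder for the VM-traffic 10GbE network, and I haven't verified which version supports this):

# /etc/pve/datacenter.cfg, assuming a PVE version that supports the migration option
# 10.0.30.0/24 stands in for the VM-traffic 10GbE subnet
migration: secure,network=10.0.30.0/24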
 

Patrick

Administrator
Staff member
Just got off a flight and am writing this a bit tired, so forgive the line of thinking.

You are not going to like my advice on this, but given the scale you already have, what about swapping to all 10GbE or 40GbE?

You could sell the E3 nodes and get an E5 node or two with faster networking.

Seems like you already have 10GbE and your issue is how to deal with having 1GbE in the mix. After reading that, my initial impression is: why deal with LACP on ports that are 1/10th the speed?
 

cookiesowns

Active Member
Hey Patrick. Yeah, that's what I would have thought too, but the E3 nodes are useful for smaller VMs. I would love to swap to all 10GbE or 40GbE, but unless I do an inter-host ring, I don't have the port density to do all 10GbE :( Not enough switch ports.

My question was mostly about how Proxmox handles traffic routing between the networks, but I've already spun up a good portion of the E3 nodes in a cluster using VLAN trunks on the interfaces, and it seems to be working well.
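Roughly what the E3 nodes ended up with, with illustrative names, VLAN IDs and addresses (everything rides the 2x1GbE bond as a trunk; only the VM VLAN gets a bridge):

auto bond0
iface bond0 inet manual
    slaves eth0 eth1
    bond_mode 802.3ad
    bond_miimon 100

# Management / Proxmox GUI on its own tagged VLAN, bridged for the host address
auto vmbr0
iface vmbr0 inet static
    address 10.0.10.21
    netmask 255.255.255.0
    gateway 10.0.10.1
    bridge_ports bond0.10
    bridge_stp off
    bridge_fd 0

# Cluster network: plain tagged sub-interface, no bridge
auto bond0.20
iface bond0.20 inet static
    address 10.0.20.21
    netmask 255.255.255.0

# Ceph RBD access: also just a tagged sub-interface with an IP
auto bond0.30
iface bond0.30 inet static
    address 10.0.30.21
    netmask 255.255.255.0

# VM traffic VLAN gets a bridge so guests can attach
auto vmbr40
iface vmbr40 inet manual
    bridge_ports bond0.40
    bridge_stp off
    bridge_fd 0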

I'll chime back in if I hit any bottlenecks in the future.