Proxmox network conundrum, help needed and appreciated


vl1969

Active Member
Feb 5, 2014
Hello everyone.
I have asked this on the Proxmox forum and got some info, but I haven't had time to test it out yet. While I wait to get into it, I thought I'd ask here; as a more general forum, I might get more info to work with.

So here it goes.
I have a Proxmox home server based on a Supermicro X8DT3-LN4F motherboard.
It has IPMI and 4 NICs.
2x Xeon X5670.
64 GB RAM.
Proxmox 5.3-8

Now, my question is network related.
What would be the best config for a fast, reliable network setup here?
This is a home setup. All machines are connected to a Netgear 48-port switch. It is a managed switch, but I use it more as a set-it-and-forget-it setup, since all my machines are Linux and the management app is Windows-only.
The whole network is:
Cable modem connected to a pfSense box, which is my router, firewall, and anything else I may want.
That box is connected to the switch.
All other clients go into that switch as well.

The Proxmox node, as it sits now, is configured like this:
All NICs are plugged into the switch.
NIC1 is set as the port for vmbr0, the default setup. With the other NICs I have been experimenting with bonding and connecting the bond to vmbr1.
But I do not get an IP on it or through it.
FYI: all connections are set to DHCP and I do the IP reservations on my pfSense box. That makes it easier to navigate by hostname, since the IPs and names are assigned and reserved on the router and I always get the same IP on the same box.
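
For reference, what I have been trying for the bond is roughly along these lines in /etc/network/interfaces (typed from memory as a sketch; the NIC names eth1/eth2 and the 802.3ad mode are placeholders, not necessarily exactly what I have):

Code:
auto bond0
iface bond0 inet manual
        slaves eth1 eth2
        bond_miimon 100
        bond_mode 802.3ad
        bond_xmit_hash_policy layer2+3

auto vmbr1
iface vmbr1 inet dhcp
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0

As I understand it, the DHCP request has to go out on the bridge (vmbr1) itself, not on the bond or the physical NICs, and 802.3ad only works if a matching LAG is configured on the switch side, which I don't think I ever set up given my set-it-and-forget-it switch.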
So what is the best option for me here?
How do I get the best network performance?

Thanks.
 

ttabbal

Active Member
Mar 10, 2016
For the best performance, forget bonding and use 10G. I managed to get bonding working on Linux (not Proxmox, though) once, but without a lot of clients the traffic doesn't spread well across the ports. It's not worth the hassle on a home network.

I don't remember the setup, but I just followed some online documentation.
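
If it helps, the Linux bonding driver exposes its state under /proc/net/bonding/, so you can at least check whether the bond came up and what mode it's actually in (bond0 here is just an example name, yours may differ):

Code:
cat /proc/net/bonding/bond0

Even when that all looks right, a single transfer between two machines still hashes onto one 1G link, which is why it never seemed worth it to me at home.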
 

PigLover

Moderator
Jan 26, 2011
You have a single switch, a single router and a single VM host. Direct cabled connections between them are unlikely to fail (they can, but very low probability). You don't really have any opportunity to increase overall reliability using Bonding, etc.

As @ttabbal notes, getting higher performance through Link Aggregation or ECMP is very difficult in configurations with a small number of hosts and clients - the bond hashes each flow onto a single member link, so traffic between any one client and the server still tops out at the speed of one port. Not worth the hassle, and the benefits are limited.

In short - you are chasing a ghost - an intriguing idea that is unlikely to bear fruit.

+1 to @ttabbal - if you want higher performance, invest in some 10GbE NICs and a switch to land them on.
 

vl1969

Active Member
Feb 5, 2014
Ok, thanks everyone.
Let me tell you first thing: I am not chasing ghosts.
I just want to use all the resources I have in an efficient manner.
10G is not an option. As others point out, it's only one server and a couple of PCs. There's no 10G in the PCs, as one of them is a NUC, one is a laptop, and one is a custom-built PC. So the only 10G candidates are the server and one PC. Hardly worth it.

But since I do have 4 NICs, I figure why not actually use them in the best possible way. Hence the questions.
 

PigLover

Moderator
Jan 26, 2011
Chasing ghosts wasn't meant as an insult - it's just that too many people go after bonding or aggregation, etc., without the requisite scale (they have only a small number of hosts/clients) or resources (they are bonding to a single switch/router rather than to a redundancy group). Doing that is a LOT of work for almost no benefit.
 

Evan

Well-Known Member
Jan 6, 2016
Also, there's probably not much point bonding to a cheap Netgear switch either.
 

vl1969

Active Member
Feb 5, 2014
Evan said:
Also, there's probably not much point bonding to a cheap Netgear switch either.

Not too cheap. It is an enterprise-grade $200 switch that I got from my work when we were closing down.
Not fancy, but not bad.