Proxmox VE 4.3: setting up an LACP bond


nephri

Active Member
Sep 23, 2015
541
106
43
46
Paris, France
Hi, I installed Proxmox VE 4.3 on a server that has:
- eth0 ==> 1 Gigabit RJ45, connected to a LB4M switch
- eth1 ==> 1 Gigabit RJ45, not connected
- eth2 ==> 10 Gigabit SFP+, connected to a Gnodal GS4008 switch
- eth3 ==> 10 Gigabit SFP+, connected to a Gnodal GS4008 switch

My /etc/network/interfaces is currently configured like this:

Code:
auto lo
iface lo inet loopback

iface eth0 inet manual

iface eth1 inet manual

iface eth2 inet manual

iface eth3 inet manual

auto vmbr0
iface vmbr0 inet static
        address  172.17.1.207
        netmask  255.255.0.0
        gateway  172.17.1.254
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
I'm trying to configure an LACP bond with eth2 and eth3.

I've done the LACP configuration on the GS4008 and used the PVE GUI to create a Linux bond with slaves "eth2 eth3".
When I restart the server, the bond seems to work: "ifconfig bond0" shows it UP, but the server becomes unreachable from the local network (even over eth0 through the vmbr0 bridge).

Typically I tried this (with some variants):

Code:
auto bond0
iface bond0 inet static
   address  172.17.1.206
   netmask  255.255.0.0
   slaves eth2 eth3
   bond_miimon 100
   bond_mode 802.3ad
After each try I restarted the server, but I lost all network access.
I had to discard my /etc/network/interfaces configuration and reboot the server again to recover the network.
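When a bond shows UP but no traffic passes, the first thing worth checking is whether LACP actually negotiated with the switch. The kernel reports this in /proc/net/bonding/bond0, including each slave's MII status and 802.3ad aggregator ID. A small Python sketch that checks a captured copy of that output (the sample text below is illustrative, in the usual bonding-driver format, not taken from this server):

```python
# Check whether every slave in a Linux bond joined the same 802.3ad
# aggregator. Parses text in the format of /proc/net/bonding/bond0.
# SAMPLE is an illustrative example, not output from the thread's server.

SAMPLE = """\
Ethernet Channel Bonding Driver: v3.7.1

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
MII Status: up
Transmit Hash Policy: layer3+4 (1)

Slave Interface: eth2
MII Status: up
Aggregator ID: 1

Slave Interface: eth3
MII Status: up
Aggregator ID: 1
"""

def slave_aggregators(text):
    """Return {slave_name: aggregator_id} parsed from bonding status text."""
    slaves, current = {}, None
    for line in text.splitlines():
        key, _, value = line.partition(":")
        key, value = key.strip(), value.strip()
        if key == "Slave Interface":
            current = value
        elif key == "Aggregator ID" and current:
            slaves[current] = int(value)
    return slaves

aggs = slave_aggregators(SAMPLE)
print(aggs)                          # {'eth2': 1, 'eth3': 1}
print(len(set(aggs.values())) == 1)  # True: both slaves share one aggregator
```

If the two slaves report different aggregator IDs, LACP usually did not form on the switch side (for example, the ports are not in the same channel-group), which matches the symptom of a bond that looks UP but passes no traffic.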

So, how do you achieve such a configuration inside Proxmox?
Do you think I should give OVS bridge and OVS bond a try?

Thanks in advance for any advice.

Séb.
 

nephri
Just to say that after resetting the network config on PVE and the channel-group on the switch, then redoing the entire setup, everything worked like a charm.

I used the GUI to configure the Linux bond and the Linux bridge on the PVE side, which produced the following /etc/network/interfaces:

Code:
auto lo
iface lo inet loopback

iface eth0 inet manual

iface eth1 inet manual

iface eth2 inet manual

iface eth3 inet manual

auto bond0
iface bond0 inet manual
        slaves eth2 eth3
        bond_miimon 100
        bond_mode 802.3ad
        bond_xmit_hash_policy layer3+4

auto vmbr0
iface vmbr0 inet static
        address  172.17.1.207
        netmask  255.255.0.0
        gateway  172.17.1.254
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0
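The `bond_xmit_hash_policy layer3+4` line decides which slave a given flow uses: the kernel hashes source/destination IPs and TCP/UDP ports and takes the result modulo the number of slaves, so a single flow sticks to one link while different flows can spread across both. A simplified Python illustration of the idea (not the kernel's exact formula):

```python
import ipaddress

def pick_slave(src_ip, dst_ip, src_port, dst_port, n_slaves=2):
    """Simplified layer3+4 slave selection: XOR IPs and ports, mod slave
    count. Illustrative only -- the real bonding driver differs in detail."""
    ips = int(ipaddress.ip_address(src_ip)) ^ int(ipaddress.ip_address(dst_ip))
    ports = src_port ^ dst_port
    return (ips ^ ports) % n_slaves

# One flow always maps to the same slave...
a = pick_slave("172.17.1.207", "172.17.1.10", 45000, 2049)
b = pick_slave("172.17.1.207", "172.17.1.10", 45000, 2049)
print(a == b)  # True: per-flow stickiness, no packet reordering

# ...but a different source port may hash to the other link.
print(pick_slave("172.17.1.207", "172.17.1.10", 45001, 2049))
```

This is why a single TCP stream never exceeds one link's bandwidth with 802.3ad, while many parallel streams can use both.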
On the switch, I did a configuration like this:

Code:
   config
   set port-channel enabled

   channel-protocol lacp
   
   interface port-channel 2
   shutdown
   mtu 9216
   port-channel max-ports 2
   no shutdown
   !

   interface ethernet 0/26
   no shutdown
   channel-group 2 mode passive
   !

   interface ethernet 0/28
   no shutdown
   channel-group 2 mode passive
   !
   
   end
   end

   show etherchannel summary
   show interfaces port-channel 2   
   show interfaces etherchannel
   write startup-config
I don't know what was wrong in my first attempts; it's frustrating...
 

nephri
I created the USB key two days before 4.4 became available.
I did the install with it because I was too lazy to download the new version again.

But I set the apt sources list to use the "no subscription" repository and upgraded PVE to 4.4 from the GUI.
I did that before creating my first VM. At that point I installed a fresh CentOS 7 (the 1611 release) for testing purposes.
 

nephri
An off-topic question.

To expose storage from FreeNAS to my PVE host, which do you prefer:
- an NFS mount
- an iSCSI mount

It's simpler to do NFS, so I started with that, but I'm not sure it's the best choice.
 

nephri
FreeNAS 9.10 now uses the ctld iSCSI provider instead of istgt.
I wasn't able to configure ZFS over iSCSI correctly in Proxmox (PVE is able to create the zvol, but the VM creation fails because PVE tries something using istgt).
So far, I haven't found support for ctld in Proxmox's "ZFS over iSCSI".
 

_alex

Active Member
Jan 28, 2016
866
97
28
Bavaria / Germany
Oh, OK... so this will not work.

I did some work on a ZFS over iSCSI plugin that works with SCST instead of IET; it's not really hard to do and could be done for other iSCSI targets, too. But for some reason I wasn't able to live-migrate a VM away from this storage (everything else worked, including live migration to the volume), which was maybe a bad drive and/or something strange in KVM drive-mirror. I didn't care too much, as it was more a proof of concept and not really needed, but sooner or later I will give it another try.

Since I did this, a new (undocumented) way to add custom storage plugins/implementations to Proxmox has appeared that doesn't break with updates:

[pve-devel] [PATCH] Add support for custom storage plugins

With a bit of Perl, knowledge of the target itself (or a FreeNAS API; no idea if there is one, as I don't use it), the zfs-over-iscsi implementation from Proxmox itself, and this template ( GitHub - mityarzn/pve-storage-custom-mpnetapp ), it should be possible to get zfs-over-iscsi running with FreeNAS.
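On the "freenas-api" point: FreeNAS 9.x does ship a REST API, though the endpoint path and field names below are assumptions from memory, not checked against the docs. As a sketch of what the LUN-command side of such a plugin would do, here is the shape of a request that registers an iSCSI extent for a zvol (only the payload construction is exercised; the HTTP call itself is left commented out):

```python
import json

# Assumptions: host name is a placeholder; the API path and the
# iscsi_target_extent_* field names are hypothetical and must be
# verified against the FreeNAS API documentation before use.
FREENAS_HOST = "freenas.local"
API_EXTENT = "/api/v1.0/services/iscsi/extent/"

def extent_payload(name, zvol):
    """Build the JSON body for creating an iSCSI extent backed by a zvol."""
    return {
        "iscsi_target_extent_name": name,
        "iscsi_target_extent_type": "Disk",
        "iscsi_target_extent_disk": "zvol/" + zvol,
    }

body = json.dumps(extent_payload("vm-100-disk-1", "tank/vm-100-disk-1"))
print(body)

# To actually create it (credentials and TLS handling omitted):
# req = urllib.request.Request(
#     "https://" + FREENAS_HOST + API_EXTENT,
#     data=body.encode(), method="POST",
#     headers={"Content-Type": "application/json"})
```

A Proxmox storage plugin would wrap calls like this in the LUN-management methods that the ZFS-over-iSCSI base plugin expects, alongside the zvol creation it already handles.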
 

_alex
Cool. If you go for it, I can help you a bit by pointing you to the right files, and maybe give you a skeleton where only the target-specific things need to be implemented/changed for FreeNAS.

I guess there are a few people who would be happy to use zfs-over-iscsi with it ;)
 

_alex
Hi, this afternoon I got my SCST plugin refactored to use the custom storage plugin structure, and it's running for basic ops.
I changed it so that it inherits the ZFSPlugin package from Proxmox, which left very little code to implement for the ZFS part. There is still one ugly workaround in the configuration.

I will do some cleanup and have another look at the whole thing tomorrow, then just copy it, rename the files and packages for FreeNAS, and give you the whole thing.

So, only the LUN commands for FreeNAS need to be implemented/changed.

I can also set up a FreeNAS in a KVM and help you with debugging and/or testing.