Dell 8024F, Multicast for Proxmox


DavidRa

Infrastructure Architect
Aug 3, 2015
Central Coast of NSW
www.pdconsec.net
Now that Proxmox 5 is out and live, I figured I'd build a test cluster in my lab environment to see whether it's any easier/better than my previous attempts. I've run into a snag, though.

According to the Proxmox documentation, the preferred environment for clustering uses multicast for cluster communications (which I've never needed for other environments, but hey, whatever floats the boat). I've never configured multicast before, and it's not working (please be gentle).

The switches are Dell 8024Fs - older 10GbE, and they supposedly do support multicast (see section 25 of this User Guide). I seem to have a configuration comparable to what I see suggested for Cisco switches (IGMP snooping, IGMP querier), plus the Dell-recommended MLD configuration:

Code:
ip routing
interface vlan 10
  ip address 10.29.6.254 255.255.255.0
  ip mtu 9198
exit
interface vlan 11
  ip address 10.14.6.254 255.255.255.0
  ip igmp
  ip igmp version 2
exit
...
ip igmp snooping querier
ipv6 mld snooping querier
ip igmp snooping querier address 10.14.6.254
ipv6 mld snooping querier address FE80::A6BA:DBFF:FE6D:9A15
ip igmp snooping querier vlan 11
ipv6 mld snooping querier vlan 11
ipv6 mld snooping querier vlan 11 address FE80::A6BA:DBFF:FE6D:9A15
ip igmp snooping querier election participate 11
ipv6 mld snooping querier election participate 11
ip multicast
ip igmp
...
interface Te1/0/9
  spanning-tree portfast
  mtu 9216
  switchport access vlan 11
exit
!
interface Te1/0/10
  spanning-tree portfast
  mtu 9216
  switchport access vlan 11
exit
!
interface Te1/0/11
  spanning-tree portfast
  mtu 9216
  switchport access vlan 11
exit
!
interface Te1/0/12
  spanning-tree portfast
  mtu 9216
  switchport access vlan 11
exit
!
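For anyone wanting to verify their own setup, the usual FASTPATH show commands should confirm whether the querier is actually active. These are from memory, so treat them as a sketch - the exact syntax may vary by firmware revision:

Code:
! Verify snooping and querier state on the switch (standard FASTPATH
! show commands from memory; syntax may vary by firmware revision)
show ip igmp snooping
show ip igmp snooping querier vlan 11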
omping, which should test multicast connectivity as far as I can tell, is wildly unsuccessful:

Code:
root@c6100b-01:~# omping -c 10000 -i 0.001 -F -q 10.14.6.195 10.14.6.196 10.14.6.197 10.14.6.198
10.14.6.196 : waiting for response msg
10.14.6.197 : waiting for response msg
10.14.6.198 : waiting for response msg
10.14.6.196 : waiting for response msg
10.14.6.197 : waiting for response msg
10.14.6.198 : waiting for response msg
10.14.6.196 : waiting for response msg
10.14.6.197 : waiting for response msg
10.14.6.198 : waiting for response msg
10.14.6.196 : waiting for response msg
10.14.6.197 : waiting for response msg
10.14.6.198 : waiting for response msg
^C
10.14.6.196 : response message never received
10.14.6.197 : response message never received
10.14.6.198 : response message never received
Note, however, that plain ping works fine, so it's not a basic IP problem:

Code:
root@c6100b-01:~# ping 10.14.6.196
PING 10.14.6.196 (10.14.6.196) 56(84) bytes of data.
64 bytes from 10.14.6.196: icmp_seq=1 ttl=64 time=0.139 ms
^C
--- 10.14.6.196 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms
root@c6100b-01:~# ping 10.14.6.197
PING 10.14.6.197 (10.14.6.197) 56(84) bytes of data.
64 bytes from 10.14.6.197: icmp_seq=1 ttl=64 time=0.167 ms
^C
--- 10.14.6.197 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms
root@c6100b-01:~# ping 10.14.6.198
PING 10.14.6.198 (10.14.6.198) 56(84) bytes of data.
64 bytes from 10.14.6.198: icmp_seq=1 ttl=64 time=0.167 ms
^C
--- 10.14.6.198 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms
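While omping was running I also checked the kernel's view of group membership on each host. Note that eth0 below is a stand-in for whatever NIC sits on VLAN 11, and 232.43.211.234 is omping's default group, if I read the man page right:

Code:
# List multicast groups joined on the cluster NIC (eth0 is a placeholder)
ip maddr show dev eth0
# The kernel's IGMP membership table - another view of the same thing
cat /proc/net/igmp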
Does anyone have any suggestions on what I am missing, remembering my infinite n00b level with both Proxmox and IP multicast?
 

Evan

Well-Known Member
Jan 6, 2016
Multicast, yuck!
Some cluster products like PowerHA went that way, and it was hated by so many people that the next release went back to supporting unicast heartbeats.
 

DavidRa

Infrastructure Architect
Aug 3, 2015
Central Coast of NSW
www.pdconsec.net
Well, it can do unicast, but the setup is convoluted, and I just get the feeling it'll be far more fragile. I don't like fragile, which is why I would have liked to go fully converged Hyper-V (but presently the storage is on a single host and only the hypervisor is clustered).
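From what I can tell from the docs, unicast boils down to adding transport: udpu to the totem section of /etc/pve/corosync.conf and bumping config_version. Roughly like this - an untested sketch, with placeholder names from my lab:

Code:
# /etc/pve/corosync.conf, totem section only (untested sketch)
totem {
  version: 2
  cluster_name: lab          # placeholder cluster name
  config_version: 2          # must be incremented on every edit
  transport: udpu            # unicast UDP instead of multicast
  interface {
    ringnumber: 0
    bindnetaddr: 10.14.6.0   # the cluster network from this thread
  }
}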
 

Evan

Well-Known Member
Jan 6, 2016
I had some notes on Cisco multicast config, but I can't see that they're much relevant here...

Would you prefer to run Hyper-V with S2D? It's a pity there's no native solution that I know of for converged storage under Windows without enterprise licensing or third-party storage appliances.

Anyway, a bit off topic. Sorry, no idea on the Proxmox multicast clustering config; I just think it's rubbish to use multicast, based on my experience with it in other cluster products so far.
 

Evan

Well-Known Member
Jan 6, 2016
Technet Evaluation Center and PowerShell DSC. Yeah, six months is annoying, but not a deal killer.
That, or Action Pack, or whatever other not-for-commercial-use agreement people can get.
 

Evan

Well-Known Member
Jan 6, 2016
Action Pack only gets you Standard, not Datacenter, on server licensing.
Indeed, you're right: 16 copies of... my memory is failing me, or it's changed, but either way that's what it is now.
 

Evan

Well-Known Member
Jan 6, 2016
16 cores, to be clear. That's _technically_ only a single Standard host, or two VMs.
Oh, that's how it's changed. Wow, that's not as attractive as it used to be. The software list was not very specific on that...
Still, for $500 a year the O365 E3 seats are worth it.
 

hminz

New Member
Oct 9, 2013
I am facing similar issues on a Ubiquiti EdgeSwitch with a similar setup. Has anyone managed to get a Proxmox cluster fully working on Ubiquiti switches?
 

Yavuz

New Member
Jul 27, 2017
I have this running on those switches. Are you actually running omping on both/all of the machines at the same time? Did you test whether the cluster could be created?
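omping only reports success when it gets replies from the other instances, so the same command has to be running on every node at roughly the same time:

Code:
# Run this identical command on all four nodes simultaneously;
# each omping instance answers the probes from the others.
omping -c 10000 -i 0.001 -F -q 10.14.6.195 10.14.6.196 10.14.6.197 10.14.6.198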
 

DavidRa

Infrastructure Architect
Aug 3, 2015
Central Coast of NSW
www.pdconsec.net
I have a running cluster on the 8024Fs; I'm not sure whether @Yavuz meant it was running on the Dell or the Ubnt switches.

And I seem to have a working setup now too, for example:

Code:
root@10.14.6.195:~# omping -c 10000 -i 0.001 -F -q 10.14.6.195 10.14.6.196 10.14.6.197 10.14.6.198
...
10.14.6.196 :   unicast, xmt/rcv/%loss = 8241/8241/0%, min/avg/max/std-dev = 0.040/0.150/0.473/0.051
10.14.6.196 : multicast, xmt/rcv/%loss = 8241/8241/0%, min/avg/max/std-dev = 0.049/0.170/0.482/0.054
10.14.6.197 :   unicast, xmt/rcv/%loss = 7728/7728/0%, min/avg/max/std-dev = 0.043/0.151/0.465/0.045
10.14.6.197 : multicast, xmt/rcv/%loss = 7728/7728/0%, min/avg/max/std-dev = 0.047/0.165/0.527/0.047
10.14.6.198 :   unicast, xmt/rcv/%loss = 8934/8934/0%, min/avg/max/std-dev = 0.037/0.176/0.628/0.060
10.14.6.198 : multicast, xmt/rcv/%loss = 8934/8934/0%, min/avg/max/std-dev = 0.045/0.190/0.596/0.056
I get similar results on the other three hosts.

Unfortunately, I no longer know exactly which combination of settings finally kicked it into compliance.
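For anyone following along, the obvious sanity check beyond omping is the standard Proxmox cluster tooling:

Code:
# pvecm ships with Proxmox VE; these show quorum and membership
pvecm status
pvecm nodes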