Dell 8024F, Multicast for Proxmox

Discussion in 'Networking' started by DavidRa, Jul 6, 2017.

  1. DavidRa

    DavidRa Infrastructure Architect

    Joined:
    Aug 3, 2015
    Messages:
    253
    Likes Received:
    110
    Now that Proxmox 5 is out and live, I figured I'd build a test cluster in my lab environment to see if it's any easier/better than my previous attempts. I've run into a snag, though.

    According to the Proxmox documentation, the preferred environment for clustering uses multicast for the cluster communications (which I've never needed in other environments, but hey, whatever floats your boat). I've never configured it before, and it's not working (please be gentle).
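
    For context, Proxmox 5's clustering runs on corosync, whose default transport multicasts the totem traffic; the totem section that pvecm generates in /etc/pve/corosync.conf looks roughly like this (values illustrative, not copied from my cluster):

    Code:
    totem {
      version: 2
      cluster_name: lab-cluster
      interface {
        ringnumber: 0
        # cluster network, plus the multicast group corosync joins
        bindnetaddr: 10.14.6.0
        mcastaddr: 239.192.1.1
        mcastport: 5405
      }
    }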

    The switches are Dell 8024F - older 10GbE kit, and they supposedly do support multicast (see section 25 of this User Guide). I seem to have a configuration comparable to what I see recommended for Cisco switches (IGMP snooping, IGMP querier) plus the Dell-recommended MLD configuration:

    Code:
    ip routing
    ! Routed SVI for VLAN 10 (jumbo IP MTU)
    interface vlan 10
      ip address 10.29.6.254 255.255.255.0
      ip mtu 9198
    exit
    ! Routed SVI for VLAN 11 (the cluster network), with IGMPv2 enabled
    interface vlan 11
      ip address 10.14.6.254 255.255.255.0
      ip igmp
      ip igmp version 2
    exit
    ...
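    ! Global IGMP/MLD snooping querier configuration for the cluster VLAN (11)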
    ip igmp snooping querier
    ipv6 mld snooping querier
    ip igmp snooping querier address 10.14.6.254
    ipv6 mld snooping querier address FE80::A6BA:DBFF:FE6D:9A15
    ip igmp snooping querier vlan 11
    ipv6 mld snooping querier vlan 11
    ipv6 mld snooping querier vlan 11 address FE80::A6BA:DBFF:FE6D:9A15
    ip igmp snooping querier election participate 11
    ipv6 mld snooping querier election participate 11
    ip multicast
    ip igmp
    ...
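    ! Node-facing server ports (access VLAN 11)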
    interface Te1/0/9
      spanning-tree portfast
      mtu 9216
      switchport access vlan 11
    exit
    !
    interface Te1/0/10
      spanning-tree portfast
      mtu 9216
      switchport access vlan 11
    exit
    !
    interface Te1/0/11
      spanning-tree portfast
      mtu 9216
      switchport access vlan 11
    exit
    !
    interface Te1/0/12
      spanning-tree portfast
      mtu 9216
      switchport access vlan 11
    exit
    !
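    For reference, I believe the querier state can be checked from the switch side with something like this (command names per the 8024F User Guide; exact syntax may vary by firmware):

    Code:
    show ip igmp snooping
    show ip igmp snooping querier vlan 11
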
    omping (which, as far as I can tell, should test multicast connectivity) is wildly unsuccessful:

    Code:
    root@c6100b-01:~# omping -c 10000 -i 0.001 -F -q 10.14.6.195 10.14.6.196 10.14.6.197 10.14.6.198
    10.14.6.196 : waiting for response msg
    10.14.6.197 : waiting for response msg
    10.14.6.198 : waiting for response msg
    10.14.6.196 : waiting for response msg
    10.14.6.197 : waiting for response msg
    10.14.6.198 : waiting for response msg
    10.14.6.196 : waiting for response msg
    10.14.6.197 : waiting for response msg
    10.14.6.198 : waiting for response msg
    10.14.6.196 : waiting for response msg
    10.14.6.197 : waiting for response msg
    10.14.6.198 : waiting for response msg
    ^C
    10.14.6.196 : response message never received
    10.14.6.197 : response message never received
    10.14.6.198 : response message never received
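
    (If anyone wants to see what's actually on the wire while this fails: tcpdump on the cluster NIC should show whether IGMP queries/reports and the omping group traffic turn up at all. omping's default group is 232.43.211.234, if I'm reading its man page right, and eth0 here stands in for whatever your cluster NIC is called.)

    Code:
    root@c6100b-01:~# tcpdump -ni eth0 igmp
    root@c6100b-01:~# tcpdump -ni eth0 host 232.43.211.234
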
    Note that ping works fine, so it's not a basic IP connectivity problem:

    Code:
    root@c6100b-01:~# ping 10.14.6.196
    PING 10.14.6.196 (10.14.6.196) 56(84) bytes of data.
    64 bytes from 10.14.6.196: icmp_seq=1 ttl=64 time=0.139 ms
    ^C
    --- 10.14.6.196 ping statistics ---
    1 packets transmitted, 1 received, 0% packet loss, time 0ms
    rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms
    root@c6100b-01:~# ping 10.14.6.197
    PING 10.14.6.197 (10.14.6.197) 56(84) bytes of data.
    64 bytes from 10.14.6.197: icmp_seq=1 ttl=64 time=0.167 ms
    ^C
    --- 10.14.6.197 ping statistics ---
    1 packets transmitted, 1 received, 0% packet loss, time 0ms
    rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms
    root@c6100b-01:~# ping 10.14.6.198
    PING 10.14.6.198 (10.14.6.198) 56(84) bytes of data.
    64 bytes from 10.14.6.198: icmp_seq=1 ttl=64 time=0.167 ms
    ^C
    --- 10.14.6.198 ping statistics ---
    1 packets transmitted, 1 received, 0% packet loss, time 0ms
    rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms
    Does anyone have any suggestions on what I'm missing, bearing in mind my infinite n00b level with both Proxmox and IP multicast?
     
    #1
    Last edited: Jul 6, 2017
  2. Evan

    Evan Well-Known Member

    Joined:
    Jan 6, 2016
    Messages:
    2,855
    Likes Received:
    427
    Multicast, yuck!
    Some clusters, like PowerHA, went that way and were hated by so many people that the next release went back to supporting unicast heartbeats.
     
    #2
  3. DavidRa

    DavidRa Infrastructure Architect

    Joined:
    Aug 3, 2015
    Messages:
    253
    Likes Received:
    110
    Well, it can do unicast, but the setup is convoluted, and I just get the feeling it'll be far more fragile. I don't like fragile, which is why I would have liked to go fully converged Hyper-V (but presently the storage is on a single host and the hypervisor is clustered).
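
    For reference, the unicast option is corosync's udpu transport; as far as I understand it, you swap the totem transport and then have to list every node explicitly, roughly like this (a sketch from the docs, not a tested config):

    Code:
    totem {
      version: 2
      # unicast UDP instead of the default multicast transport
      transport: udpu
      interface {
        ringnumber: 0
        bindnetaddr: 10.14.6.0
      }
    }
    nodelist {
      node {
        ring0_addr: 10.14.6.195
      }
      node {
        ring0_addr: 10.14.6.196
      }
      # ...and so on, one entry per node
    }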
     
    #3
  4. Evan

    Evan Well-Known Member

    Joined:
    Jan 6, 2016
    Messages:
    2,855
    Likes Received:
    427
    I had some notes on Cisco multicast config, but I can't see that being much help here...

    Would you prefer to run Hyper-V with S2D? Pity there is no native solution that I know of for converged storage under Windows without an enterprise license or third-party storage appliances.

    Anyway, a bit off topic; sorry, I have no idea on Proxmox multicast clustering config. I just think using multicast is rubbish, based on my experience with it in other cluster products so far.
     
    #4
  5. DavidRa

    DavidRa Infrastructure Architect

    Joined:
    Aug 3, 2015
    Messages:
    253
    Likes Received:
    110
    TechNet Evaluation Center and PowerShell DSC. Yeah, six months is annoying, but not a deal killer.
     
    #5
  6. Evan

    Evan Well-Known Member

    Joined:
    Jan 6, 2016
    Messages:
    2,855
    Likes Received:
    427
    That, or Action Pack, or whatever other not-for-commercial-use agreement people can get.
     
    #6
  7. markarr

    markarr Active Member

    Joined:
    Oct 31, 2013
    Messages:
    391
    Likes Received:
    101
    Action Pack only gets you Standard, not Datacenter, for server licensing.
     
    #7
  8. Evan

    Evan Well-Known Member

    Joined:
    Jan 6, 2016
    Messages:
    2,855
    Likes Received:
    427
    Indeed you're right, 16 copies of... my memory is failing me, or it's changed, but either way that's what it is now.
     
    #8
  9. DavidRa

    DavidRa Infrastructure Architect

    Joined:
    Aug 3, 2015
    Messages:
    253
    Likes Received:
    110
    16 cores, to be clear. That's _technically_ only a single Standard host or two VMs.
     
    #9
  10. Evan

    Evan Well-Known Member

    Joined:
    Jan 6, 2016
    Messages:
    2,855
    Likes Received:
    427
    Oh, so that's how it's changed; wow, that's not as attractive as it used to be. The software list was not very specific on that...
    Still, for $500 a year the O365 E3 seats are worth it.
     
    #10
  11. hminz

    hminz New Member

    Joined:
    Oct 9, 2013
    Messages:
    6
    Likes Received:
    0
    I am facing similar issues on a Ubiquiti EdgeSwitch with a similar setup. Has anyone managed to get a Proxmox cluster fully working on Ubiquiti switches?
     
    #11
    Last edited: Jul 25, 2017
  12. Yavuz

    Yavuz New Member

    Joined:
    Jul 27, 2017
    Messages:
    1
    Likes Received:
    0
    I have this running on those switches. Are you actually running omping on all of the machines at once? Did you test whether the cluster could be created?
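
    (omping only answers once it's running on every host in the list, so the same command needs to be started on all of the nodes at more or less the same time:)

    Code:
    # start this on all four nodes at (roughly) the same time
    omping -c 10000 -i 0.001 -F -q 10.14.6.195 10.14.6.196 10.14.6.197 10.14.6.198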
     
    #12
  13. DavidRa

    DavidRa Infrastructure Architect

    Joined:
    Aug 3, 2015
    Messages:
    253
    Likes Received:
    110
    I now have a running cluster on the 8024Fs; I'm not sure whether @Yavuz meant it was running on the Dell or the Ubnt switches?

    And I seem to have a working setup too, for example:

    Code:
    root@10.14.6.195:~# omping -c 10000 -i 0.001 -F -q 10.14.6.195 10.14.6.196 10.14.6.197 10.14.6.198
    ...
    10.14.6.196 :   unicast, xmt/rcv/%loss = 8241/8241/0%, min/avg/max/std-dev = 0.040/0.150/0.473/0.051
    10.14.6.196 : multicast, xmt/rcv/%loss = 8241/8241/0%, min/avg/max/std-dev = 0.049/0.170/0.482/0.054
    10.14.6.197 :   unicast, xmt/rcv/%loss = 7728/7728/0%, min/avg/max/std-dev = 0.043/0.151/0.465/0.045
    10.14.6.197 : multicast, xmt/rcv/%loss = 7728/7728/0%, min/avg/max/std-dev = 0.047/0.165/0.527/0.047
    10.14.6.198 :   unicast, xmt/rcv/%loss = 8934/8934/0%, min/avg/max/std-dev = 0.037/0.176/0.628/0.060
    10.14.6.198 : multicast, xmt/rcv/%loss = 8934/8934/0%, min/avg/max/std-dev = 0.045/0.190/0.596/0.056
    I get similar results on the other three hosts.

    Unfortunately, I no longer know exactly which setting finally kicked it into compliance.
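
    (For completeness, cluster membership and quorum can be double-checked from any node with pvecm:)

    Code:
    root@c6100b-01:~# pvecm status
    root@c6100b-01:~# pvecm nodes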
     
    #13