
Brocade ICX Series (cheap & powerful 10gbE/40gbE switching)


pr09

New Member
Jun 21, 2020
10
2
3
This was always an issue for me on the ICX6xxx series (v8030); I never tried the 7 series. I would at least update to the recommended stable from Ruckus (8095g, which is what the guide in this thread uses). If that still doesn't fix it, you can try the absolute latest (09.0.10c).
8095g seems to fix this bug and I haven't found other breakage yet.

(I tried 09 series a while back, ran into weirdness with multicast & host-to-host neighbor discovery, and it didn't *seem* to be fixed by going back to 8095, but 8090 did. But now 8095 is working. I don't know anymore...)
 

fohdeesha

Kaini Industries
Nov 20, 2016
2,919
3,444
113
34
fohdeesha.com
8095g seems to fix this bug and I haven't found other breakage yet.

(I tried 09 series a while back, ran into weirdness with multicast & host-to-host neighbor discovery, and it didn't *seem* to be fixed by going back to 8095, but 8090 did. But now 8095 is working. I don't know anymore...)
Like any new train, 8095 was rocky for a while, but they finally hit "recommended" with the F or G rev if I remember right. It's what I've been running in production in a few different DCs, and I haven't found any showstoppers yet.
 

Vesalius

Active Member
Nov 25, 2019
262
204
43
8095g seems to fix this bug and I haven't found other breakage yet.

(I tried 09 series a while back, ran into weirdness with multicast & host-to-host neighbor discovery, and it didn't *seem* to be fixed by going back to 8095, but 8090 did. But now 8095 is working. I don't know anymore...)
Multicast discovery is weird so far, at least with my combination of HomeKit and Proxmox VMs.
 

LodeRunner

Active Member
Apr 27, 2019
554
236
43
Does anyone have any ideas about this question?
The FAQ document would seem to indicate that all ports are PoE+, so as long as you don't cross the wattage limit (390 W for the 24-port, 780 W for the 48-port) it should be fine. Images of the switch don't appear to show any special markings on the ports; for comparison, my 7450 has the first 8 ports specially marked to indicate they do PoH/PoE++ as opposed to the other 40, which are PoE+ only.
 

seatrope

Member
Oct 5, 2018
35
12
8
Maine
www.ychng.com
I was trying to get my Sonos devices working with the controller and the Sonos devices in different VLANs. This is on an ICX6610 with inter-VLAN routing done on the switch and pfSense acting as firewall only. There are a bunch of guides floating around out there, but none specific to the Brocade.

At first I tried using an RPi with interfaces in both VLANs, running udp-broadcast-relay:

This worked, but I wanted a native solution on the switch. After watching some Terry Henry videos, the solution was so ridiculously simple that I was kicking myself. I'm sure most here know this, but sharing for other noobs like myself:

1. At the config level: router pim
2. For each VE that needs multicast bridged, enter the interface and: ip pim

Sonos now works perfectly across VLANs. Apply ACLs as needed for security.

Terry Henry video: https://www.google.com/url?sa=t&rct...=PNsIbdXqHlI&usg=AOvVaw3W1AYaZ1JbclyuZ0jqjiOV
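Spelled out as a FastIron CLI session, those two steps look roughly like this; ve 10 and ve 20 are placeholder VE numbers, so substitute whichever VEs carry your controller and Sonos VLANs:

```
conf t
 router pim              ! enable PIM (dense mode) routing globally
 interface ve 10         ! VE for the controller VLAN (example number)
  ip pim                 ! run PIM-DM on this interface
 interface ve 20         ! VE for the Sonos VLAN (example number)
  ip pim
 exit
write memory
```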
 

Vesalius

Active Member
Nov 25, 2019
262
204
43
@seatrope glad that relatively simple fix worked for you. Let us know if you run into any issues going forward. Multicast at home on the Ruckus hasn't been the easiest thing to figure out natively. Looks like PIM might work well for those at layer 3 with an established inter-VLAN routing setup.

I might give the new Multicast VLAN Registration (MVR) module a try on the 09.0 series firmware.
 

seatrope

Member
Oct 5, 2018
35
12
8
Maine
www.ychng.com
Additional question for the gurus here.

I'm planning to do HA failover for the Raspberry Pis that run PiHole/DHCP using keepalived, which uses VRRP to fail over between a VIP and the two real IPs of each RPi pair.

Is the VRRP implementation as detailed here:
GitHub - matayto/pihole-keepalived: Simple failover configurations for a multi-pihole infrastructure

independent of the VRRP stuff I see in the Brocade manual for our switches? I'm guessing that VRRP is more for router failover and has nothing to do with failover of host devices?

It seems VRRP is handled via multicast using IP protocol 112:
Using Keepalived for managing simple failover in clusters | Enable Sysadmin (redhat.com)

So I assume from this, all I need to ensure is that the VRRP packets can get via multicast from one RPi to its HA partner, which shouldn't be an issue given they will both be in the same subnet (or PIM-Dense will take care of it if I perversely decide to put the pair in different subnets).

Does that sound right?
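As a point of reference, a minimal keepalived instance of the kind the linked repo sets up looks roughly like this; the interface name, virtual_router_id, priority, and VIP below are made-up example values:

```
# /etc/keepalived/keepalived.conf on the primary PiHole (example values)
vrrp_instance pihole_vip {
    state MASTER             # the other RPi uses state BACKUP
    interface eth0           # LAN-facing interface
    virtual_router_id 50     # must match on both peers
    priority 150             # backup peer uses something lower, e.g. 100
    advert_int 1             # advertisement interval in seconds
    virtual_ipaddress {
        192.168.1.5/24       # the VIP that clients point at
    }
}
```

One thing worth noting: VRRP advertisements go to 224.0.0.18, which sits in the 224.0.0.0/24 link-local control block that routers never forward, so the two peers really do need to share a subnet; PIM wouldn't carry those advertisements across VLANs.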
 

kevindd992002

Member
Oct 4, 2021
136
9
18
The FAQ document would seem to indicate that all ports are PoE+, so as long as you don't cross the wattage limit (390 W for the 24-port, 780 W for the 48-port) it should be fine. Images of the switch don't appear to show any special markings on the ports; for comparison, my 7450 has the first 8 ports specially marked to indicate they do PoH/PoE++ as opposed to the other 40, which are PoE+ only.
That's what I thought. It's just that the FAQ says 24 ports of PoE+ without the external PSU; I guess that just means 24 ports of full 30 W PoE+.
 

kpfleming

Active Member
Dec 28, 2021
449
230
43
Pelham NY USA
HA failover with the raspberry pi's that run piHole/DHCP using keepalived
Please note that DHCP is not a normal application protocol that can be handled by VRRP or similar techniques.

Clients locate the DHCP server using broadcast messages, which don't even carry normal IP addresses (they can't, since the client doesn't have an address at that point). Once the DHCP transaction has completed, the client will use unicast messages for renewal/release (unless that fails, in which case it falls back to broadcast). If the IP address of the DHCP server is handled via VRRP, the secondary/failover DHCP server will need all of the lease state from the primary in order to respond to those messages properly. It also needs that state to ensure it doesn't hand out duplicate addresses.

HA DHCP is really quite off-topic for this thread (although not this forum!), but it certainly can be done. I've got ISC Kea DHCP running on two boxes on my LAN, using ICX 7150s for traffic, and it works really well. The ICXs are configured with two helper addresses, forwarding DHCP broadcast traffic to both Kea boxes in parallel, and Kea handles the HA aspects itself.
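The helper-address side of that setup looks roughly like this in FastIron config; the VE number and Kea box addresses are placeholders, and it's worth double-checking the syntax against your release's command reference (FastIron wants an index number before each helper IP):

```
! per client-facing VE on the ICX (example values)
interface ve 10
 ip address 192.168.10.1 255.255.255.0
 ip helper-address 1 192.168.50.11    ! first Kea server
 ip helper-address 2 192.168.50.12    ! second Kea server
```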
 

LodeRunner

Active Member
Apr 27, 2019
554
236
43
Additional question for the gurus here.

I'm planning to do HA failover for the Raspberry Pis that run PiHole/DHCP using keepalived, which uses VRRP to fail over between a VIP and the two real IPs of each RPi pair.

Is the VRRP implementation as detailed here:
GitHub - matayto/pihole-keepalived: Simple failover configurations for a multi-pihole infrastructure

independent of the VRRP stuff I see in the Brocade manual for our switches? I'm guessing that VRRP is more for router failover and has nothing to do with failover of host devices?

It seems VRRP is handled via multicast using IP protocol 112:
Using Keepalived for managing simple failover in clusters | Enable Sysadmin (redhat.com)

So I assume from this, all I need to ensure is that the VRRP packets can get via multicast from one RPi to its HA partner, which shouldn't be an issue given they will both be in the same subnet (or PIM-Dense will take care of it if I perversely decide to put the pair in different subnets).

Does that sound right?
This sounds like complexity for the sake of complexity. DNS is already redundant without VRRP, and ISC DHCP has its own failover support (A Basic Guide to Configuring DHCP Failover). Just set up two PiHoles for DNS only, then separately configure ISC DHCP. Use something like Gravity Sync to keep the PiHole DNS configuration synced between the two.

Then you have two unused RPis to do other things with. Or if you have a virtualization platform or Docker, skip the hardware entirely and you have four RPis for other projects.
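For anyone weighing that option, ISC dhcpd's built-in failover from the linked guide looks roughly like this; the peer name, addresses, and ranges are example values:

```
# /etc/dhcp/dhcpd.conf on the primary (example values)
failover peer "dhcp-ha" {
    primary;                      # the other server declares "secondary;"
    address 192.168.1.2;          # this server
    peer address 192.168.1.3;     # the other server
    port 647;
    peer port 647;
    max-response-delay 60;
    max-unacked-updates 10;
    mclt 3600;                    # max client lead time (primary only)
    split 128;                    # primary serves half the hash space
    load balance max seconds 3;
}

subnet 192.168.1.0 netmask 255.255.255.0 {
    pool {
        failover peer "dhcp-ha";  # this pool participates in failover
        range 192.168.1.100 192.168.1.199;
    }
}
```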
 

seatrope

Member
Oct 5, 2018
35
12
8
Maine
www.ychng.com
This sounds like complexity for the sake of complexity. DNS is already redundant without VRRP, and ISC DHCP has its own failover support (A Basic Guide to Configuring DHCP Failover). Just set up two PiHoles for DNS only, then separately configure ISC DHCP. Use something like Gravity Sync to keep the PiHole DNS configuration synced between the two.

Then you have two unused RPis to do other things with. Or if you have a virtualization platform or Docker, skip the hardware entirely and you have four RPis for other projects.
@LodeRunner I have PiHole and DHCP (dnsmasq) on the same Pi. So only 2 Pis, not 4, lol.

I did look into using ISC DHCP. The downside is that PiHole does not resolve hostnames automatically if you use ISC DHCP (or so I read); if you use the built-in dnsmasq for DHCP, it will. I know, first world problems...
 

LodeRunner

Active Member
Apr 27, 2019
554
236
43
@LodeRunner I have PiHole and DHCP (dnsmasq) on the same Pi. So only 2 Pis, not 4, lol.

I did look into using ISC DHCP. The downside is that PiHole does not resolve hostnames automatically if you use ISC DHCP (or so I read); if you use the built-in dnsmasq for DHCP, it will. I know, first world problems...
Sorry, the way your first post was phrased, I misread it as you having more than one pair.

As far as DNS client failover time goes, it's short - maybe 2 seconds for Windows? I practically never notice when rebooting my primary DNS (an Active Directory server that forwards to a pair of PiHole VMs).

Getting dynamic DNS updates to work from ISC DHCP into PiHole does appear to be an issue, yes. I've seen some people solve it with conditional forwarding rules in PiHole, plus a BIND server holding the local zone; PiHole forwards local requests to the BIND server.

The BIND server and DDNS in ISC DHCP obviously add extra complexity, but I think less so than configuring keepalived or another HA software solution.
 

jasonwc

Member
Dec 31, 2018
49
18
8
A user on Reddit posted an interesting problem with his ICX6610-24 (non PoE). He said his switch idles at 180-200W and when under load, can hit 400W. I told him this makes no sense given that the spec sheet indicates it only requires a single 250W power supply. The specs say the second power supply is optional, for redundancy. Also, this thread indicates it should idle at 80W or so.

I asked him his OS version and power supply revision. He is on 08.0.10c. His output from “show chassis” is really strange:

Power supply 1 (AC - Regular) present, status ok
Model Number: 00-0000000-00
Serial Number: FFFF
Firmware Ver:
Power supply 1 Fan Air Flow Direction: Front to Back
Power supply 2 (AC - Regular) present, status ok
Model Number: 00-0000000-00
Serial Number: FFFF
Firmware Ver:
Power supply 2 Fan Air Flow Direction: Front to Back

Fan 1 ok, speed (auto): [[1]]<->2
Fan 2 ok, speed (auto): [[1]]<->2

So, all zeroes model number, FFFF serial number, and blank firmware version. He mentioned he has three of these power supplies, marked “S5/S2.”

I then suggested that he start fresh using the guide to upgrade his bootloader and firmware to the latest versions.

He indicated the switch shut down after upgrading the boot loader and now appears bricked.

Based on this information, does the switch appear to be genuine? Why would the power supply show a blank model number with no revision?
 

kpfleming

Active Member
Dec 28, 2021
449
230
43
Pelham NY USA
When a device roams from one AP to the other, the switch updates its MAC table (show mac-address vlan 10) with the new port, but the ND table still binds the IP/mac combo to the old port (show ipv6 neighbor ve 10). This makes the IP address unreachable.
For what it's worth, I just experienced a very similar symptom on 09.0.10c firmware. I shut down a machine, moved it to a different location temporarily, and connected it via another switch that is connected to the ICX stack. When the machine came up, it was reachable via IPv4 but not IPv6. After I moved it back to its original port on the ICX, it became reachable over both protocols.

I didn't leave it in that condition very long, so I don't know how long it would have taken for the NDP cache to time out.
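For anyone debugging the same thing, FastIron has commands to inspect and flush the ND cache rather than waiting for the timeout (verify the exact syntax against your release's command reference):

```
show ipv6 neighbor          ! inspect the ND cache for the stale entry
clear ipv6 neighbor         ! flush the ND cache
```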
 

fohdeesha

Kaini Industries
Nov 20, 2016
2,919
3,444
113
34
fohdeesha.com
A user on Reddit posted an interesting problem with his ICX6610-24 (non PoE). He said his switch idles at 180-200W and when under load, can hit 400W. I told him this makes no sense given that the spec sheet indicates it only requires a single 250W power supply. The specs say the second power supply is optional, for redundancy. Also, this thread indicates it should idle at 80W or so.

I asked him his OS version and power supply revision. He is on 08.0.10c. His output from “show chassis” is really strange:

Power supply 1 (AC - Regular) present, status ok
Model Number: 00-0000000-00
Serial Number: FFFF
Firmware Ver:
Power supply 1 Fan Air Flow Direction: Front to Back
Power supply 2 (AC - Regular) present, status ok
Model Number: 00-0000000-00
Serial Number: FFFF
Firmware Ver:
Power supply 2 Fan Air Flow Direction: Front to Back

Fan 1 ok, speed (auto): [[1]]<->2
Fan 2 ok, speed (auto): [[1]]<->2

So, all zeroes model number, FFFF serial number, and blank firmware version. He mentioned he has three of these power supplies, marked “S5/S2.”

I then suggested that he start fresh using the guide to upgrade his bootloader and firmware to the latest versions.

He indicated the switch shut down after upgrading the boot loader and now appears bricked.

Based on this information, does the switch appear to be genuine? Why would the power supply show a blank model number with no revision?
These switches were never counterfeited (that I'm aware of), so it's certainly genuine, but it sounds like it had a pretty serious fault to begin with. If a 250 W PSU is pulling 400 W, I'm assuming the PSU(s) themselves had a pretty bad fault, which could also explain why their manufacturing-data EEPROM couldn't be read (hence the all-FF serial and zeroed model number).
 

fohdeesha

Kaini Industries
Nov 20, 2016
2,919
3,444
113
34
fohdeesha.com
A user on Reddit posted an interesting problem with his ICX6610-24 (non PoE). He said his switch idles at 180-200W and when under load, can hit 400W. I told him this makes no sense given that the spec sheet indicates it only requires a single 250W power supply. The specs say the second power supply is optional, for redundancy. Also, this thread indicates it should idle at 80W or so.

I asked him his OS version and power supply revision. He is on 08.0.10c. His output from “show chassis” is really strange:

Power supply 1 (AC - Regular) present, status ok
Model Number: 00-0000000-00
Serial Number: FFFF
Firmware Ver:
Power supply 1 Fan Air Flow Direction: Front to Back
Power supply 2 (AC - Regular) present, status ok
Model Number: 00-0000000-00
Serial Number: FFFF
Firmware Ver:
Power supply 2 Fan Air Flow Direction: Front to Back

Fan 1 ok, speed (auto): [[1]]<->2
Fan 2 ok, speed (auto): [[1]]<->2

So, all zeroes model number, FFFF serial number, and blank firmware version. He mentioned he has three of these power supplies, marked “S5/S2.”

I then suggested that he start fresh using the guide to upgrade his bootloader and firmware to the latest versions.

He indicated the switch shut down after upgrading the boot loader and now appears bricked.

Based on this information, does the switch appear to be genuine? Why would the power supply show a blank model number with no revision?

Also, if his switch was really pulling 200 W or more like he states, the fans would NOT still be on speed 1 as his output indicates - that is an insane amount of heat for these fans to exhaust out of 1RU, and they would have ramped up to speed 2. How is he measuring power draw?