
Brocade ICX Series (cheap & powerful 10GbE/40GbE switching)


klui

Well-Known Member
Feb 3, 2019
824
453
63
I was simply trying to find out what licenses are required for which switches. This might seem like a simple and stupid question to someone with experience, but for someone who is inexperienced with most Brocade equipment, it is a legitimate question.
See the first post, where @fohdeesha nicely linked each switch's datasheet? The datasheets detail the licenses each model needs to activate its full capability. The same is true for all other vendors, not just Brocade.
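For what it's worth, you can see what is already unlocked on a given unit straight from the CLI. A rough sketch (exact commands and output vary a little by model and FastIron release):
Code:
show version
show license
If memory serves, show license lists any premium or Ports-on-Demand entries that have been installed, and show version includes the running image, which is handy when reading the datasheet's license matrix.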
 
  • Like
Reactions: Vesalius

JoJoMan

New Member
Jul 19, 2021
10
6
3
Seems like some kind of negotiation error. Did you update the PoE firmware as well? If I recall correctly, the highest power class (30 W) requires LLDP to negotiate fully. Can you try running the following, then unplug/replug the devices?


Code:
enable
conf t
lldp run
This fixed it :D

Seems like the non-PoE+ ports work as well :)

Thanks a bunch
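For anyone else who hits this, a quick way to confirm that LLDP power negotiation is actually happening on a port. This is a sketch from memory (the port number is just an example, and the detail/filter syntax can differ slightly between FastIron releases):
Code:
show lldp neighbors detail ports ethernet 1/1/1
show inline power
The LLDP detail output should include the power-via-MDI information advertised by the powered device, and the inline power table should show the allocated wattage rise once negotiation succeeds.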
 
  • Like
Reactions: fohdeesha

JoJoMan

New Member
Jul 19, 2021
10
6
3
This fixed it :D

Seems like the non-PoE+ ports work as well :)

Thanks a bunch
I spoke too soon. One of the Pis works fine; the others seem to get some power (lights turn on on the Pi), but the Ethernet lights don't come on, and the switch doesn't seem to notice that it should be providing power.

Here, port 27 is the one with the working Pi, and port 34 has a Pi that gets some power (the power lights turn on) but no Ethernet lights:
Code:
Port   Admin   Oper    ---Power(mWatts)---  PD Type  PD Class  Pri  Fault/
        State   State   Consumed  Allocated                          Error
--------------------------------------------------------------------------
 1/1/27    On     On          4000      15400  802.3af  n/a         3  n/a
 1/1/28    Off    Off            0          0  n/a      n/a         3  n/a
 1/1/29    Off    Off            0          0  n/a      n/a         3  n/a
 1/1/30    Off    Off            0          0  n/a      n/a         3  n/a
 1/1/31    Off    Off            0          0  n/a      n/a         3  n/a
 1/1/32    Off    Off            0          0  n/a      n/a         3  n/a
 1/1/33    Off    Off            0          0  n/a      n/a         3  n/a
 1/1/34    On     Off            0          0  n/a      n/a         3  n/a
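In case it helps anyone triaging something similar, there is a more verbose PoE status view that also reports the PoE controller firmware. A sketch from memory (the exact output varies by model and PoE firmware):
Code:
show inline power detail
Comparing that output against a known-good unit is sometimes more telling than the per-port table alone.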
 

ZFSZealot

New Member
Aug 16, 2021
26
6
3
I have not tried physically reversing the fans... but I do know that the fan tray that connects the two "tandem" fans to the switch has resistors in the wire/connector (on the fan side). That's how the switch knows whether you put in a fan tray that's "front to back" or "back to front". You'll have to hack that as well.
Sorry about dredging this up from three years ago, but I'd like to clarify this point. If you were to buy a 6610 and wanted to reverse the direction all the fans blow, would it be adequate to just take the fans out of the fan tray(s) and PSU(s) and physically turn them around? Does it really matter if the switch thinks they're blowing in the original direction? (i.e. is hacking the resistor really necessary?) I get that the control plane is still going to report everything as "back to front" if I switch the fans to "front to back" without a resistor hack.

The reason I'm asking is that you generally see two variants of the 6610. The "E" version usually comes with PoE and just rack ears, and the "I" version usually comes without PoE but will occasionally include the 4-post rail kit. It kind of makes sense: a 6610-48-I looks like it would go in a four-post rack to connect servers as a ToR use case, while a 6610-48P-E looks more like something used in a comms rack with patch panels for WiFi APs or other PoE devices.

I don't really need PoE, as I have midspans for that, but I would really like the 4-post kit. The hitch is that what best fits my needs (4-post rails, no PoE) is usually the "I" version, and I'd like to mount it with the ports facing forward to patch to the midspan and the patch panels for the building wiring... which needs the airflow to run in the opposite direction from the one the "I" version ships with.
 

nickf1227

Active Member
Sep 23, 2015
197
128
43
33
So, as a pretty invested Brocade enthusiast personally and as a network admin who has quite a few in production professionally, I have to once again share my disappointment with the design of the 7450s.

I've been with my current employer for 2 years. In 2016, they purchased approximately 400 7450s to cover nearly all of my sites. Older firmware on the 08.0.70 train was awful, but that aside, there are fundamental hardware issues that plague this line of switches, and I cannot ignore them.

Unlike a lot of Cisco or HP/Aruba offerings, Brocade has traditionally taken off-the-shelf components and integrated them into its products, or built its products in a modular fashion. For example, the stacking cables are merely QSFP+ cables, the PoE versions of the switches share the same motherboard as the non-PoE versions and add a separate daughterboard, and the VDX line has 10-gig copper versions that report each port as an Aquantia RJ45 SFP+ transceiver.

In terms of the 7450s, I've experienced two common failures that are relatively difficult to detect because they do not generate SNMP traps: the PoE daughterboard fails, and the stacking cables fail. This year I've had a dozen of each of these two failure types across my 40 sites, and last year's rates were similar. That is about 5% of my total fleet each year.

In terms of PoE failures, I will have switches that throw no errors but fail to deliver power to half or more of their ports. Only if I reboot the stack can I coax out some errors, and even then I sometimes have to do a firmware update for the errors to show. Below is an example of what that looks like, again only after taking the entire switch stack out of production and rebooting it:

Code:
Port   Admin   Oper    ---Power(mWatts)---  PD Type  PD Class  Pri  Fault/
       State   State   Consumed  Allocated                          Error
--------------------------------------------------------------------------
5/1/1 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/2 On Non-PD 0 0 n/a n/a 3 n/a
5/1/3 On Non-PD 0 0 n/a n/a 3 n/a
5/1/4 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/5 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/6 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/7 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/8 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/9 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/10 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/11 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/12 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/13 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/14 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/15 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/16 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/17 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/18 On Non-PD 0 0 n/a n/a 3 n/a
5/1/19 On Non-PD 0 0 n/a n/a 3 n/a
5/1/20 On Non-PD 0 0 n/a n/a 3 n/a
5/1/21 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/22 On Non-PD 0 0 n/a n/a 3 n/a
5/1/23 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/24 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/25 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/26 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/27 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/28 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/29 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/30 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/31 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/32 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/33 On Non-PD 0 0 n/a n/a 3 n/a
5/1/34 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/35 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/36 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/37 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/38 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/39 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/40 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/41 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/42 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/43 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/44 On Non-PD 0 0 n/a n/a 3 n/a
5/1/45 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/46 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/47 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/48 On Non-PD 0 0 n/a n/a 3 n/a
--------------------------------------------------------------------------
Total                         0          0
As a counter example, this is what a "healthy" switch reports:
Code:
Power Capacity:         Total is 1496000 mWatts. Current Free is 1420590 mWatts.

Power Allocations:      Requests Honored 48 times


Port   Admin   Oper    ---Power(mWatts)---  PD Type  PD Class  Pri  Fault/
        State   State   Consumed  Allocated                          Error
--------------------------------------------------------------------------
  1/1/1 On      Off            0          0  n/a      n/a         3  n/a
  1/1/2 On      Off            0          0  n/a      n/a         3  n/a
  1/1/3 On      Off            0          0  n/a      n/a         3  n/a
  1/1/4 On      Off            0          0  n/a      n/a         3  n/a
  1/1/5 On      Off            0          0  n/a      n/a         3  n/a
  1/1/6 On      Off            0          0  n/a      n/a         3  n/a
  1/1/7 On      Off            0          0  n/a      n/a         3  n/a
  1/1/8 On      Off            0          0  n/a      n/a         3  n/a
  1/1/9 On      Off            0          0  n/a      n/a         3  n/a
1/1/10 On      Off            0          0  n/a      n/a         3  n/a
1/1/11 On      Non-PD         0          0  n/a      n/a         3  n/a
1/1/12 On      Non-PD         0          0  n/a      n/a         3  n/a
1/1/13 On      Off            0          0  n/a      n/a         3  n/a
1/1/14 On      Off            0          0  n/a      n/a         3  n/a
1/1/15 On      Off            0          0  n/a      n/a         3  n/a
1/1/16 On      Off            0          0  n/a      n/a         3  n/a
1/1/17 On      Off            0          0  n/a      n/a         3  n/a
1/1/18 On      On          2600      15400  802.3af  n/a         3  n/a
1/1/19 Off     Off            0          0  n/a      n/a         3  n/a
1/1/20 On      On          6100      15400  802.3af  n/a         3  n/a
1/1/21 On      On          2600      15400  802.3af  n/a         3  n/a
1/1/22 On      Off            0          0  n/a      n/a         3  n/a
1/1/23 On      On          8400      13810  802.3af  Class 3     3  n/a
1/1/24 On      Off            0          0  n/a      n/a         3  n/a
1/1/25 On      Off            0          0  n/a      n/a         3  n/a
1/1/26 On      Off            0          0  n/a      n/a         3  n/a
1/1/27 On      Off            0          0  n/a      n/a         3  n/a
1/1/28 On      Off            0          0  n/a      n/a         3  n/a
1/1/29 On      Off            0          0  n/a      n/a         3  n/a
1/1/30 On      Off            0          0  n/a      n/a         3  n/a
1/1/31 On      Off            0          0  n/a      n/a         3  n/a
1/1/32 On      Off            0          0  n/a      n/a         3  n/a
1/1/33 On      Off            0          0  n/a      n/a         3  n/a
1/1/34 On      Off            0          0  n/a      n/a         3  n/a
1/1/35 On      Off            0          0  n/a      n/a         3  n/a
1/1/36 On      Off            0          0  n/a      n/a         3  n/a
1/1/37 On      Non-PD         0          0  n/a      n/a         3  n/a
1/1/38 On      Non-PD         0          0  n/a      n/a         3  n/a
1/1/39 On      Off            0          0  n/a      n/a         3  n/a
1/1/40 On      Off            0          0  n/a      n/a         3  n/a
1/1/41 On      Non-PD         0          0  n/a      n/a         3  n/a
1/1/42 On      Non-PD         0          0  n/a      n/a         3  n/a
1/1/43 On      On         10500      15400  802.3at  Class 4     3  n/a
1/1/44 On      Non-PD         0          0  n/a      n/a         3  n/a
1/1/45 On      Non-PD         0          0  n/a      n/a         3  n/a
1/1/46 On      Non-PD         0          0  n/a      n/a         3  n/a
1/1/47 On      Non-PD         0          0  n/a      n/a         3  n/a
1/1/48 On      Non-PD         0          0  n/a      n/a         3  n/a
--------------------------------------------------------------------------
Total                     30200      75410
A non-healthy switch affected by this problem will deliver power to some ports but not others. Generally, the first 12 ports will work and the rest won't, or the inverse. In some cases the switch will even be delivering power to devices, but if you look at the inline power table, it will report that no devices are drawing power at all.

I've also experienced some other oddities with crappy PoE devices that work fine on Cisco or HP/Aruba switches. I have a non-isolated PoE hat for my RPi 4s:
[attached image: the non-isolated PoE HAT used on the RPi 4s]

If I plug my RPis into a 7450 or a 6450, the Pis power up and work fine. If I then plug a monitor into the RPis, the entire PoE subsystem fails and every PoE-enabled port stops delivering power. PoE then comes back for a few seconds and turns off again, in an infinite cycle. I've not tested this on a 7150, but I suspect it behaves similarly. No other switch brand I've tested shares this behavior.
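For reference, PoE can be bounced on a single port from the CLI without touching the rest of the switch; this is just standard interface-level config, with the port number as an example:
Code:
conf t
interface ethernet 1/1/18
 no inline power
 inline power
It only resets power on that one port, though, so it may not help once the whole PoE subsystem is stuck in that fail/recover loop.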

The second problem I have seen involves stacking cable failures. I'll receive reports that certain computers in a building are not able to connect to the network, but there is no information in the logs that indicates anything is wrong. All the ports are up, the spanning tree forward-transition counters don't show any links flapping, and all stack members show up in the proper configuration, like this:

[attached screenshot: stack topology output showing all members present and connected]

However, one of the cables will be working only intermittently, so using the graphic above, if the cable going from 1/4/1 to 2/3/1 goes bad, it's hard to find the problem. The only way I have been able to pin it down is by noticing that devices on switches 1 and 3 work fine while devices on switch 2 do not. When you have stacks as large as 7 or 8 units, this makes finding the problem difficult at best.
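The closest thing to a useful manual check is looking at the stack topology and then at the error counters on the individual stacking ports. A rough sketch (the stack-port numbering here is only an example and differs by model):
Code:
show stack
show interfaces ethernet 1/2/1
show interfaces ethernet 2/2/1
A marginal stacking cable may show up as CRC or input errors creeping up on one of those rear ports even while the link itself stays up.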

Anyway, I just wanted to share my exciting life with you all; I hope this helps someone someday. For homelab stuff I love Brocade, and for small deployments they are fantastic. For large deployments, please be wary. Luckily, they do have "lifetime warranties", and so far Ruckus has honored all of our RMA requests once we provide proof that the PoE module has failed. We were lucky enough to have decommissioned a bunch of switches from a site that closed, so we have stock we can rotate in while we await the RMAs. If that were not the case, we would be in big trouble. A 5% failure rate may not sound like much, but services are impacted randomly, unexpectedly, and without any meaningful alerts from the switches. Twelve failed switches in the past year means as many as 576 devices stopped working for a day or more before we could diagnose and address the issue. When you are talking about telephones and other safety-related systems, that is just not acceptable in an enterprise product.
 
Last edited:

JoJoMan

New Member
Jul 19, 2021
10
6
3
I spoke too soon. One of the Pis works fine; the others seem to get some power (lights turn on on the Pi), but the Ethernet lights don't come on, and the switch doesn't seem to notice that it should be providing power.

Here, port 27 is the one with the working Pi, and port 34 has a Pi that gets some power (the power lights turn on) but no Ethernet lights:
Code:
Port   Admin   Oper    ---Power(mWatts)---  PD Type  PD Class  Pri  Fault/
        State   State   Consumed  Allocated                          Error
--------------------------------------------------------------------------
1/1/27    On     On          4000      15400  802.3af  n/a         3  n/a
1/1/28    Off    Off            0          0  n/a      n/a         3  n/a
1/1/29    Off    Off            0          0  n/a      n/a         3  n/a
1/1/30    Off    Off            0          0  n/a      n/a         3  n/a
1/1/31    Off    Off            0          0  n/a      n/a         3  n/a
1/1/32    Off    Off            0          0  n/a      n/a         3  n/a
1/1/33    Off    Off            0          0  n/a      n/a         3  n/a
1/1/34    On     Off            0          0  n/a      n/a         3  n/a
I did some more messing around, and I think I have a similar issue to @nickf1227: I can power one Pi properly with PoE.
The other two just have some lights come on, showing they're getting SOME power, but it seems it's not enough to boot, as they don't connect to the network and the Ethernet connection/activity lights never come on.

If I disconnect the properly working Pi, the switch suddenly seems to have enough power and one of the two non-working Pis will start working.

Also, I double-checked to make sure I have the latest firmware for everything; I installed 08.0.30u and updated the PoE firmware as well, just in case.
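If anyone wants to sanity-check what they are actually running before chasing this further, the image and boot details can be pulled with a couple of standard commands (a quick sketch; output differs by platform):
Code:
show version
show flash
show version reports the running FastIron release, and show flash lists the images sitting in primary and secondary flash.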
 

aaroneaton

New Member
Jan 15, 2021
12
0
1
www.rfehosting.com
I have been a long-time lurker here for a year or so now. Anyway, I've had a Brocade ICX6610 for about a year and a few months, and I'm just now getting into trying to use the QSFP ports broken out to 10Gbps ports.

Anyway, I am having issues getting them to work. I do not have stacking enabled. I purchased the cable below from FS.com:
Cisco QSFP-4SFP10G-CU3M 40G QSFP+ Breakout DAC Cable - FS
I was not able to get it working, so I emailed them; they said I needed a customized one, I ordered that, and it's still a no-go.

The target on each breakout end is an Intel 10Gb NIC in a Dell R620 server.
Regular FS DAC cables work on each server, but I am getting no link lights or anything using the breakout cable on any of its 4 ports.

Any other tips or tricks you can suggest? I have read through just about everything in this thread about getting breakout ports working and have not found my solution.

Looking forward to your reply.
 

itronin

Well-Known Member
Nov 24, 2018
1,234
794
113
Denver, Colorado
I have been a long-time lurker here for a year or so now. Anyway, I've had a Brocade ICX6610 for about a year and a few months, and I'm just now getting into trying to use the QSFP ports broken out to 10Gbps ports.

Anyway, I am having issues getting them to work. I do not have stacking enabled. I purchased the cable below from FS.com:
Cisco QSFP-4SFP10G-CU3M 40G QSFP+ Breakout DAC Cable - FS
I was not able to get it working, so I emailed them; they said I needed a customized one, I ordered that, and it's still a no-go.
Please post your config. Yeah, I know you said you have stacking disabled, but it's worth a look.

For the rear connections:

I'm currently using AOC QSFP to SFP+ breakouts (generic/Cisco) and they work great. I've also tested a generic QSFP to SFP+ DAC, which worked fine.
I've used NetApp QSFP to QSFP cables for stacking, on both the 40GbE and breakout ports - worked fine.
I've used an AOC QSFP to QSFP cable for switch-to-host - that also worked fine.

Nothing magical in the cables I've used - in fact, they were the cheapest I could find of their respective types - and the configs were pretty much stock on the 1/2/x side.

When you say regular FS DAC cables work, do you mean going from the front SFP+ ports to the servers?

What did show int give you on the breakout ports when you were testing?

For example, here's a snippet from my show run on a standalone ICX6610; as you can see, nothing is configured for my 1/2/x ports and they all work for hosts:

Code:
!
lag LAG41 dynamic id 41
 ports ethernet 1/2/1 ethernet 1/2/6 
 primary-port 1/2/6
 deploy
!
...
!
interface ethernet 1/1/24
 dual-mode  249
 inline power
!
interface ethernet 1/3/1
 speed-duplex 10G-full
!
Two 10GbE links are in use; 40GbE is configured but that host is down at the moment. That's with a generic 40GbE-to-10GbE AOC breakout.

Code:
SSH@icx6610-stack#show inter br ethe 1/2/1 to 1/2/10 

Port       Link    State   Dupl Speed Trunk Tag Pvid Pri MAC             Name
1/2/1      Down    None    None None  41    Yes N/A  0   748e.f8dc.ae80                 
1/2/2      Down    None    None None  None  Yes N/A  0   748e.f8dc.ae80                 
1/2/3      Down    None    None None  None  Yes N/A  0   748e.f8dc.ae80                 
1/2/4      Down    None    None None  None  Yes N/A  0   748e.f8dc.ae80                 
1/2/5      Up      Forward Full 10G   None  Yes N/A  0   748e.f8dc.ae80                 
1/2/6      Down    None    None None  41    Yes N/A  0   748e.f8dc.ae80                 
1/2/7      Down    None    None None  None  Yes N/A  0   748e.f8dc.ae80                 
1/2/8      Down    None    None None  None  Yes N/A  0   748e.f8dc.ae80                 
1/2/9      Down    None    None None  None  Yes N/A  0   748e.f8dc.ae80                 
1/2/10     Up      Forward Full 10G   None  Yes N/A  0   748e.f8dc.ae80                 
SSH@icx6610-stack#
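One more thing that might help narrow it down: check whether the switch even recognizes the breakout DAC on the rear ports. A sketch from memory (the media keyword and range syntax may vary slightly by release):
Code:
show media ethernet 1/2/1 to 1/2/10
If the cable isn't detected at all there, that points at the cable or its coding rather than the port config.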
 

SuperMiguel

New Member
Jun 17, 2021
20
2
3
I have an ICX6610 48-port PoE connected to a Tripp Lite SMART1500LCD UPS, and every time the power goes out or even flickers, the switch turns off. Nothing else connected to the UPS turns off, only the switch. I have tried one PSU at a time, tried both connected to the UPS, took one out, and tried a different outlet on the back of the UPS, and nothing seems to work; it's always the same problem... Any ideas? Suggestions?
 

NablaSquaredG

Layer 1 Magician
Aug 17, 2020
1,319
800
113
I have an ICX6610 48-port PoE connected to a Tripp Lite SMART1500LCD UPS, and every time the power goes out or even flickers, the switch turns off. Nothing else connected to the UPS turns off, only the switch. I have tried one PSU at a time, tried both connected to the UPS, took one out, and tried a different outlet on the back of the UPS, and nothing seems to work; it's always the same problem... Any ideas? Suggestions?
I have also noticed that the Brocade PSUs are a bit more susceptible to power issues than standard PSUs. Which revision do you have? I will check with ours ASAP.

EDIT: According to the datasheet, your Tripp Lite UPS outputs a modified sine wave... That might be the issue.
High-end PSUs typically don't like non-sinusoidal voltage inputs, whereas cheaper PSUs (with just passive PFC, for example) can cope with it...
 
Last edited:

SuperMiguel

New Member
Jun 17, 2021
20
2
3
I have also noticed that the Brocade PSUs are a bit more susceptible to power issues than standard PSUs. Which revision do you have? I will check with ours ASAP.

EDIT: According to the datasheet, your Tripp Lite UPS outputs a modified sine wave... That might be the issue.
High-end PSUs typically don't like non-sinusoidal voltage inputs, whereas cheaper PSUs (with just passive PFC, for example) can cope with it...
I have revision A.

"Your Tripp Lite UPS outputs a modified sine wave" - is there anything I can do about it, or do I basically need to get another UPS?
 
Last edited:

nickf1227

Active Member
Sep 23, 2015
197
128
43
33
Hi guys @JoJoMan @nick5768

Have you tried this command?

Enables support for Power over Ethernet (PoE) legacy power-consuming devices.
Code:
legacy-inline-power
Hello,
Yes. That specific command helps in some specific scenarios, like installing very old IP telephones running very old firmware. Think Cisco 7940.

The problem I am describing is that, under normal circumstances with 802.3af/at-compliant devices, I've had multiple failures of the PoE daughterboard on my 7450s and, to a lesser extent, my 6450s.

The other problem I've noticed is that non-isolated PoE devices (which, in fairness, are out of spec) cause havoc on Brocade switches. Cisco and HP ProCurve switches handle Pis with non-isolated PoE hats without issue, whereas on Brocade, if ANY OTHER DEVICE is plugged into a Pi with that hat on, the entire PoE daughterboard barfs all over itself and stops delivering power to all ports in an infinite loop.

It's not that the Brocade switches aren't any good; it's that they are much more delicate than other enterprise-grade switches I have experience with. I've had Cisco 6509s and 4506s running for years in rooms that are over 100 degrees during the summer months. I've found HP 2910 and 2920 switches running in ceilings, inside wooden cabinets with closed doors and no airflow. I have a fleet of Brocade 7450s, some in rooms with dedicated CRAC units and others without, all of them running behind Eaton 9K or 5K UPSs... but in either case I still see PoE module failures.
 
Last edited:

noduck

Member
Sep 12, 2020
38
10
8
Just tried this, no dice.

When the one Pi does show up, it shows as using 802.3af, so it shouldn't need legacy power anyway.
I did some more messing around, and I think I have a similar issue to @nickf1227: I can power one Pi properly with PoE.
The other two just have some lights come on, showing they're getting SOME power, but it seems it's not enough to boot, as they don't connect to the network and the Ethernet connection/activity lights never come on.

If I disconnect the properly working Pi, the switch suddenly seems to have enough power and one of the two non-working Pis will start working.

Also, I double-checked to make sure I have the latest firmware for everything; I installed 08.0.30u and updated the PoE firmware as well, just in case.
I have no issue powering multiple Pis from both a 7250-24p and a c7150-12 (three each, six total). I am not using any PoE hat, just external PoE adapters:
 

nickf1227

Active Member
Sep 23, 2015
197
128
43
33
The Brocades will power literally as many Pis as you can plug into them, but they have to have ISOLATED PoE hats. I can't speak for the USB adapters above, but I trust they will probably work fine if that's what that poster is using. PoE Texas is fantastic.
 
  • Like
Reactions: JoJoMan

JoJoMan

New Member
Jul 19, 2021
10
6
3
The Brocades will power literally as many Pis as you can plug into them, but they have to have ISOLATED PoE hats. I can't speak for the USB adapters above, but I trust they will probably work fine if that's what that poster is using. PoE Texas is fantastic.
This is probably my issue then; the hat I am using is non-isolated.
I ordered some new isolated hats and will update when they arrive.
 

nickf1227

Active Member
Sep 23, 2015
197
128
43
33
This is probably my issue then; the hat I am using is non-isolated.
I ordered some new isolated hats and will update when they arrive.
Best of luck; please report back with your findings and the model you chose.
This model should work fine: Amazon.com: LoveRPi Power-Over-Ethernet (PoE) HAT for Raspberry Pi 4 Model B and Raspberry Pi 3 Model B+ (Professional, Isolated (3KV)) : Electronics

Just for those who are lurking here: "non-isolated PoE" is literally not compliant with 802.3, so in Brocade's defense it's not really on them. It's just disappointing that the other major brands can handle it without issue.
Navigating the IEEE 802.3af Standard for PoE | Power Electronics

Some UniFi switches had a similar problem:
[attached image: report of a similar issue affecting some UniFi switches]

The actual standard specifies the following:
[attached excerpt: the isolation requirement from the IEEE 802.3af standard]

I'm not sure exactly what in the Brocade version of the PoE daughterboard causes the inability to work properly with these crappy PoE hats. Brocade claims the switches are IEEE 802.3af/at compliant, which would include the above provision on the PSE side. I'm not an electrical engineer, but the behavior I have seen, and that JoJoMan is seeing, makes me curious... I'm not sure how power feedback from one PoE device should have any effect on other PoE devices if the switches are following the standard properly.