So, as a pretty invested Brocade enthusiast personally, and as a network admin with quite a few in production professionally, I have to once again share my disappointment with the design of the ICX 7450s.
I've been with my current employer for two years. In 2016, they purchased approximately 400 7450s to cover nearly all of my sites. Older firmware on the 8070 train was awful, but even setting that aside, there are fundamental hardware issues plaguing this line of switches that I cannot ignore.
Unlike a lot of Cisco or HP/Aruba offerings, Brocade has traditionally taken off-the-shelf components and integrated them into its products, or built its products in a modular fashion. For example: stacking cables are merely QSFP+ cables; the POE versions of the switches share the same motherboard as the non-POE versions, with the POE circuitry on a separate daughter board; and the VDX line has 10-gig copper versions that report each port as an Aquantia RJ45 SFP+ transceiver.
With the 7450s, I've experienced two common failures that are difficult to detect because they generate no SNMP traps: the POE daughter board fails, and the stacking cables fail. This year I've had a dozen of each of these two failure types across my 40 sites, and last year's rates were similar. That works out to about 5% of my total fleet each year.
In terms of POE failures, I will have switches that throw no errors yet fail to deliver power to half or more of the ports on the switch. Only if I reboot the stack can I coax out some errors, and even then I sometimes have to perform a firmware update before the errors show. Below is an example of what that looks like, and again, this is only after taking the entire switch stack out of production and rebooting it:
Code:
Port Admin Oper ---Power(mWatts)--- PD Type PD Class Pri Fault/
State State Consumed Allocated Error
--------------------------------------------------------------------------
5/1/1 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/2 On Non-PD 0 0 n/a n/a 3 n/a
5/1/3 On Non-PD 0 0 n/a n/a 3 n/a
5/1/4 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/5 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/6 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/7 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/8 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/9 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/10 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/11 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/12 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/13 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/14 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/15 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/16 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/17 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/18 On Non-PD 0 0 n/a n/a 3 n/a
5/1/19 On Non-PD 0 0 n/a n/a 3 n/a
5/1/20 On Non-PD 0 0 n/a n/a 3 n/a
5/1/21 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/22 On Non-PD 0 0 n/a n/a 3 n/a
5/1/23 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/24 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/25 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/26 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/27 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/28 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/29 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/30 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/31 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/32 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/33 On Non-PD 0 0 n/a n/a 3 n/a
5/1/34 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/35 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/36 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/37 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/38 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/39 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/40 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/41 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/42 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/43 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/44 On Non-PD 0 0 n/a n/a 3 n/a
5/1/45 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/46 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/47 On Off 0 0 n/a n/a 3 internal h/w fault
5/1/48 On Non-PD 0 0 n/a n/a 3 n/a
--------------------------------------------------------------------------
Total                                    0          0
As a counterexample, this is what a "healthy" switch reports:
Code:
Power Capacity: Total is 1496000 mWatts. Current Free is 1420590 mWatts.
Power Allocations: Requests Honored 48 times
Port Admin Oper ---Power(mWatts)--- PD Type PD Class Pri Fault/
State State Consumed Allocated Error
--------------------------------------------------------------------------
1/1/1 On Off 0 0 n/a n/a 3 n/a
1/1/2 On Off 0 0 n/a n/a 3 n/a
1/1/3 On Off 0 0 n/a n/a 3 n/a
1/1/4 On Off 0 0 n/a n/a 3 n/a
1/1/5 On Off 0 0 n/a n/a 3 n/a
1/1/6 On Off 0 0 n/a n/a 3 n/a
1/1/7 On Off 0 0 n/a n/a 3 n/a
1/1/8 On Off 0 0 n/a n/a 3 n/a
1/1/9 On Off 0 0 n/a n/a 3 n/a
1/1/10 On Off 0 0 n/a n/a 3 n/a
1/1/11 On Non-PD 0 0 n/a n/a 3 n/a
1/1/12 On Non-PD 0 0 n/a n/a 3 n/a
1/1/13 On Off 0 0 n/a n/a 3 n/a
1/1/14 On Off 0 0 n/a n/a 3 n/a
1/1/15 On Off 0 0 n/a n/a 3 n/a
1/1/16 On Off 0 0 n/a n/a 3 n/a
1/1/17 On Off 0 0 n/a n/a 3 n/a
1/1/18 On On 2600 15400 802.3af n/a 3 n/a
1/1/19 Off Off 0 0 n/a n/a 3 n/a
1/1/20 On On 6100 15400 802.3af n/a 3 n/a
1/1/21 On On 2600 15400 802.3af n/a 3 n/a
1/1/22 On Off 0 0 n/a n/a 3 n/a
1/1/23 On On 8400 13810 802.3af Class 3 3 n/a
1/1/24 On Off 0 0 n/a n/a 3 n/a
1/1/25 On Off 0 0 n/a n/a 3 n/a
1/1/26 On Off 0 0 n/a n/a 3 n/a
1/1/27 On Off 0 0 n/a n/a 3 n/a
1/1/28 On Off 0 0 n/a n/a 3 n/a
1/1/29 On Off 0 0 n/a n/a 3 n/a
1/1/30 On Off 0 0 n/a n/a 3 n/a
1/1/31 On Off 0 0 n/a n/a 3 n/a
1/1/32 On Off 0 0 n/a n/a 3 n/a
1/1/33 On Off 0 0 n/a n/a 3 n/a
1/1/34 On Off 0 0 n/a n/a 3 n/a
1/1/35 On Off 0 0 n/a n/a 3 n/a
1/1/36 On Off 0 0 n/a n/a 3 n/a
1/1/37 On Non-PD 0 0 n/a n/a 3 n/a
1/1/38 On Non-PD 0 0 n/a n/a 3 n/a
1/1/39 On Off 0 0 n/a n/a 3 n/a
1/1/40 On Off 0 0 n/a n/a 3 n/a
1/1/41 On Non-PD 0 0 n/a n/a 3 n/a
1/1/42 On Non-PD 0 0 n/a n/a 3 n/a
1/1/43 On On 10500 15400 802.3at Class 4 3 n/a
1/1/44 On Non-PD 0 0 n/a n/a 3 n/a
1/1/45 On Non-PD 0 0 n/a n/a 3 n/a
1/1/46 On Non-PD 0 0 n/a n/a 3 n/a
1/1/47 On Non-PD 0 0 n/a n/a 3 n/a
1/1/48 On Non-PD 0 0 n/a n/a 3 n/a
--------------------------------------------------------------------------
Total 30200 75410
An unhealthy switch affected by this problem will deliver power to some ports but not others. Generally, the first 12 ports will work and the rest won't, or the inverse. In some cases, the switch will even be delivering power to devices, but the inline power table will report that no devices are drawing power at all.
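Since these failures raise no traps, the only detection method I've found is polling the CLI on a schedule. Below is a minimal sketch of the parsing half in Python; it assumes the table layout shown above, and how you collect the output (SSH via netmiko, expect scripts, etc.) and where you send the alert are up to you:

```python
import re

def poe_fault_ports(show_inline_power_output: str):
    """Return the port IDs reporting 'internal h/w fault' in
    FastIron-style 'show inline power' output."""
    faults = []
    for line in show_inline_power_output.splitlines():
        # Port IDs look like unit/slot/port, e.g. 5/1/10
        m = re.match(r"\s*(\d+/\d+/\d+)\s", line)
        if m and "internal h/w fault" in line:
            faults.append(m.group(1))
    return faults
```

Run it against each switch's output and alert whenever the returned list is non-empty; on my fleet, any port in that state has meant a failed POE daughter board.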
I've also experienced some other oddities with cheap POE devices that work fine on Cisco or HP/Aruba switches. I have a non-isolated POE hat for my RPi 4s. If I plug the RPis into a 7450 or a 6450, the Pis power up and work fine. But if I then plug a monitor into an RPi, the entire POE subsystem fails and every POE-enabled port stops delivering power. POE then comes back for a few seconds, turns off again, and repeats in an infinite cycle. I've not tested this on a 7150, but I suspect it behaves similarly. No other switch brand I've tested shares this behavior.
The second problem I have seen involves stacking cable failures. I'll receive reports that certain computers in a building cannot connect to the network, yet nothing in the logs indicates anything is wrong. All the ports are up, the spanning-tree forward-transition counters don't show any links flapping, and all stack members show up in the proper topology.
However, one of the cables will be working intermittently. If, say, the cable running from stack port 1/4/1 to 2/3/1 goes bad, it's hard to find the problem. The only troubleshooting that has worked for me is noticing that devices on switches 1 and 3 are working fine while devices on switch 2 are not. When you have stacks as large as 7 or 8 members, this makes finding the problem difficult at best.
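To localize a bad link faster, I've scripted roughly the following: probe one known-good endpoint behind each stack member and see which units stop answering. This is a sketch with hypothetical IPs, not my exact tooling; substitute real device addresses for your sites:

```python
import subprocess

# Hypothetical map: one known-good endpoint behind each stack unit.
# Replace these with real device IPs at your site.
UNIT_PROBES = {1: "10.0.1.10", 2: "10.0.2.10", 3: "10.0.3.10"}

def ping_once(ip: str) -> bool:
    """One ping with a 1-second timeout; True if the host answered."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "1", ip],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

def unreachable_units(probes=UNIT_PROBES, probe_fn=ping_once):
    """Return the stack units whose sample device didn't answer,
    which points at the stacking link feeding that unit."""
    return [unit for unit, ip in probes.items() if not probe_fn(ip)]
```

If only the middle units of a ring go dark, the suspect cables are the ones feeding those units, which beats walking a 7- or 8-member stack blind.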
Anyway, I just wanted to share my exciting life with you all; I hope this helps someone someday. For homelab use I love Brocade, and for small deployments they are fantastic, but for large deployments, please be wary. Luckily, these switches do carry "lifetime warranties," and so far Ruckus has honored all of our RMA requests once we provide proof that the POE module has failed. We were also lucky enough to have decommed a bunch of switches from a site that closed, so we have stock to rotate in while we await the RMAs; if that were not the case, we would be in big trouble. A 5% failure rate may not sound like much, but services are impacted randomly, unexpectedly, and without any meaningful alerts from the switches. Twelve failed switches in the past year means as many as 576 devices (48 ports each) stopped working for a day or more before we could diagnose and address the issue. When you are talking about telephones and other safety-critical services, this is just not an acceptable enterprise product.