Good news/bad news, if anyone is interested: I took another look at the board today and found the following:
- Possible liquid damage of some sort near the power connector on the PoE board
- R27 and R201 are 0 ohm resistors but appear burnt; they measure open, so they are definitely toast
- F45 and F20 measure open (they're fuses, so they should measure ~0 ohms)
- F51 near PoE controller U6 measures open
- Ports 1-4 are confirmed dead

I'm not sure I want to take the board off the chassis and investigate whether there's any damage under there. I think I'll pass.
Aside from this, my unit has 2x Rev A power supplies and 2x fan modules. This switch is way louder than I was hoping, even with no PoE load; I can hear it through the ceiling of my mechanical room into the living room above it (my house admittedly has little/no sound insulation).
More or less; most commonly they're used for debugging and/or optional components. Debugging PCBs with a bunch of internal layers is no fun; adding a few 0 ohm resistors makes it more tolerable…
theoretically, although I can't remember if the non-poe models have the PoE daughterboard header populated with a socket or not. you'd need PoE power supplies too obviously. in the current market it's probably way cheaper to just buy a complete poe model
The NON-POE models do not have the headers on them. The headers are standard Digikey parts and can be found. I just desoldered from my dead board and put them on.
The good news: after a reflash with everything soldered on and the PoE board swapped in, it IS detected as a PoE model.
The bad news: I can't get it to actually power anything on. The PoE commands work just fine, and the same cable and device power up in my official PoE device.
I'm not sure what the issue is, but I'd be open to trying some hackery.
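For reference, by "the PoE commands work" I mean that something along the lines of the usual FastIron inline power commands responds normally; it just never delivers any power. A rough sketch from memory (the prompt, port number, and exact commands here are placeholders rather than my actual session):

Code:
ICX# configure terminal
ICX(config)# interface ethernet 1/1/1
ICX(config-if-e1000-1/1/1)# inline power
ICX(config-if-e1000-1/1/1)# end
ICX# show inline power

In other words, the commands go through without errors, but the connected device never powers up.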
Well, I guess I'm late to the party, but I just joined the ICX club. Snagged what looked like a great deal on two 7250-48P units...
... but two fans were DOA (missing fan blades; the noise and vibration were unbelievable!!) and the corner of one unit was completely smashed in (it looked like it was dropped on the corner). They were sold as "tested, light scratches from regular use, and factory reset; ready for re-use!", of course. Luckily everything works fine; I'll just have to replace some fans (I'll probably replace them all as a precaution) and take the board out of the dropped unit to try to fix the smashed-up corner (front-left, near the console port). Interestingly, they were definitely not factory reset; running no password in the bootloader let me log in and take a look at the existing configuration. The eBay seller understandably didn't know much about these switches, but certainly should have noticed the broken fans & physical damage; they clearly were not "tested."
Anyways, I had an interesting issue when flashing the updated firmware on one of the two units. Since I can't find this exact issue in the thread yet, I figured I'd document it in case someone else runs into it:
After updating uboot to 10.1.18, the switch would correctly reboot into 10.1.18. Updating the primary image with update_primary worked just fine, but the switch would reset when running boot_primary instead of simply booting into the newly flashed primary image. The strange thing was that after the unprompted reset, the bootloader would revert to 10.1.06T215 and thus couldn't boot the new primary image ("ERROR: can't get kernel image!", as reported previously in this thread).
I repeated this a few times thinking I had missed a step since the same process worked perfectly on the first unit (same existing bootloader & OS version). I've attached the text from the console if anyone wants to take a look; note the "resetting ..." just after boot_primary (lines 37-38) and subsequent boot with bootloader 10.1.06T215.
The resolution was rather simple: update uboot again after running update_primary to update the primary image.
Finally, boot the primary image and continue flashing UFI image as you normally would.
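The order that actually stuck for me was roughly the following (just a sketch; the TFTP/image arguments are omitted since they're whatever the flashing guide specifies):

Code:
update_uboot ...    (flash the 10.1.18 bootloader; the switch reboots into it)
update_primary ...  (flash the primary image - after this, any reset reverts uboot to 10.1.06T215)
update_uboot ...    (flash the 10.1.18 bootloader a second time)
boot_primary        (boots the new primary image; continue with the UFI flash as usual)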
Out of curiosity, I played around between #1 and #3 a few times. Resetting and power cycling after updating uboot but before running update_primary wouldn't revert the bootloader; it would continue to boot to the newly flashed 10.1.18. It was only after running update_primary that any reset thereafter reverted the bootloader to 10.1.06T215.
Both switches originally had bootloader 10.1.06T215 and were running 8.0.30 but strangely this only happened on one of the two. The indicated build date for the 10.1.06T215 bootloader is December 14, 2015. Running 8.0.30, I would have assumed they'd be using a newer bootloader, but I guess not?
Now that the UFI image has been flashed, everything seems to be operating normally. Is there a backup/failsafe version of the bootloader stored in flash somewhere that's used in the event that the primary bootloader is corrupt or otherwise broken?
I bought a 6610-48-E with a single PSU & fan, then looked to get another PSU/fan for redundancy... but it was basically the same price as a whole switch with a single PSU/fan. LOL.
I did order a set of extra power supplies as well.
Between the 2 switches (1 with 2 PSUs and 2 fans, the other as a spare with 2 PSUs and no fans) and the 2 extra PSUs, I'm now in for $260.
Then I found a deal on a PoE version with 2 PSUs + 2 fans (as I need a solution for PoE as well) and it was $150 shipped. You guys are bankrupting me.
Now I'm trying to decide if I should do a stack. I need to read up on exactly what I need to do for that.
Also: for those of you doing MST for spanning tree, VLANs 3963-4096 on the Brocades can NOT BE ON ANY MST INSTANCE OUTSIDE OF 0. This was a surprise and disappointing.
that sounds like the same behavior I've seen when a legit PoE model is powered up with non-poe power supplies, EG a symptom of the PoE board getting the nominal 12/5/3 volt power for all the ICs to come up and be recognized, but no 56v rail from the PSUs or board to actually supply PoE to devices. maybe compare your non-poe board (that now has poe headers soldered) to an actual PoE board around the PSU backplane connector area, perhaps the non-poe mainboards don't bother with traces from where the 56v pins would be from the psu or something
So I actually used a PoE PSU from my known working unit. The PoE daughterboard IS getting 56V since it's connected by an internal cable. I'm wondering if maybe those ports are bad, or maybe there are some jumpers somewhere that need to be moved over, but that doesn't make a lot of sense from a manufacturing perspective.
It's possible. They didn't populate the connectors, but those are typically very expensive components, and two of them are through-hole (not something a PnP machine can do very easily). The only thing I can think of is that maybe they have some 0 ohm resistors somewhere, but a quick visual inspection didn't show anything. The board is MASSIVE, so I could have missed something.
My first check would probably be to make sure that the Microsemi ICs on the PoE board are all getting power; I think they're all 3.3V VDD ICs and on the top side, so that should be easy enough to probe. I had dug up the datasheets to figure out which pin is which when I was trying to troubleshoot my switch a few days ago, but I didn't save them.
Checking for movement on the I2C pins on the PoE board after that would be ideal, but I think that's going to be hard to do without a scope, which is not something people typically have at home!
edit: have you considered moving the non-functional POE board to the working POE switch for a quick test to see if it works there?
As it so happens, I DO have a 100 MHz scope and an old logic analyzer sitting next to my healing bench.
I have considered doing the PoE board swap the next chance I get to take the core network down. Might be a while for that, though.
Since they're responding to commands (I've seen the error when it doesn't, due to a missing resistor I had to replace), I'd think it's maybe port 1 or 2 (the only ones I've tried so far) that have the issue.
I'll get the others turned on and see if they're fine or broken.
The unit the PoE board came from died unexpectedly, and even with @fohdeesha 's magic JTAG box, we weren't able to bring it back.
If you see my post above, ports 1-4 are dead on my 24P switch but the others seem to be OK (I haven't tested PoE since the PoE board in my switch was fried beyond the amount of effort I was willing to put into fixing it). I assume you do have Ethernet comms (non-PoE) on ports 1 and 2 on this switch? Might be worth trying the other ports since it's pretty quick and low effort compared to other troubleshooting.
@fohdeesha I tried following your configuration on my ICX 7450 but the primary boot doesn't seem to work, any ideas? This is what I get. It seems like it is still using the 10.1.05 bootloader when it tries to load the primary, which seems wrong?
Code:
Brocade Bootloader: 10.1.05T215 (Mar 19 2015 - 16:39:20)
Validate Shmoo parameters stored in flash ..... OK
Restoring Shmoo parameters from flash .....
Running simple memory test ..... OK
ICX7450-24 Copper (POE), PVT1
SYS CPLD VER: 0x10, Released Ver: 0
Enter 'b' to stop at boot monitor: 0
bootdelay: ===
Booting image from Primary
.......................................................................................................................................................................................................................................................................................................................................Wrong Image Format for bootm command
ERROR: can't get kernel image!
could not boot from primary, no valid image; trying to boot from secondary
BOOTING image from Secondary
System initialization completed...console going online.
Copyright (c) 1996-2015 Brocade Communications Systems, Inc. All rights reserved.
UNIT 1: compiled on Jan 26 2016 at 22:35:15 labeled as SPR08030f
(31662276 bytes) from Secondary SPR08030f.bin
SW: Version 08.0.30fT213
Compressed Boot-Monitor Image size = 786944, Version:10.1.05T215 (spz10118)
Boots fine into the secondary image though...
edit: it did work prior to the update, however (edit 2: removed the log, not necessary).
edit 2: OK, fixed the problem. I had to run update_uboot immediately before running update_primary; otherwise I kept getting the old bootloader after a reset. This fixed my issue where the primary boot kept failing because it was using bootloader 10.1.05 instead of 10.1.18 with the newer firmware. The instructions could probably use this minor update.
Has anyone tried stacking ICX switches with SFP+ 10GBASE-T transceivers? If not, can someone with an existing stack and some spare 10GBASE-T transceivers give it a try?
I was planning on running the two switches I purchased in a stack but the cabling between the switches is unfortunately only CAT6; running fiber is possible, but not easy. Thankfully the existing cabling between the two locations is SSTP CAT6 and the distance shouldn't be an issue.
I purchased a 4-pack of QSFPTEK SFP+ 10GBASE-T transceivers and they all appear to work perfectly between NICs and switches of various brands. Connecting the two switches together with CAT6 seems to work perfectly as well if stacking isn't enabled or the transceivers are in non-stacking ports; no dropped packets with intense iperf testing, rated 10Gb speeds.
As soon as stacking is enabled, the ports refuse to come up, regardless of whether a stack is created or not. If the 10GBASE-T transceivers are plugged into the stacking ports on either switch and communicating perfectly, simply running "stack enable" causes them to flap and then drop out.
My current assumption is that the transceivers first connect at 1Gb (or some NBASE-T speed, since they're multi-gigabit compatible) before switching to 10Gb, and since 10Gb is required for stacking, the switch resets the port a few times before giving up. I'm hoping this is simply an incompatibility with these specific transceivers and that another make/model of transceiver will work.
If anyone has a different brand set and can take a few minutes to test for me, I would really appreciate it! Likewise, is anyone aware of any configuration changes I may be able to make to resolve this?
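The repro on my end is nothing more than this (a sketch; the prompt is a placeholder and the show command is just whichever port-status view you prefer):

Code:
ICX# configure terminal
ICX(config)# stack enable
ICX(config)# end
ICX# show interfaces brief
(the stacking ports with the 10GBASE-T modules flap a few times, then stay down)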
Seems similar to what I ran into above with a 7250 that was running the 10.1.06 bootloader (similar 2015 build date). The uboot update didn't stick until after booting into the newly flashed primary image, and the only way to boot into the new primary image was to re-flash the new bootloader after flashing the primary image. Weird chicken & egg problem.
As general advice, avoid stacking unless more ports are needed in the same IDF. I definitely wouldn't span stacked switches across multiple floors or buildings. Let L3 do its job; the smaller your L2 domains are, the better.
Interesting. I got a 7250 a few months back and have been planning to get another one for redundancy - in a stack. Are you saying it's a bad idea? I don't really need the ports.