Well, to add another idea: I just used twist ties to secure it to the PoE board above it, which held it loosely in place. I didn't want to make any permanent modifications, and it still worked excellently.
hot glue to the top case, I’m assuming.
Lots of good ideas, thanks guys. I'm leaning towards twist ties to the ASIC heat sink. That seems like the easiest way to avoid ripping the cable out when removing the top case later on.
PFSense handles all routing with the exception of traffic between two VLANs with 10G clients. The IoT VLAN (VLAN 2), which contains my Philips Hue hub, is routed by PFSense. It's accessed primarily by my trusted WPA3-Enterprise WLAN (VLAN 11) and my limited WLAN (VLAN 9). Even with 10G traffic, PFSense had no issue routing at wire speed with jumbo frames (~10% CPU utilization). Given that it's easier to manage all the firewall rules in one place in PFSense, I'm planning to move all my 10G and 40G clients to one VLAN for simplicity.
PFSense Hardware:
Xeon E5-1220 v2 @ 3.1 GHz
8GB DDR3
Intel X520-DA2 with two SFP+ ports
An FS QSFP+ to 4x SFP+ breakout cable connects the X520-DA2 to my ICX6610-48P for WAN and LAN
LAN carries untagged traffic for the trusted LAN plus 14 tagged VLANs (switch-side config sketched below)
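For reference, the switch-side config for that kind of mixed untagged/tagged port looks roughly like this on a 6610. This is a sketch from memory, with a placeholder port (1/3/1) and only two of the tagged VLANs shown:

Code:
! tag the port into each VLAN that should ride it
vlan 9 name limited-wlan by port
 tagged ethernet 1/3/1
vlan 11 name trusted-wlan by port
 tagged ethernet 1/3/1
! dual-mode lets the same port carry untagged frames in VLAN 1 (the trusted LAN)
interface ethernet 1/3/1
 dual-mode 1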
PFSense is the default gateway (0.0.0.0/0), and static routes handle inter-VLAN traffic. Say you have VLAN2 on subnet 192.168.2.0/24 with a VE IP of 192.168.2.250, and VLAN3 on 192.168.3.0/24 with a VE IP of 192.168.3.250. If you want a client on VLAN2 to route traffic via the switch, you set up a static route on the VLAN2 client to send all traffic destined for 192.168.3.0/24 via 192.168.2.250. On the VLAN3 clients, you set up a static route to send all traffic destined for 192.168.2.0/24 via 192.168.3.250. Internet traffic continues to go through the default gateway (the PFSense router).
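A minimal sketch of both sides, assuming the ICX is running the router image (the addresses match the example above):

Code:
! on the ICX: one virtual interface (VE) per VLAN
vlan 2 name IoT by port
 router-interface ve 2
vlan 3 by port
 router-interface ve 3
interface ve 2
 ip address 192.168.2.250 255.255.255.0
interface ve 3
 ip address 192.168.3.250 255.255.255.0

And on a Linux client in VLAN2, one way to add the route:

Code:
ip route add 192.168.3.0/24 via 192.168.2.250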
To clarify, I hot glued the fan to the ASIC heat sink, not to the top case. I imagine that if you glued the fan to the top case, there would be no path for air to pass between the fan and the top case.
I finally had some time to try to get my 6610 up and running. I updated the main FW without issue, but trying to update the PoE board yields the following (see log below); it just stays stuck in this loop forever. I found another thread about this, but in that case the PoE board had a blown IC on it, which is not the case for my board. I took a quick look and everything seems OK as far as I can tell. I reseated the connectors between the PoE board and the main board, and it did not improve.
Code:
ICX6610-24P Router#inline power install-firmware stack-unit 1 tftp 192.168.1.32 ICX6610-FCX/fcx_poeplus_02.1.0.b004.fw
ICX6610-24P Router#Flash Memory Write (8192 bytes per dot) .........
tftp download successful file name = poe-fw
Sending PoE Firmware to Unit 1.
Firmware Update failed
PoE Info: Resetting module in slot 1....completed.
PoE Error: Device 0 failed to start on PoE module.
PoE Error: Device 1 failed to start on PoE module.
Resetting module in slot 1 again to recover from dev fault
PoE Info: Hard Resetting in slot 1....
PoE Info: Resetting module in slot 1....completed.
PoE Error: Device 0 failed to start on PoE module.
PoE Error: Device 1 failed to start on PoE module.
Resetting module in slot 1 again to recover from dev fault
PoE Info: Hard Resetting in slot 1....
Interestingly, I initially tried to connect my laptop to RJ45 port 1 and couldn't connect (no status light); port 2 was the same; finally I tried port 8, which works. I've only checked quickly, but ports 1-4 appear totally dead. This feels like a hardware issue, so I'll see if I can find any damage near ports 1-4. Does anyone have any idea what device 0 and device 1 correspond to? Maybe that will give me some hint as to the issue.
Good news/bad news if anyone is interested. I took another look at the board today and found the following:
What looks like some sort of liquid damage near the power connector on the PoE board
R27 and R201 are 0 ohm resistors but appear burnt; they measure open, so they are definitely toast
F45 and F20 measure open (they're fuses, so they should measure ~0 ohms)
F51 near PoE controller U6 measures open
Confirmed that ports 1-4 are dead. I'm not sure I want to take the board off the chassis to investigate whether there's any damage underneath; I think I'll pass
Aside from this, my unit has 2x Rev A power supplies and 2x fan modules. This switch is way louder than I was hoping, even with no PoE load; I can hear it through the ceiling of my mechanical room in the living room above (my house admittedly has little to no sound insulation).
More or less. Most commonly they're used for debugging and/or optional components. Debugging PCBs with a bunch of internal layers is no fun; adding a few 0 ohm resistors makes it more tolerable…
Theoretically, although I can't remember whether the non-PoE models have the PoE daughterboard header populated with a socket. You'd need PoE power supplies too, obviously. In the current market it's probably way cheaper to just buy a complete PoE model.
The non-PoE models do not have the headers on them. The headers are standard Digi-Key parts and can be found; I just desoldered them from my dead board and put them on.
The good news: after a reflash with everything soldered on and the PoE board swapped in, it IS detected as a PoE model.
The bad news: I can't get it to actually power anything on. The PoE commands (the usual ones, sketched below) work just fine, and the same cable and device power up on my official PoE unit.
I'm not sure what the issue is, but I'd be open to trying some hackery.
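For reference, the commands I mean are just the standard ones; a sketch, with an arbitrary port:

Code:
! config level: enable PoE on a port
interface ethernet 1/1/1
 inline power
! exec level: check per-port PoE status
show inline power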
Well, I guess I'm late to the party, but I just joined the ICX club. Snagged what looked like a great deal on two 7250-48P units...
... but two fans were DOA (missing fan blades; the noise and vibration were unbelievable!) and the corner of one unit was completely smashed in (it looked like it was dropped on the corner). They were sold as "tested, light scratches from regular use, and factory reset; ready for re-use!", of course. Luckily everything works fine; I'll just have to replace some fans (probably all of them as a precaution) and take the board out of the dropped unit to try to fix the smashed-up corner (front-left, near the console port). Interestingly, they were definitely not factory reset; running no password in the bootloader let me log in and take a look at the existing configuration. The eBay seller understandably didn't know much about these switches, but they certainly should have noticed the broken fans and physical damage; these clearly were not "tested."
Anyways, I had an interesting issue when flashing the updated firmware on one of the two units. Since I can't find this exact issue in the thread yet, I figured I'd document it in case someone else runs into it:
After updating uboot to 10.1.18, the switch would correctly reboot into 10.1.18. Updating the primary image with update_primary worked just fine, but the switch would reset when running boot_primary instead of simply booting into the newly flashed primary image. The strange thing was that after the unprompted reset, the bootloader would revert to 10.1.06T215 and thus couldn't boot the new primary image ("ERROR: can't get kernel image!", as reported previously in this thread).
I repeated this a few times thinking I had missed a step since the same process worked perfectly on the first unit (same existing bootloader & OS version). I've attached the text from the console if anyone wants to take a look; note the "resetting ..." just after boot_primary (lines 37-38) and subsequent boot with bootloader 10.1.06T215.
The resolution was rather simple:
1. Update uboot (to 10.1.18)
2. Reset to boot into the new bootloader
3. Run update_primary to update the primary image
4. Update uboot again
5. Boot the primary image and continue flashing the UFI image as you normally would
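Condensed into a rough console sketch (the filenames are placeholders for whatever images you're flashing, and the tftp environment is assumed to be set up already):

Code:
# at the u-boot prompt, with ipaddr/serverip already set for tftp
update_uboot uboot-10.1.18.bin    # placeholder filename for the 10.1.18 bootloader
reset                             # boots into the new bootloader
update_primary primary-image.bin  # placeholder filename for the OS image
update_uboot uboot-10.1.18.bin    # again, since update_primary reverted it on this unit
boot_primary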
Out of curiosity, I played around between steps #1 and #3 a few times. Resetting and power cycling after updating uboot but before running update_primary wouldn't revert the bootloader; it would continue to boot into the newly flashed 10.1.18. It was only after running update_primary that any reset thereafter reverted the bootloader to 10.1.06T215.
Both switches originally had bootloader 10.1.06T215 and were running 8.0.30, but strangely this only happened on one of the two. The indicated build date for the 10.1.06T215 bootloader is December 14, 2015; running 8.0.30, I would have assumed they'd be using a newer bootloader, but I guess not?
Now that the UFI image has been flashed, everything seems to be operating normally. Is there a backup/failsafe version of the bootloader stored in flash somewhere that's used in the event that the primary bootloader is corrupt or otherwise broken?
I bought a 6610-48-E with a single PSU and fan, then looked to get another PSU/fan for redundancy... but it was basically the same price as a whole switch with a single PSU/fan. LOL.
I did order a spare set of extra power supplies as well.
Between the 2 switches (one with 2 PSUs and 2 fans, the other a spare with 2 PSUs and no fans) plus the 2 extra PSUs, I'm now in $260.
Then I found a deal on a PoE version with 2 PSUs + 2 fans (I need a solution for PoE as well), and it was $150 shipped. You guys are bankrupting me.
Now I'm trying to decide if I should do a stack. I need to read up on exactly what that requires (rough sketch below).
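From a first skim of the stacking docs, the gist appears to be: cable the 40G stacking ports between the units, enable stacking, and let secure-setup discover the second switch. An untested sketch:

Code:
! on the unit that should become the active controller (config level)
stack enable
! then from exec level, discover the other unit and assign stack IDs
stack secure-setup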
Also: for those of you doing MST for spanning tree, VLANs 3963-4096 on the Brocades can NOT be in any MST instance other than 0. This was a surprise and disappointing.
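In FastIron terms (syntax from memory, so treat this as a sketch): a normal mapping is accepted, while putting a reserved-range VLAN into a non-zero instance is rejected.

Code:
mstp name HOME
mstp revision 1
mstp instance 1 vlan 100 to 200
! rejected: VLANs in the reserved range must stay in instance 0 (the CIST)
mstp instance 1 vlan 3970
mstp start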
That sounds like the same behavior I've seen when a legit PoE model is powered up with non-PoE power supplies, i.e. a symptom of the PoE board getting the nominal 12/5/3 volt power for all the ICs to come up and be recognized, but no 56V rail from the PSUs or the board to actually supply PoE to devices. Maybe compare your non-PoE board (that now has PoE headers soldered) to an actual PoE board around the PSU backplane connector area; perhaps the non-PoE mainboards don't bother with traces from where the 56V pins would be on the PSU, or something.
So I actually used a PoE PSU from my known working unit, and the PoE daughterboard IS getting 56V since it's connected by an internal cable. I'm wondering if maybe those ports are bad, or maybe there are some jumpers somewhere that need to be moved over, but that doesn't make a lot of sense from a manufacturing perspective.