
Brocade ICX Series (cheap & powerful 10gbE/40gbE switching)

fohdeesha

Kaini Industries
Nov 20, 2016
2,577
2,775
113
31
fohdeesha.com
@kpfleming thanks! Trialing it right now. Got one RPi and the VIP set up, with the IP helper pointed only at the VIP. The second one will be the proof of the pudding later this week. Fingers crossed. Thanks for all the help!

there's no need to dick around with VRRP, fastiron (and almost every other enterprise switch) supports forwarding dhcp broadcasts to multiple IPs/DHCP servers directly, then the HA component of ISC-DHCP handles which one is active (replies to the forwarded broadcasts) and which one is standby (ignores them):

Code:
interface ve 10
 ip address 192.168.1.1 255.255.255.0
 ip helper-address 1 172.16.110.2
 ip helper-address 2 172.16.110.3
 ipv6 address xxxx::1/64
 ipv6 enable
 ipv6 dhcp-relay destination xxxx::2
 ipv6 dhcp-relay destination xxxx::3
 ipv6 dhcp-relay include-options interface-id remote-id
 ipv6 nd managed-config-flag
If you're taking the time to set this up from scratch as well, I would highly recommend using ISC Kea, which is replacing ISC-DHCP. You can also then set up ISC Stork, which is a nice web UI for Kea clusters.
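For what it's worth, a minimal Kea hot-standby pair looks roughly like this in kea-dhcp4.conf (a sketch, not a full config: the server names and hook library path are placeholders and vary by distro, and the URLs point at each server's HA listener, reusing the helper addresses from the switch config above):

```json
{
  "Dhcp4": {
    "hooks-libraries": [ {
      "library": "/usr/lib/kea/hooks/libdhcp_ha.so",
      "parameters": {
        "high-availability": [ {
          "this-server-name": "dhcp1",
          "mode": "hot-standby",
          "peers": [
            { "name": "dhcp1", "url": "http://172.16.110.2:8000/", "role": "primary" },
            { "name": "dhcp2", "url": "http://172.16.110.3:8000/", "role": "standby" }
          ]
        } ]
      }
    } ]
  }
}
```

The standby mirrors this file with its own `this-server-name`; leases get replicated between the peers so the standby can answer if the primary drops.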

 

seatrope

New Member
Oct 5, 2018
27
10
3
Maine
www.ychng.com
there's no need to dick around with VRRP, fastiron (and almost every other enterprise switch) supports forwarding dhcp broadcasts to multiple IPs/DHCP servers directly, then the HA component of ISC-DHCP handles which one is active (replies to the forwarded broadcasts) and which one is standby (ignores them):

Code:
interface ve 10
ip address 192.168.1.1 255.255.255.0
ip helper-address 1 172.16.110.2
ip helper-address 2 172.16.110.3
ipv6 address xxxx::1/64
ipv6 enable
ipv6 dhcp-relay destination xxxx::2
ipv6 dhcp-relay destination xxxx::3
ipv6 dhcp-relay include-options interface-id remote-id
ipv6 nd managed-config-flag
If you're taking the time to set this up from scratch as well, I would highly recommend using ISC Kea, which is replacing ISC-DHCP. You can also then set up ISC Stork, which is a nice web UI for Kea clusters.
Thanks @fohdeesha Jon, if I wasn't so fixated on having an integrated interface between pihole DNS and DHCP I would have gone down this road for sure. Looked into building Kea DHCP for the RPi and it was not straightforward (for me) either. I don't want to depend on a VM for this. If VRRP/keepalived doesn't work I'll bite the bullet and go either Kea or ISC.
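For anyone following the same keepalived route, a minimal VRRP instance for the pihole VIP might look something like this (a sketch: interface name, VRID, priority, and the VIP address itself are placeholders for your setup; the second Pi runs the same config with `state BACKUP` and a lower priority):

```
# /etc/keepalived/keepalived.conf on the primary pihole
vrrp_instance PIHOLE {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 150
    advert_int 1
    virtual_ipaddress {
        192.168.10.5/24
    }
}
```

The switch's `ip helper-address` then points only at the VIP, and keepalived moves it to whichever Pi is alive.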
 
  • Like
Reactions: fohdeesha

beren

New Member
Oct 25, 2018
16
2
3
Hey @fohdeesha great guide! I just got my 6610. I was thinking of spelunking for old licenses on the flash just because, and wondered if upping the baudrate in the bootloader would make it take less time. Anyone ever tried that?
 

fohdeesha

Kaini Industries
Nov 20, 2016
2,577
2,775
113
31
fohdeesha.com
Hey @fohdeesha great guide! I just got my 6610. I was thinking of spelunking for old licenses on the flash just because, and wondered if upping the baudrate in the bootloader would make it take less time. Anyone ever tried that?
if you can figure out the hidden var to actually do that go for it lmao
 

beren

New Member
Oct 25, 2018
16
2
3
if you can figure out the hidden var to actually do that go for it lmao
damn you're right. just got home with my serial cable. I thought you were the hacker! :p

BTW what's the current recommended JTAG? Seems BDI2000 doesn't exist anymore.
 

Vatharian

New Member
Aug 25, 2022
1
0
1
Poland
Hello!
I am the reddit person with said failed switch. I have just registered to reply here.

Original post:

A user on Reddit posted an interesting problem with his ICX6610-24 (non PoE). He said his switch idles at 180-200W and when under load, can hit 400W. I told him this makes no sense given that the spec sheet indicates it only requires a single 250W power supply. The specs say the second power supply is optional, for redundancy. Also, this thread indicates it should idle at 80W or so.
[...]
Based on this information, does the switch appear to be genuine? Why would the power supply show a blank model number with no revision?
these switches were never counterfeited (that I'm aware of), so it's certainly genuine, but it sounds like it had a pretty serious fault to begin with. if a 250w PSU is pulling 400w, I'm assuming the PSU(s) themselves had a pretty bad fault, which could also explain why their manuf data EEPROM couldn't be read (reporting all FFF)
I assume you are right on every count. They HAVE been tampered with.

also, if his switches were really pulling 200w or above like he states, the fans would NOT be on fan speed 1 still as his output indicates - that is an insane amount of heat for these fans to exhaust out of 1ru, they would have ramped up to fan speed 2. How is he measuring power draw?
I asked him the same question. He said he used this meter: https://www.amazon.com/Digital-Wattmeter-Consumption-Frequency-Electricity/dp/B0828QWQZP

He noted he's using a 230V Schuko plug (Poland).

It didn’t make sense to me that the switch would really pull that much power so I told him to test by running with a single power supply. He said it wouldn’t boot in that configuration.
I linked a power meter very closely resembling the one I have, as I couldn't find my Poland/Germany-only model on US Amazon. I also have a DIN rail-mounted current meter.

He says he has three of the 250W power supplies. Could all be defective?
if he's tried three separate supplies then there's a 99% chance the failure mode is inside the switch itself. especially if it only boots with two power supplies
So, it sounds like the switch itself is defective, and his best bet is simply to buy a new ICX6610-24. Is there any chance old firmware or a bad configuration could cause these issues? He’s looking at paying over $200 in shipping to buy a new switch from the US due to limited availability in Poland.
if he's tried 3 different PSUs and the switch always shows this behavior, I'm not sure what else it could be. No way a config or firmware issue is going to make it draw 3x its rated power
After reading the replies out there and looking into all the photos and other threads I could find of the switch and its insides, I came to the conclusion that I either "scored" an engineering or qualification sample, or got the result of some not-exactly-professional-but-still-somewhat-competent person frankensteining this device from a scrapped or stolen lot.

It could be either: since it's impossible to modify the fan curve, and devices like this usually drop into "survival mode" (fan speed to max) when some sensors are off, it behaved surprisingly well. That suggests either someone found a way to alter it or fool the sensors, or it came with pre-release or customer-specific firmware.

I should have opened it right after receiving it, but I only did a cursory look to check there were no loose components or screws, threw cables in, and it worked, so I left it as it was. The only thing I did was swap the PSUs around. The switch and the extra PSU were bought from the same place (an eBay auction, from a private person as far as I remember).

Using this photo:
as reference, I found:

- the switch has no serial number, neither on the device itself nor on the motherboard, and all stickers were removed. Can't say anything about the firmware now, but you saw the zeros. The only markings I could find are a laser-etched "AR2054-01-011" between the CPU and the stack connector card, and "Brocade MV1194V-0 / AB 1 026-3" under the card;
- the memory stick was Kapton-taped to the slot from all sides and all over; it's a Smart sg57a648bro535y1sj EP2-5300c-555-13-zz. This is the only thing in the whole device with an intact serial number, but I couldn't find anything about this particular stick;
- all PSUs had a sticker with revision "S5" where there would originally be A, B or C; underneath it the original rev has been scratched off. The stickers with QR code and s/n are missing. They all bear marks of being opened multiple times (lots of scratches around the screws, and the tabs were clearly abused). I missed this because I used to handle device scrapping at my workplace for a while, so I've become desensitized to scratches;
- I went over the motherboard with good light and found solder flux residue around almost all power components;
- almost all of the electrolytic capacitors are random brands;
- the boot flash chip has "fused!" handwritten on it in marker pen, with flux residue around its legs. This probably explains the bricking and the null s/n;
- the battery socket, after removing the battery, shows signs of cleaned-up corrosion (I don't recall ever seeing a lithium battery leak!). The current battery reads 2.9 V;
- headers J2, J10 and U6 had clearly been removed;
- the places where the PoE headers go in PoE-equipped models had clearly been soldered on and cleaned up;
- one of the fan tray connectors on the board had mangled pins, and its mounting screw was held in place by a copious amount of Loctite or similar glue. I had to wrestle it off. The fan modules look okay; there are traces of s/n stickers being removed from them on the inside between the fans.

One thing that stands out to me now is that I never noticed the switch kicking the fans up to speeds anywhere close to those heard during the boot sequence; they did go up, but not by much. At the same time it really did pull that much power off the wall. At idle the exhaust was moderately warm to very warm, but I wouldn't describe it as hot, like for example a Dell R640 going full tilt can get. With the maximum traffic I could put on it, it got really, really hot, enough to make keeping a hand in the airstream very uncomfortable to painful.

I should also note that I misread the specifications! I assumed that both PSUs work in unison and share power, and that to boot from only one I would need the higher-tier 750 or 1000 W ones. I am used to moving around 2 kW+ switches at work, so that is really why I paid no mind to my device's inability to turn on with one PSU, or to its power consumption.

With all of that, and the switch now bricked, even if I managed to find someone actually competent to repair it, I wouldn't exactly feel safe putting it back into my homelab. A little late perhaps, but oh well.

If anything, I consider this a warning not to blindly trust enterprise gear. It never dawned on me that someone would just go over a device like this to fix it up. It's not that big a blow financially; these switches routinely go for under $100, and I have other 10G gear. 40G was a very cool addition, but I can live without it.
 

seatrope

New Member
Oct 5, 2018
27
10
3
Maine
www.ychng.com
Yes, that configuration should work, although you will need to be certain to configure the DHCP server instance to use the VIP as their identity address, so that unicast replies from them will use the proper source address.
Seems to be working well now: HA piholes/DHCP/unbound DNS with a keepalived VIP. The 6610 IP helper is pointed only at the VIP address, and there are no other hosts in the small subnet the piholes are in.

Tested failover and worked well.

Thanks for the help, all!
 

LemonheadST

New Member
Aug 25, 2022
5
2
3
Hello,

I'm having an odd problem with the rear 10gb ports on my ICX6610 (1/2/2-1/2/5 & 1/2/7-1/2/10). I'm attempting to connect 4 of these ports to a host (using a breakout cable), and bonding them using 802.3ad LACP.

On a fresh boot of the switch I can get this fully working on either QSFP+ port, all ports are up, and it works for a time. However, after several reboots of the other device (a Honeycomb LX2K), most or all of the 10gbe interfaces on the QSFP+ port just stop passing traffic, 'show lag' reports that they are LACP-BLOCKED.

The problem persists when I remove the ports from the trunk - they just won't pass traffic anymore even as standalone ports. Yes, I did re-enable them after running 'no lag' :). I can connect the port to a completely different interface or machine and it still won't pass traffic. I even tried a different breakout cable - no luck there either. There is nothing unusual about the interface when running 'show interface' - it shows as enabled at 10GbE and in a FORWARDING state.
I can move the QSFP+ module to the other 4x10gb port and those will work fine for a time, but eventually these ports get the same issue.
So far the only 'fix' for the issue is to completely reload the switch. After a reload, all of the ports function normally, for a time.
Has anyone run into a similar issue with the rear 10gbe ports? If so, is there a way to prevent this? Or at least get them working without a switch reload?

Code:
10GigabitEthernet 1/2/7 is up, line protocol is up
Port up for 1 hour(s) 18 minute(s) 37 second(s)
Hardware is 10GigabitEthernet , address is [removed]
Configured speed 10Gbit, actual 10Gbit, configured duplex fdx, actual fdx
Configured mdi mode AUTO, actual none
Member of 8 L2 VLANs, port is dual mode in Vlan 1, port state is FORWARDING
BPDU guard is Disabled, ROOT protect is Disabled, Designated protect is Disabled
Link Error Dampening is Disabled
STP configured to ON, priority is level0, mac-learning is enabled
Openflow is Disabled, Openflow Hybrid mode is Disabled, Flow Control is config enabled, oper enabled, negotiation disabled
Mirror disabled, Monitor disabled
Mac-notification is disabled
Not member of any active trunks
Not member of any configured trunks
No port name
MTU 10200 bytes, encapsulation ethernet
300 second input rate: 0 bits/sec, 0 packets/sec, 0.00% utilization
300 second output rate: 5080 bits/sec, 5 packets/sec, 0.00% utilization
7093 packets input, 851519 bytes, 0 no buffer
Received 824 broadcasts, 6246 multicasts, 23 unicasts
0 input errors, 0 CRC, 0 frame, 0 ignored
0 runts, 0 giants
330955 packets output, 36155642 bytes, 0 underruns
Transmitted 79531 broadcasts, 248621 multicasts, 2803 unicasts
0 output errors, 0 collisions
Relay Agent Information option: Disabled
 

fohdeesha

Kaini Industries
Nov 20, 2016
2,577
2,775
113
31
fohdeesha.com
damn you're right. just got home with my serial cable. I thought you were the hacker! :p

BTW what's the current recommended JTAG? Seems BDI2000 doesn't exist anymore.
A while back I scoured the bootloader binary and couldn't find any evidence whatsoever of an adjustable baudrate, so I'm pretty sure it's stuck at 9600. The baud is easily changeable, however, on the newer switches that run u-boot (because it's just u-boot; use the baudrate env variable)
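On those newer u-boot based models, the change would look something like this at the boot prompt (a sketch using the standard u-boot environment commands; per the above, the ICX6610's own bootloader exposes no such variable):

```
=> printenv baudrate        # standard u-boot variable, typically 9600 by default
=> setenv baudrate 115200   # u-boot prompts you to switch your terminal to the new rate and press ENTER
=> saveenv                  # persist the environment so the setting survives a reset
```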

as for jtag, that's your only option. Not many PowerPC JTAGs existed for this particular generation, the bdi2000 and the bdi3000 were the only real models and the bdi3000 is even more rare/expensive. There were also a couple Freescale Codewarrior models but they're useless without the (very expensive) CW software package
 
  • Like
Reactions: nedimzukic2

fohdeesha

Kaini Industries
Nov 20, 2016
2,577
2,775
113
31
fohdeesha.com
Hello!
I am the reddit person with said failed switch. I have just registered to reply here.
[...]
holy shit what the hell lol. Can you post pictures of this monstrosity? Also your assumption is correct, they only need one PSU to run. In fact, that's how 90% of them were configured/sold. You can use any wattage too, you can boot a PoE version with the non-PoE small 250w supply (PoE just won't be enabled)
 

beren

New Member
Oct 25, 2018
16
2
3
Sorry if this was answered in this monster thread or somewhere else, but I just got my 6610 with dual rev A power bricks. Would it be quieter to remove one and block it vs leaving it unplugged?

Also, I was thinking that using the 40G breakout with a DAC might be simpler than the front ports; would I lose any diag info going with a non-Brocade-programmed cable? I know what you lose with optics, but I never found DAC info.
 

heromode

Active Member
May 25, 2020
139
83
28
So this was with 3x Arctic 4028-6K fans installed:

Code:
show chassis
The stack unit 1 chassis info:

Power supply 1 (NA - AC - PoE) present, status ok
Power supply 2 not present
Power supply 3 not present

Fan 1 ok, speed (auto): [[1]]<->2
Fan 2 ok, speed (auto): [[1]]<->2
Fan 3 ok, speed (auto): [[1]]<->2

Fan controlled temperature: 63.5 deg-C

Fan speed switching temperature thresholds:
                Speed 1: NM<----->65       deg-C
                Speed 2:       56<-----> 79 deg-C (shutdown)

Sensor B Temperature Readings:
        Current temperature : 56.5 deg-C
Sensor A Temperature Readings:
        Current temperature : 63.5 deg-C
        Warning level.......: 69.0 deg-C
        Shutdown level......: 79.0 deg-C
Obviously the 6K RPM Arctics are not sufficient. At 4.5 volts I presume they are spinning at around 2250 RPM ((4.5/12) * 6000). That's not much for a 40mm fan, and one can barely feel any airflow.
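That estimate assumes fan RPM scales linearly with supply voltage, which is only a rough model (real fans aren't perfectly linear and stall below a minimum voltage), but as arithmetic it checks out:

```python
def fan_rpm(volts, rated_rpm=6000, rated_volts=12.0):
    """Rough linear estimate of fan speed at a reduced supply voltage."""
    return rated_rpm * volts / rated_volts

print(fan_rpm(4.5))  # -> 2250.0, i.e. 37.5% of the rated 6000 RPM
```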

So this is the result of my next try:

fan_side.jpginternal_fans.jpgfan_top.jpg

3 cheap chinese wide voltage range (3V - 12V) fans connected to the fan headers using split cables. The fans are 40mm x 40mm x 10mm. (the distance from the top of the ASIC heatsinks to the top of the switch is exactly 20mm, so 10mm thick fans are the only option)

The fans are easy to attach without any screws, using two small zip ties that fit between the heatsink fins (which prevents the zip ties from sliding out over the edge), locking them with just the lock part from two additional zip ties. At 4.5 volts the fans are completely silent and can't be heard outside the case.

Clearly, instead of the heat concentrating around the ASICs, it's now spread evenly across the whole case. Note the results below are from a room at 29 degrees Celsius ambient (it's been the hottest August in 60 years here):

Code:
show chassis
The stack unit 1 chassis info:

Power supply 1 (NA - AC - PoE) present, status ok
Power supply 2 not present
Power supply 3 not present

Fan 1 ok, speed (auto): [[1]]<->2
Fan 2 ok, speed (auto): [[1]]<->2
Fan 3 ok, speed (auto): [[1]]<->2

Fan controlled temperature: 57.5 deg-C

Fan speed switching temperature thresholds:
                Speed 1: NM<----->65       deg-C
                Speed 2:       56<-----> 79 deg-C (shutdown)

Sensor B Temperature Readings:
        Current temperature : 57.5 deg-C
Sensor A Temperature Readings:
        Current temperature : 57.5 deg-C
        Warning level.......: 69.0 deg-C
        Shutdown level......: 79.0 deg-C
All the readings are exactly the same, 57.5 degrees. The upside of this setup is the switch is silent, and no ASIC is close to triggering a higher speed even in a 29 degree room. And should the temp go above 65 degrees, all 6 fans will spin up.

Basically, since the Arctics move so little air out the back, but the heat is spread evenly across the case, the top cover now effectively works as a heatsink. It also works as a radiator, heating my small computer room. That's not an issue during winter, but during summer it is.

I'm sure it's been mentioned in the thread many times, and I plan to go through all 400 pages once I start configuring the switch, but please remind me:

Can I remove the PoE board and reduce power consumption/heat?
Can I reduce power consumption by disabling ports?

Initially I tried to order the 24-port model from that Dutch dealer in the Great Deals forum, but he had just sold the last one to a guy paying with PayPal while my bank transfer was still pending, so I got a 48-port version for the same price. I'd still love to be able to shave about 10W from idle consumption on this.
 
  • Like
Reactions: nedimzukic2

IceBrew

New Member
Aug 29, 2022
2
0
1
Hi all,

I've been following this post for a few years at this point waiting for a good point to dive in, mainly concerned with power consumption but think I'm ready to bite the bullet. Because we're slowly creeping towards 2023 though I have to ask:

Are these switches still a good value buy in 2022 / 2023?

Of course you're getting up to 48 PoE ports with some 10G ports thrown in for $100 delivered in some cases, but from what I can see the 6450 in particular went EOL in 2018, so we're buying a 5+ year old switch. My main concern is the actual life expectancy of network switches: searching online just gives the usual "replace your gear every 5 years" advice, with no indication of when they actually start failing or after how much power-on time.

Can I buy one now in the hopes it'll last another 5 years? My use case is shrinking by the day, to the point where for now I could get away with fewer than 10 ports, and it'll be a fair few years before that number needs to increase. So I want to be happy with the investment rather than spending the extra $100 on a brand-new switch with admittedly fewer ports, but a much longer life expectancy, lower power draw, and newer features (though admittedly no 10G). What are people's thoughts? Thanks!
 

i386

Well-Known Member
Mar 18, 2016
3,373
1,124
113
33
Germany
Are these switches still a good value buy in 2022 / 2023?
I'm struggling with the same question. But after separating the "want"s from the "need"s, the answer is yes.
- high quality
- 1GbE is (still) fast enough to stream multiple 4K streams (I mean remuxed UHD disc quality, not the Netflix stuff :p)
- good documentation, both the official docs and the unofficial ones by fohdeesha (I always mistype that name ._.) and other forum members
- free firmware updates (you need a Ruckus account)
 
  • Like
Reactions: IceBrew

IceBrew

New Member
Aug 29, 2022
2
0
1
I'm struggling with the same question. But after separating "want"s and "need"s the answer is yes.
- high quality
- 1gbe is (still) fast enough to stream multiple 4k streams (I mean remuxed UHD disc quality, not the netflix stuff :p)
- good documentation, both the official docs and the unofficial ones by fohdeesha (I always mistype that name ._.) and other forum members
- free firmware updates (you need a ruckus account)
Thanks for the thoughts, it's very hard to argue with $100 that's for sure and if it lasts 5 years I'd consider that amazing value so long as the running costs aren't drastically higher. Hopefully the one I order doesn't end up being the beat up one that's been abused.

And for sure, documentation is king in this thread. I will be attempting to swap in quieter fans, as it'll go in a rack under a desk (I'm attempting to rack-mount my main rig and my old rig (as a server) too), so hopefully the noise and heat are palatable.
 

safrax

New Member
Jun 21, 2020
8
1
3
So FiOS finally turned on IPv6 to my house within the last few weeks. I've been trying to get it to propagate throughout my network but I'm a bit stumped. I have the following (simplified) setup:

Cisco network diagram.png
I have an OPNSense firewall sitting between a FiOS ONT and my ICX-7250. The ICX-7250 and Firewall are configured with a 2-port LAG acting as a transit network. I have a few different VLANs on the 7250, VLAN1-Deprecated, VLAN99-Transit, VLAN101-Data, VLAN102-Servers, VLAN103-WiFi, VLAN104-IoT. The ICX-7250 is currently acting as the DHCP Server for all VLANs except 1 and 99. I have set the following IPv6 related configuration options in the ICX-7250:

Code:
ver 08.0.95T213
ipv6 dhcp-relay accept-broadcast
ipv6 unicast-routing
ipv6 router ospf
ipv6 neighbor inspection vlan 1
ipv6 neighbor inspection vlan 99
ipv6 neighbor inspection vlan 101 to 104
interface lag 1
dhcp6 snooping trust
ipv6-neighbor inspection trust


The OPNSense firewall has a rule to allow DHCP traffic to pass to the upstream DHCP server. I've also configured the appropriate IPv6 Tracking options for the LAG interface in OPNSense.

I'm seeing OPNSense make router advertisements on the LAG but a server downstream sitting on VLAN102 isn't seeing them. At this point I'm not sure what else I need to do. I read through the documentation on Brocade's website and to some extent it was over my head. I'm a sysadmin that does Linux, I'm not a network person.
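One gap I'd look at, if the 7250 is the router for VLAN 102: router advertisements from OPNSense on the transit LAG won't be forwarded into the other VLANs, so each routed VE needs its own IPv6 address and ND/relay settings. A sketch along the lines of the ICX6610 example earlier in the thread (the prefix and relay destination here are placeholders for your actual addressing):

```
interface ve 102
 ipv6 address 2001:db8:102::1/64
 ipv6 enable
 ipv6 dhcp-relay destination 2001:db8:99::2
 ipv6 nd managed-config-flag
```

With that in place the 7250 itself originates the RAs on VLAN 102 and relays DHCPv6 upstream, rather than expecting OPNSense's advertisements to cross the transit network.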

As for why I went with a transit network over letting the OPNSense firewall do all the routing: performance. The OPNSense machine is more than capable of handling FiOS' ~1Gbps speeds, but I'm fairly certain it'd fall over if I threw a 10Gbps NIC in it and asked it to route as well. If there's a way to get rid of the transit network, I'm all ears, I feel like it causes me more problems than it's worth.
 

safrax

New Member
Jun 21, 2020
8
1
3
Part of why I'm doing this is a learning exercise. The other part is I'm trying to figure out whether or not I can solve some issues I've been experiencing with NAT and some games. I don't actually have very high hopes that the games support IPv6, but on the off chance they do it could alleviate some spouse related annoyances.

That said, thank you for the article. It was informative, and I do agree IPv6 is fairly insane. It's the best we currently have to help alleviate some of the IPv4-related issues, and it would have been nice if the IETF had designed something else, with lessons learned, 10-15 years ago.
 
  • Like
Reactions: heromode