I have some backend work to do before I do this, and I don't know when I'm going to get the time.

@RobstarUSA - Did you ever get around to trying to set up mDNS/Bonjour? I am interested in the outcome of this as well. It's quite surprising there isn't a good guide here on how to set this up properly (i.e., working reliably) on Brocade switches. Just like @nickf1227, I can't seem to get it to work; I am, however, much less knowledgeable in this space than he is. I have an ICX7250 which is working fine except for this issue of mDNS/Bonjour traffic on my network. I wouldn't mind switching to another brand just to get it to work.
From the Cisco DNA Service for Bonjour documentation:
Is there an equivalent for Brocade/Ruckus switches?
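As far as I know there's no native mDNS gateway feature on the ICX line comparable to Cisco's DNA Service for Bonjour. A common workaround is to reflect mDNS between VLANs with Avahi on a Linux box that has an interface in each VLAN. A minimal sketch (the 802.1Q sub-interface names `eth0.10` and `eth0.20` are assumptions for two VLANs):

```ini
; /etc/avahi/avahi-daemon.conf -- reflect mDNS between two VLANs.
; eth0.10 / eth0.20 are hypothetical sub-interfaces, one per VLAN.
[server]
use-ipv4=yes
use-ipv6=no
allow-interfaces=eth0.10,eth0.20

[reflector]
enable-reflector=yes
```

With `enable-reflector=yes`, Avahi repeats mDNS queries and answers it sees on one allowed interface out the others, so devices in separate VLANs can discover each other without the switch itself understanding mDNS.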
Brocade 10G-SFPP-T Compatible 10GBASE-T SFP+ Copper RJ-45 30m Transceiver Module
I'm at a loss. The front 1/3/x SFP+ ports work perfectly with fiber transceivers (Brocade and others) at 10Gb. I've tried three different 10Gb copper transceivers, including the one above, and they never power up; the link always shows as "network cable unplugged."
Could I have something configured wrong that would disable copper transceivers? Or is it the Intel X540 on the other end of the cable? If I plug the X540 into one of the standard front ports (1/1/x), everything works perfectly fine. Just at 1Gb/s...
If push comes to shove, I could probably run another fiber run, but I know the copper cables work perfectly fine at 10Gb from my old Aruba S3500s.
Of course, if I DID run another fiber cable, then I could use one of my 40Gb cards instead of the X540...
Either way, any ideas are appreciated!
Hmm, have you followed the setup and licensing guide for the 6610 to actually unlock the 10GbE ports? And followed the part of the guide that includes setting int 1/3/1 - 1/3/8 to speed 10g? I'd test a regular 10Gb fiber link to something to rule something weird out first.

The X540 doesn't use SFP+ modules; the transceivers are built in. I've used it at 10Gb with my old Aruba S3500 with no issues. And using the exact same module from the Aruba in the ICX 6610 1/3/x slots (tried multiple slots), it won't connect.
I realized I hadn't tested the cable from end to end yet, so I was hoping that would be a fail, but the cable tester came back with a pass and verified I am using the correct cable at both ends.
So do they make any QSFP transceivers for copper 10Gb? I have three of the dual port Mellanox cards gathering dust I could use if I could find a transceiver for my computers end.
Sorry, noticed it last night when i was at home. was browsing on my cell and didn't notice the link.
Hmm, have you followed the setup and licensing guide for the 6610 to actually unlock the 10GbE ports? And followed the part of the guide that includes setting int 1/3/1 - 1/3/8 to speed 10g? I'd test a regular 10Gb fiber link to something to rule something weird out first.

You are the man! As I expected, something obvious and trivial: I had enabled 10g on the front ports of unit 2 (rack), but never unit 1!
Edit: just saw in your first post that you already tested the front ports at 10GbE with fiber; I need to not browse while half asleep. Maybe see if you can find a short cable to test a link with, and/or reboot the 6610 once the copper module has been installed.
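For anyone else hitting this, the fix above can be sketched as FastIron CLI. This is a sketch based on the guide referenced earlier, not verified on every firmware version, and the key point is that on a stack each unit's front SFP+ ports (1/3/x, 2/3/x, ...) must be set separately:

```
configure terminal
interface ethernet 1/3/1 to 1/3/8
 speed-duplex 10g-full
exit
interface ethernet 2/3/1 to 2/3/8
 speed-duplex 10g-full
exit
write memory
```

Forgetting the second (or first) unit's range leaves those ports at their 1Gb default, which is exactly the "works on one unit, dead on the other" symptom described above.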
802.3ad/LACP (the underlying protocol used for LAGs) has a limited set of path-assignment algorithms; feel free to look them up on Wikipedia or your favorite other resource.

Hello all,
A question regarding LAG on an ICX6450:
I've enabled LAG on a couple of ports on the switch, and on both NICs within Proxmox (the box has 2x 1GbE connections; the remote device I'm testing with has 1x 10GbE connection). Outbound (from the 2x 1GbE box to the 1x 10GbE device), iperf with 2 threads reaches 1.8Gb/s, whereas inbound (1x 10GbE to the 2x 1GbE device) maxes out at about 1.2Gb/s. The more threads, the higher the throughput, but once I get up to 6 or more threads it plateaus at 1.8Gb/s (5 threads or fewer stay under 1.8Gb/s).
I understand that with LAG, different connections can be assigned to one NIC or the other, but with 2 threads it's definitely using both ports (1.2Gb/s vs. the single-thread speed of 0.9Gb/s) while still not maxing out both NICs. Is it routing more than half of the packets through the port that is set as the "primary"? That's what it looks like may be happening. If so, is there a way to weight the ports more equally, or configure it to be more of a round-robin back-and-forth type of thing?
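The behavior above is expected for hash-based LAG distribution: each flow is hashed to exactly one member link, so a single flow never exceeds one link's speed, and two flows can collide on the same link. A small illustrative sketch (the hash function and tuple values are made up for illustration; real switches use their own hardware hash, not SHA-256):

```python
# Sketch of flow-based LAG hashing: a flow's identifying tuple is hashed
# to pick one member link, so per-flow throughput is capped at one link
# and several flows may collide on the same link.
import hashlib

def member_link(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
                n_links: int = 2) -> int:
    """Deterministically map a flow tuple to a LAG member link index."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % n_links

# Several iperf threads between the same host pair differ only in source
# port, so they may or may not land on different links:
for sport in (40001, 40002, 40003):
    link = member_link("10.0.0.5", "10.0.0.9", sport, 5201)
    print(f"src_port {sport} -> link {link}")
```

This is why adding threads raises aggregate throughput only statistically: more flows give the hash more chances to spread across both links, but nothing forces an even split, and true per-packet round-robin isn't part of 802.3ad (it reorders packets, which TCP handles poorly).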