Drag to reposition cover

Brocade ICX Series (cheap & powerful 10gbE/40gbE switching)

Notice: Page may contain affiliate links for which we may earn a small commission through services like Amazon Affiliates or Skimlinks.

Ryan Haver

New Member
Apr 7, 2020
8
1
3
I am seeing some really weird behavior when using two Mellanox MCX4131A-GCAT ConnectX-4 adapters connected to my Brocade ICX6610.

100% Reproducible behavior
  1. When using ConnectX-4 adapters plugged into both QSFP+ ports (1/2/1 and 1/2/6) the ConnectX-4 adapters exhibit a massive amount of packet loss.
  2. When one QSFP+ port is disabled then the adapter plugged into the other port no longer experiences packet loss.
  3. When using a Mellanox MCX313A-BCBT ConnectX-3 in one port and a Mellanox MCX4131A-GCAT ConnectX-4 in the other port there are no issues with packet loss.
Any help here is appreciated, as this doesn't make any sense to me. For the sake of troubleshooting, I'm going to test a turning off a few features within the ConnectX-4 adapter firmware like SR-IOV.


Edit: After swapping back to using both of the ConnectX-4 adapters this is no longer an issue....really weird.
 
Last edited:

Jason Antes

Active Member
Feb 28, 2020
226
76
28
Twin Cities
Those are optimized for high static pressure... they draw a lot of power and make A LOT of noise at full tilt. Watch out for your ears ;)

EDIT: actually, it's written on the label of the rear one: 1.82A @12V, or if you prefer ~20ishW EACH!
Yeah, these are for older Gen7ish Proliants. I'd look for newer Gen9/10 as they are much quieter. Of course, they only spin as fast as you allow them too by the power you feed them. Not hat it makes it any better, but for the power you are feeding more than 1 fan. ;)

I had gotten some of these as I was collecting them for a friend who wanted to use them to power his RC planes. I had more than he wanted at the time so I kept these.
 

tommybackeast

Active Member
Jun 10, 2018
286
105
43
Nope, nothing by default. It does support lldp and cdp for device discovery, and with those on your cable modem will see the mac address of your switch and get confused, which I'm sure is what happened with your nortel (cable modem should only be seeing 1 mac, the mac of your router). neither of these things are on by default however so after following my guide (and make a separate vlan for wan) it will work out of the box
ICX7250 firmware question :

I see your download page continues to list 8080 (for ICX7250). - this is the firmware my ICX 7250-48p is using.

I also have a Ruckus r510 AP: and from some reading, seems there's a single pane of glass for ICX7250 on 8090 firmware used in conjuction with the Ruckus r510AP.

Gently asking : are there problems using 8090 firmware with ICX 7250 ? thanks ...
 

tommybackeast

Active Member
Jun 10, 2018
286
105
43
just switch the fan direction around and monitor temps, I have a feeling it'll be fine. If I remember right somewhere here with a 7250 swapped the fann direction and actually saw slightly cooler temps but I could be misremembering
FYI: my ICX 7250-48p is rear mounted in an enclosed server rack; and I never reversed the fans and temps are just fine. I'm only using two 1G ports for POE; and am using 6 10GB ports + 20-25? 1GB non-POE ports
 

EngineerNate

Member
Jun 3, 2017
68
16
8
35
Seller sent me a non-PoE unit. :mad:

My ToR switch needs to be PoE but with a partial refund, the price was good enough with the rthat I'll hang onto it for now to either use elsewhere or sell, but I'm back on the market for myself.

I think I'll look for a 6610-24p... I didn't realize when I first purchased that the two non-breakout capable QSFP+ ports could be used to hook up to hypervisor servers at 40G. That's worth the ~30w to me.
 

hmw

Active Member
Apr 29, 2019
605
242
43
I think I'll look for a 6610-24p... I didn't realize when I first purchased that the two non-breakout capable QSFP+ ports could be used to hook up to hypervisor servers at 40G. That's worth the ~30w to me.
Yep - and it works beautifully. I have the ICX6610-24p connected at 40 GbE via a ConnectX-3 in my ESXi server. Again thanks to this forum, found a pair of 3m NetApp DAC cables for $9

Waiting for ConnectX-4 LX prices to go down so I can use SR-IOV. And maybe the next step is to build out vSAN ...
 

EngineerNate

Member
Jun 3, 2017
68
16
8
35
What exactly is the difference between PoE and non-PoE in terms of internals? Is it just the PSUs? Or there's an internal PoE board as well?
I'd guess there's a poe board. I know there is in some of the other ICX servers from reading this thread.

In better news, I found one of those sub $200 7150-C12Ps so my upstairs networking closet is sorted.

Can't decide now whether to homerun the upstairs access point or run it off the switch. The switch was originally going to be for other stuff, but if I have 10G up to the closet up there... Not a lot of downside to power it locally vs from the basement and it would be one less cat6 run to clog up the conduit.
 

safrax

New Member
Jun 21, 2020
8
1
3
Is this normal for an ICX7250-24 with stock fan sitting in a room at about ~24C?

Fan controlled temperature: 64.5 deg-C

Fan speed switching temperature thresholds:
Speed 1: NM<----->98 deg-C
Speed 2: 67<----->105 deg-C (shutdown)

Fan 1 Air Flow Direction: Front to Back
Slot 1 Current Temperature: 65.0 deg-C (Sensor 1)
Slot 2 Current Temperature: NA
Warning level.......: 100.0 deg-C
Shutdown level......: 105.0 deg-C


My ICX 6610-48P sitting in my un-airconditioned garage is idling at about ~71C.
 

fohdeesha

Kaini Industries
Nov 20, 2016
2,797
3,194
113
33
fohdeesha.com
Is this normal for an ICX7250-24 with stock fan sitting in a room at about ~24C?

Fan controlled temperature: 64.5 deg-C

Fan speed switching temperature thresholds:
Speed 1: NM<----->98 deg-C
Speed 2: 67<----->105 deg-C (shutdown)

Fan 1 Air Flow Direction: Front to Back
Slot 1 Current Temperature: 65.0 deg-C (Sensor 1)
Slot 2 Current Temperature: NA
Warning level.......: 100.0 deg-C
Shutdown level......: 105.0 deg-C


My ICX 6610-48P sitting in my un-airconditioned garage is idling at about ~71C.
very normal
 

infoMatt

Active Member
Apr 16, 2019
222
100
43
Are there any concerns I should be aware of if I move it out to my garage which at the top end is about ~29C,90F and somewhat humid?
If you put the unit in an environment 5 degrees hotter, its internal temp would also rise by 5°, reaching 70°C, giving you another 30°C or so of thermal headroom.
No worries, those are "older" process silicon, they are meant to run "hot". As you've seen, it wouldn't ramp up the fans until 98°C.

As for the humidity, those aren't exactly rugged units, but if it isn't condensing-on-the-walls damp, it should survive, but it could suffer of premature oxidation of the contacts inside the RJ45 or on the metal case. If it's human-bearable, it shouldn't be a problem.
 

EngineerNate

Member
Jun 3, 2017
68
16
8
35
Just scored a 6610-24P -I with two rev B power supplies, two fans, rack ears, and power cables included. Price was a bit above average but I figure for the full compliment of power supplies/fans, rack ears, and the PSUs being rev B it was worth it.

What QSFP+ pci-e cards is everyone using for the 40G connections? Any compatibility issues?
 

itronin

Well-Known Member
Nov 24, 2018
1,281
850
113
Denver, Colorado
What QSFP+ pci-e cards is everyone using for the 40G connections? Any compatibility issues?
I don't know about "everyone" but the very first post in this thread towards the bottom has a recommendation for an OEM HPE ethernet/IB card that can be flashed to stock Mellanox ConnectX 3 40Gbe. I have two in use and they work great and were inexpensive. I can verify the card works in Windows, FreeNAS, and ESXI 6.5 and 6.7.

Netapp QSFP cables were fine for short run interconnects where super flexible cable bends are not required. They are also inexpensive.
 
Last edited:
  • Like
Reactions: EngineerNate

hmw

Active Member
Apr 29, 2019
605
242
43
Just scored a 6610-24P -I with two rev B power supplies, two fans, rack ears, and power cables included. Price was a bit above average but I figure for the full compliment of power supplies/fans, rack ears, and the PSUs being rev B it was worth it.

What QSFP+ pci-e cards is everyone using for the 40G connections? Any compatibility issues?
Preflashed ConnectX-3: HP 544QSFP MCX354A-FCBT 649281-B21 656089-001 VPI FDR 40GbE Mellanox OEMFirmware | eBay

DAC cable: Original NetApp 112-00178 X6559-R6 External SAS QSFP+ TO QSFP+ QSFP Cable 5m | eBay

The ConnectX-3 will work in ESXi 6.7 or 7 but will NOT do SR-IOV
 

hmw

Active Member
Apr 29, 2019
605
242
43
$40 is a lot more than $15 but might be worth it not to have 5m of cable coiled in my rack.
That link was for 5m - the 0.5m & 1m is like $18. fs.com shipping is ~ $12, so more like $30 in total

btw if you wanted 40 gbe with SR-IOV, here’s a single port ConnectX-4 LX on eBay: Mellanox MCX4131A-GCAT_C05 ConnectX-4 LX 50GbE PCIe PCI-E NIC Newest Firmware | eBay. It’s a lot more than $40 for the CX3 but the CX4 can offload more, supports RoCE and also supports multi host virtualization
 

EngineerNate

Member
Jun 3, 2017
68
16
8
35
Gotcha.

How important is SR-IOV? I've been running ConnectX-2 cards for the last few years for 10G, if that impacts anything.