Brocade ICX Series (cheap & powerful 10gbE/40gbE switching)

am45931472

Member
Feb 26, 2019
75
17
8
That's a great question. I've been wondering that for a while, but at that point why not just use the 4x 10Gb module instead? Maybe the single 40Gb is cheaper.
 

EngineerNate

Member
Jun 3, 2017
61
10
8
31
That's a great question. I've been wondering that for a while, but at that point why not just use the 4x 10Gb module instead? Maybe the single 40Gb is cheaper.
I was asking because a lot of used ones come with 2x of the 40G modules in the back. Breakout cables are cheaper than swapping the modules if they're already there and capable.
 

EngineerNate

Member
Jun 3, 2017
61
10
8
31
So Ruckus lists two SKUs that do breakout on the 40G ports of the 7450, so signs point to "it's possible". I don't know yet whether they're unobtainium, whether they're $$$, or whether it only works with that specific optic and not a more generic breakout DAC cable.
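
For what it's worth, on the 7x50 series the breakout itself is configured in FastIron rather than being tied to a particular optic. A rough sketch of what I believe the procedure looks like (port numbering here is just an example; verify against your own `show breakout` output and the FastIron management guide for your firmware):

```
ICX7450-48P Router# configure terminal
ICX7450-48P Router(config)# breakout ethernet 1/2/1 lanes-4
ICX7450-48P Router(config)# write memory
ICX7450-48P Router(config)# exit
ICX7450-48P Router# reload
! after the reload, the 40G port reappears as four
! sub-ports: 1/2/1:1 through 1/2/1:4
```

Whether a generic breakout DAC is accepted on top of that is exactly the open question above.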

 

pr09

New Member
Jun 21, 2020
6
0
1
Do any of the ICX series support VEPA/hairpin switching/802.1Qbg? I'm guessing "no" given that there's no mention of it in the manual, but maybe there's a hidden option? It's a simple enough change that I'd hope newer firmware could support it.
 

tangofan

New Member
May 28, 2020
8
3
3
I recently bought a used 7150-24P switch off eBay for my home setup, and I'm excited to join the club, as this is my first excursion into enterprise-grade switches. Since this is a used item, I want to make sure that everything works as it's supposed to.

This is what I've done/tested so far:
  • I've been able to perform the initial setup per @fohdeesha 's guide, including an upgrade to the router firmware version 08080e listed there. (The PoE firmware was actually downloaded and installed automatically after upgrading the main firmware.)

  • For the license install I followed this post, since I have 4 SFP+ ports.

  • The 2 uplink ports, the 24 data ports, and the 4 SFP+ ports all work fine, with the SFP+ ports showing a 10G link.

  • All of the 24 data ports have working PoE.

Here are my questions:

  1. Is there anything else that I should do to validate correct functioning of the switch?

  2. I see that there are newer firmware versions available, with the latest being 08090d. Is 08080e still the recommended firmware version?

  3. As this is my first excursion into enterprise-grade gear, what is a good (and not too expensive) way to learn the features and the commands of the switch's firmware a bit more systematically? Right now I'm just googling commands, but it seems that this isn't the best way to get a bit of a firmer grasp on things.
Thanks in advance for any help and advice.
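
Not an authoritative checklist, but a few standard FastIron show commands make a decent sanity pass on a used unit (exact output varies by firmware version):

```
show version          ! firmware/boot code versions and uptime
show chassis          ! fan, temperature, and power supply status
show flash            ! confirm primary and secondary images
show media            ! optics detected in the SFP+ ports
show inline power     ! per-port PoE delivery and remaining budget
show interfaces brief ! link state and speed on every port
show log              ! anything ugly left in the event log
```

If all of those come back clean under load, the hardware is very likely fine.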
 

hmw

Active Member
Apr 29, 2019
235
77
28
I was using the ICX6610 40G QSFP+ stacking port with a ConnectX-3 via a NetApp 112-00177 cable - and it worked perfectly, with the card connecting at 40GbE.

I replaced the ConnectX-3 with a ConnectX-4 Lx - specifically an MCX4131A-GCAT - and the card just won't bring up the link when connected via the NetApp cable. I then tried connecting the card via a MAM1Q00A-QSA QSFP+ to SFP+ adapter to one of the front SFP+ ports - and it worked, although it connected at 10G (as expected).

Updated drivers under ESXi - no go.

The card obviously works - I was able to get it to do SR-IOV under ESXi, with 3 VMs each having a virtual copy of the card, and it all works fine.


Anyone used the ICX6610 with a ConnectX-4 Lx? Did it work? What cables have folks used?

There's a 'Mellanox compatible' 40G QSFP+ cable from FS - the Mellanox MC2210130-002 40G QSFP+ DAC - but Mellanox's own MCX4131 page recommends a QSFP28 cable for the MCX4131A. Surely that would NOT work with the ICX6610 stacking ports?

Any advice would be welcome - dipping my toes into the 40GbE world, and it's not always smooth sailing :)
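
One thing that might be worth ruling out before blaming the cable: the ConnectX-4 Lx defaults to autonegotiation, and fixed-speed switch ports sometimes won't link until the NIC is forced to 40G. Purely a sketch, from a Linux host rather than ESXi (the interface name `ens1` and the mst device path are examples, not known-good values for this card):

```
# see what the NIC currently reports/advertises
ethtool ens1
# force 40G and disable autonegotiation
ethtool -s ens1 speed 40000 duplex full autoneg off
# low-level link diagnostics from the Mellanox side (requires MFT)
mlxlink -d /dev/mst/mt4117_pciconf0
```

Under ESXi the equivalent knobs live in the nmlx5 driver module parameters; I haven't verified this combination on an ICX6610 myself.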
 

PGlover

Active Member
Nov 8, 2014
470
55
28
54
Hello All,

I have been reading the STH article on "Building a Lab" which has generated some ideas on how to improve my Production environment at home.


As it relates to the networking portion of the article, it talks about having separate switches for Layer 3 (core/distribution) and Layer 2 (access). Currently I have a collapsed core/distribution/access using the Brocade ICX-6610 in a stack configuration.

If I followed the recommendation in the article and used the Brocade ICX-6610 stack for my Layer 2 switches, what would be a good switch (in a stack configuration) to use for my core/distribution layer?

FYI, I do need 10G at Layer 2 for connectivity to my physical VMware and storage servers.
 

kapone

Well-Known Member
May 23, 2015
796
388
63
Separate switches for core/access are mostly related to scale. If you need 1000 access switch ports, you don't want to scale your core switch(es) to that level, as it'll be more expensive. In a situation like that, access switches make sense.

For the kind of stuff we do, collapsing everything into core switch(es) is completely practical. We ain't runnin' AWS from our garage, are we? :)
 

PGlover

Active Member
Nov 8, 2014
470
55
28
54
Separate switches for core/access are mostly related to scale. If you need 1000 access switch ports, you don't want to scale your core switch(es) to that level, as it'll be more expensive. In a situation like that, access switches make sense.

For the kind of stuff we do, collapsing everything into core switch(es) is completely practical. We ain't runnin' AWS from our garage, are we? :)
OK, so using the Brocade ICX-6610 in a stack configuration as a core/distro/access switch is fine in a home and small-business setup. That is the setup I'm currently running today.

Just wanted to confirm best practice based on the article.
 

kapone

Well-Known Member
May 23, 2015
796
388
63
OK, so using the Brocade ICX-6610 in a stack configuration as a core/distro/access switch is fine in a home and small-business setup. That is the setup I'm currently running today.

Just wanted to confirm best practice based on the article.
I run a small-ish business from home as well, and I don't even use a stack, just a single ICX-6610. I have a cold standby in case the switch dies, but I don't need the extra ports, so why burn electrons? :)

My business can certainly withstand the downtime it'd take to replace the switch with the cold standby (the configuration is backed up automatically, so it is a simple matter of de-racking/racking and connecting cables). Switches rarely die.

If your business absolutely must have the uptime, then yes, a stack is certainly a must.
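
On the "configuration is backed up automatically" point: FastIron can push the running config to a TFTP server on demand, which is easy to wrap in a scheduled job driven from a host over SSH. A minimal sketch (the server IP and filename are placeholders):

```
! from the switch CLI: push the running config to a TFTP server
ICX6610-24 Router# copy running-config tftp 192.168.1.10 icx6610-backup.cfg
```

With that file on hand, restoring onto a cold standby is just the reverse copy plus a reload.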
 

PGlover

Active Member
Nov 8, 2014
470
55
28
54
I run a small-ish business from home as well, and I don't even use a stack, just a single ICX-6610. I have a cold standby in case the switch dies, but I don't need the extra ports, so why burn electrons? :)

My business can certainly withstand the downtime it'd take to replace the switch with the cold standby (the configuration is backed up automatically, so it is a simple matter of de-racking/racking and connecting cables). Switches rarely die.

If your business absolutely must have the uptime, then yes, a stack is certainly a must.
Ok.. So it looks like I don't need to have separate switches for Layer 2 and Layer 3. The ICX-6610 would be fine in a collapsed core/distro/access setup.

What are you using in your compute and storage stack? Are you using a firewall like pfSense as well?
 

kapone

Well-Known Member
May 23, 2015
796
388
63
Ok.. So it looks like I don't need to have separate switches for Layer 2 and Layer 3. The ICX-6610 would be fine in a collapsed core/distro/access setup.

What are you using in your compute and storage stack? Are you using a firewall like pfSense as well?
pfSense = yup.

Funny, you bring it up...:) I'm actually in the middle of reconfiguring my network/stack as we speak. Mostly related to consolidation and segregation. And I wasn't running a physical DC up until now, which creates some interesting DNS issues in a virtualized environment... Anyway, my new "stack" will be as follows:

- A physical domain controller
- ICX-6610 as the core/everything switch (with specific DNS things tied to the DC, in terms of using hostnames in ACLs and what not)
- A pair of bare metal Windows servers with Starwind in an HA config as the SAN (everything below this runs off of the SAN, no local storage at all)
- A lightweight virtual server to host the core things (pfsense, nginx, second DC, WSUS, vCenter etc etc)
- As many compute servers as I need. Currently at 7, but will be consolidating them into 3-ish in the next few weeks.

I'm still vacillating between keeping pfSense virtual and moving it to bare metal. I have no performance issues with it being virtual per se.
 

Bjorn Smith

Active Member
Sep 3, 2019
264
107
43
Anyone here who can give me "real" power usage figures for the ICX 6610? Hopefully the 80W quoted on the front page is max load with all ports going full tilt?

Regards
 

Bjorn Smith

Active Member
Sep 3, 2019
264
107
43
So if I want to exchange my ICX-6450-24 for something that has:

8-12 QSFP+/SFP28 ports (40Gbps min)
12+ RJ45 1 or 10Gbps ports

Is there an ICX, or something else that might fit, which does not use 80W idling?

Right now I have my ICX6450 + Mellanox SX6018, and combined they use around 60W at idle/low usage, which is acceptable, but I would love to get one switch for it all, just to get rid of some uplink cables between the switches.
 

PGlover

Active Member
Nov 8, 2014
470
55
28
54
pfSense = yup.

Funny, you bring it up...:) I'm actually in the middle of reconfiguring my network/stack as we speak. Mostly related to consolidation and segregation. And I wasn't running a physical DC up until now, which creates some interesting DNS issues in a virtualized environment... Anyway, my new "stack" will be as follows:

- A physical domain controller
- ICX-6610 as the core/everything switch (with specific DNS things tied to the DC, in terms of using hostnames in ACLs and what not)
- A pair of bare metal Windows servers with Starwind in an HA config as the SAN (everything below this runs off of the SAN, no local storage at all)
- A lightweight virtual server to host the core things (pfsense, nginx, second DC, WSUS, vCenter etc etc)
- As many compute servers as I need. Currently at 7, but will be consolidating them into 3-ish in the next few weeks.

I'm still vacillating between keeping pfSense virtual and moving it to bare metal. I have no performance issues with it being virtual per se.
Are you using the free version of Starwind? I used Starwind 7 or 8 years ago.