
Brocade ICX Series (cheap & powerful 10gbE/40gbE switching)


tangofan

New Member
May 28, 2020
17
6
3
I've recently bought a used 7150-24P switch off eBay for my home setup and I'm excited to join the club as this is my first excursion into enterprise-grade switches. Since this is a used item, I want to make sure that everything works as it is supposed to.

This is what I've done/tested so far:
  • I've been able to perform the initial setup per @fohdeesha's guide, including an upgrade to the router firmware version 08080e listed there. (The PoE firmware was actually downloaded and installed automatically after upgrading the main firmware.)

  • For the license install I followed this post, since I have 4 SFP+ ports.

  • The 2 uplink ports, the 24 data ports, and the 4 SFP+ ports all work fine, with the SFP+ ports showing a 10G link.

  • All of the 24 data ports have working PoE.
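
For anyone wanting to sanity-check a used unit the same way, here's a minimal sketch of the FastIron show commands that cover firmware, hardware health, PoE and link state (exact output wording varies between releases):

Code:
show version              ! running firmware, boot code and uptime
show chassis              ! PSU, fan and temperature readings
show flash                ! images present on primary/secondary flash
show inline power         ! per-port PoE state and power draw
show interfaces brief     ! link state and speed for every port
show media                ! optics/DACs detected in the SFP+ cages
show logging              ! any hardware errors logged since boot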

Here are my questions:

  1. Is there anything else that I should do to validate correct functioning of the switch?

  2. I see that there are newer firmware versions available, with the latest being 08090d. Is 08080e still the recommended firmware version?

  3. As this is my first excursion into enterprise-grade gear, what is a good (and not too expensive) way to learn the features and commands of the switch's firmware more systematically? Right now I'm just googling commands, but that doesn't seem like the best way to get a firmer grasp on things.
Thanks in advance for any help and advice.
 

hmw

Active Member
Apr 29, 2019
570
226
43
I was using the ICX6610 40G QSFP+ stacking port with a ConnectX-3 via a NetApp 112-00177 cable - and it worked perfectly, with the card being able to connect at 40 GbE

I replaced the ConnectX-3 with a ConnectX-4 LX - specifically an MCX4131A-GCAT - and the card just won't enable the link when connected via the NetApp cable. I then tried connecting the card via a MAM1Q00A-QSA QSFP+ to SFP+ adapter to one of the front SFP+ ports - and it worked, although it connected at 10G (as expected)

Updated drivers under ESXi - no go.

The card itself obviously works - I was able to get it to do SR-IOV under ESXi, with 3 VMs each getting a virtual function of the card, and that all works fine.


Anyone used the ICX6610 with a ConnectX-4 LX? Did it work? What cables have folks used?

There's a 'Mellanox compatible' 40G QSFP+ cable from FS - Mellanox MC2210130-002 40G QSFP+ DAC Cable - but Mellanox's own MCX4131 page recommends a QSFP28 cable for the MCX4131A - surely that would NOT work with the ICX6610 stacking ports?

Any advice would be welcome - dipping my toes in the 40GbE world and it's not always smooth sailing :)
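
On the switch side of a link like this, a few FastIron commands will show whether the ICX6610 even recognises the DAC and what state the rear port is in, which helps narrow things down to the cable vs. the NIC. This is only a sketch - 1/2/1 below is a placeholder for whichever rear 40G port is in use:

Code:
show stack                        ! confirm the rear ports aren't reserved for stacking
show interfaces brief | include 1/2/
show media ethernet 1/2/1         ! what the switch detects in that QSFP+ cage
show interfaces ethernet 1/2/1    ! link state, negotiated speed and error counters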
 
Last edited:

PGlover

Active Member
Nov 8, 2014
499
64
28
57
Hello All,

I have been reading the STH article on "Building a Lab", which has generated some ideas on how to improve my production environment at home.


As it relates to the networking portion of the article, it talks about having separate switches for Layer 3 (core/distribution) and Layer 2 (access). Currently I have a collapsed core/distribution/access setup using the Brocade ICX-6610 in a stack configuration.

If I followed the recommendation in the article and used the Brocade ICX-6610 in a stack configuration for my Layer 2 switches, what would be good core/distribution switches (in a stack configuration) to use for my core/distribution layer?

FYI.. I do need 10G at Layer 2 for connectivity to my physical VMware and storage servers.
 

kapone

Well-Known Member
May 23, 2015
1,095
642
113
Separate switches for core/access are mostly related to scale. If you need 1000 access switch ports, you don't want to scale your core switch(es) to that level, as it'll be more expensive. In a situation like that, access switches make sense.

The kind of stuff we do...collapsing everything into core switch(es) is completely practical. We aint runnin AWS from our garage, are we? :)
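
In practice, "collapsed core" on an ICX just means the VLANs and their gateways live on the same box. A rough sketch of what that looks like in FastIron config terms (VLAN numbers, names and addresses are made up for illustration):

Code:
vlan 10 name servers by port
 tagged ethernet 1/1/1 to 1/1/8
 router-interface ve 10
!
vlan 20 name clients by port
 untagged ethernet 1/1/9 to 1/1/24
 router-interface ve 20
!
interface ve 10
 ip address 10.0.10.1 255.255.255.0
!
interface ve 20
 ip address 10.0.20.1 255.255.255.0

Inter-VLAN traffic gets routed in hardware on the same switch, which is the whole appeal of collapsing core and access at this scale.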
 

PGlover

Active Member
Nov 8, 2014
499
64
28
57
Separate switches for core/access are mostly related to scale. If you need 1000 access switch ports, you don't want to scale your core switch(es) to that level, as it'll be more expensive. In a situation like that, access switches make sense.

The kind of stuff we do...collapsing everything into core switch(es) is completely practical. We aint runnin AWS from our garage, are we? :)
Ok.. So using the Brocade ICX-6610 in a stack configuration as a core/distro/access switch is fine in a home and small business setup. That is the setup I'm currently running today.

Just wanted to conform to best practice based on the article.
 

kapone

Well-Known Member
May 23, 2015
1,095
642
113
Ok.. So using the Brocade ICX-6610 in a stack configuration as a core/distro/access switch is fine in a home and small business setup. That is the setup I'm currently running today.

Just wanted to conform to best practice based on the article.
I run a small-ish business from home as well, and I don't even use a stack, just a single ICX-6610. I have a cold standby in case the switch dies, but I don't need the extra ports, so why burn electrons? :)

My business can certainly withstand the downtime it'd take to replace the switch with the cold standby (the configuration is backed up automatically, so it is a simple matter of de-racking/racking and connecting cables). Switches rarely die.

If your business absolutely must have the uptime, then yes, a stack is certainly a must.
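
The "backed up automatically" part can be as simple as something on the network periodically pulling the running config over TFTP; the underlying FastIron commands are one-liners (the IP and filename here are placeholders):

Code:
write memory
copy running-config tftp 192.168.1.50 icx6610-backup.cfg

Doing a write memory after config changes keeps the startup config in sync, so whatever gets restored onto the cold standby matches what was actually running.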
 

PGlover

Active Member
Nov 8, 2014
499
64
28
57
I run a small-ish business from home as well, and I don't even use a stack, just a single ICX-6610. I have a cold standby in case the switch dies, but I don't need the extra ports, so why burn electrons? :)

My business can certainly withstand the downtime it'd take to replace the switch with the cold standby (the configuration is backed up automatically, so it is a simple matter of de-racking/racking and connecting cables). Switches rarely die.

If your business absolutely must have the uptime, then yes, a stack is certainly a must.
Ok.. So it looks like I don't need to have separate switches for Layer 2 and Layer 3. The ICX-6610 would be fine in a collapsed core/distro/access setup.

What are you using in your compute and storage stack? Are you using a firewall like pfSense as well?
 

kapone

Well-Known Member
May 23, 2015
1,095
642
113
Ok.. So it looks like I don't need to have separate switches for Layer 2 and Layer 3. The ICX-6610 would be fine in a collapsed core/distro/access setup.

What are you using in your compute and storage stack? Are you using a firewall like pfSense as well?
pfSense = yup.

Funny, you bring it up...:) I'm actually in the middle of reconfiguring my network/stack as we speak. Mostly related to consolidation and segregation. And I wasn't running a physical DC up until now, which creates some interesting DNS issues in a virtualized environment... Anyway, my new "stack" will be as follows:

- A physical domain controller
- ICX-6610 as the core/everything switch (with specific DNS things tied to the DC, in terms of using hostnames in ACLs and what not)
- A pair of bare metal Windows servers with Starwind in an HA config as the SAN (everything below this runs off the SAN, no local storage at all)
- A lightweight virtual server to host the core things (pfSense, nginx, second DC, WSUS, vCenter, etc.)
- As many compute servers as I need. Currently at 7, but I'll be consolidating them into 3-ish in the next few weeks.

I'm still vacillating between keeping pfSense virtual or moving it to bare metal. I have no performance issues with it being virtual per se.
 

Bjorn Smith

Well-Known Member
Sep 3, 2019
876
481
63
49
r00t.dk
Anyone here who can give me "real" power usage numbers on the ICX 6610? Hopefully the 80W quoted on the front page is max load with all ports going full tilt?

Regards
 

Bjorn Smith

Well-Known Member
Sep 3, 2019
876
481
63
49
r00t.dk
So if I want to exchange my ICX-6450-24 for something that has:

8-12 QSFP+/SFP28 ports (40Gbps min)
12+ RJ45 1 or 10Gbps ports

Is there an ICX or something else that might fit which does not use 80W idling?

Right now I have my ICX6450 + Mellanox SX6018, and combined they use around 60W idling/low usage, which is acceptable, but I would love to get one switch for it all, just to get rid of some cables for uplinks between the switches.
 

PGlover

Active Member
Nov 8, 2014
499
64
28
57
pfSense = yup.

Funny, you bring it up...:) I'm actually in the middle of reconfiguring my network/stack as we speak. Mostly related to consolidation and segregation. And I wasn't running a physical DC up until now, which creates some interesting DNS issues in a virtualized environment... Anyway, my new "stack" will be as follows:

- A physical domain controller
- ICX-6610 as the core/everything switch (with specific DNS things tied to the DC, in terms of using hostnames in ACLs and what not)
- A pair of bare metal Windows servers with Starwind in an HA config as the SAN (everything below this runs off the SAN, no local storage at all)
- A lightweight virtual server to host the core things (pfSense, nginx, second DC, WSUS, vCenter, etc.)
- As many compute servers as I need. Currently at 7, but I'll be consolidating them into 3-ish in the next few weeks.

I'm still vacillating between keeping pfSense virtual or moving it to bare metal. I have no performance issues with it being virtual per se.
Are you using the free version of Starwind? I used Starwind 7 or 8 years ago..
 

carcass

New Member
Aug 14, 2017
14
2
3
41
I've just received a 7150 from eBay, and it looks like it won't get past the bootloader:

Code:
Ruckus Wireless Bootloader: 10.1.11T225 (Dec 13 2017 - 03:13:30 -0800)

Booted from partition 1
DRAM:  Validate Shmoo parameters stored in flash ..... OK
Any idea what is wrong and if anything can be done here?

Update. Sometimes I'm getting this during boot:

Code:
Ruckus Wireless Bootloader: 10.1.11T225 (Dec 13 2017 - 03:13:30 -0800)

Booted from partition 1
DRAM:  Validate Shmoo parameters stored in flash ..... OK
initcall sequence 9f4944f0 failed at call f000eca8 (err=1)
### ERROR ### Please RESET the board ###

Ruckus Wireless Bootloader: 10.1.11T225 (Dec 13 2017 - 03:13:30 -0800)

Booted from partition 1
DRAM:  Validate Shmoo parameters stored in flash ..... OK
ERROR : memory not allocated

Ruckus Wireless Bootloader: 10.1.11T225 (Dec 13 2017 - 03:13:30 -0800)

Booted from partition 1
DRAM:  Validate Shmoo parameters stored in flash ..... OK
raise: Signal # 8 caught
 
Last edited:

fohdeesha

Kaini Industries
Nov 20, 2016
2,727
3,075
113
33
fohdeesha.com
I've just received a 7150 from eBay, and it looks like it won't get past the bootloader:

Code:
Ruckus Wireless Bootloader: 10.1.11T225 (Dec 13 2017 - 03:13:30 -0800)

Booted from partition 1
DRAM:  Validate Shmoo parameters stored in flash ..... OK
Any idea what is wrong and if anything can be done here?

Update. Sometimes I'm getting this during boot:

Code:
Ruckus Wireless Bootloader: 10.1.11T225 (Dec 13 2017 - 03:13:30 -0800)

Booted from partition 1
DRAM:  Validate Shmoo parameters stored in flash ..... OK
initcall sequence 9f4944f0 failed at call f000eca8 (err=1)
### ERROR ### Please RESET the board ###

Ruckus Wireless Bootloader: 10.1.11T225 (Dec 13 2017 - 03:13:30 -0800)

Booted from partition 1
DRAM:  Validate Shmoo parameters stored in flash ..... OK
ERROR : memory not allocated

Ruckus Wireless Bootloader: 10.1.11T225 (Dec 13 2017 - 03:13:30 -0800)

Booted from partition 1
DRAM:  Validate Shmoo parameters stored in flash ..... OK
raise: Signal # 8 caught

Looks like bad RAM to me. Does it ever at least drop you to a bootloader command line?