Brocade ICX Series (cheap & powerful 10gbE/40gbE switching)

hmw

Active Member
Apr 29, 2019
199
61
28
$40 is a lot more than $15 but might be worth it not to have 5m of cable coiled in my rack.
That link was for 5m - the 0.5m & 1m versions are like $18. fs.com shipping is ~$12, so more like $30 in total

btw if you wanted 40GbE with SR-IOV, here's a single-port ConnectX-4 LX on eBay: Mellanox MCX4131A-GCAT_C05 ConnectX-4 LX 50GbE PCIe PCI-E NIC Newest Firmware | eBay. It's a lot more than the $40 CX3, but the CX4 can offload more, supports RoCE, and also supports multi-host virtualization
 

EngineerNate

Member
Jun 3, 2017
61
10
8
31
Gotcha.

How important is SR-IOV? I've been running ConnectX-2 cards for the last few years for 10G, if that impacts anything.
 

hmw

Active Member
Apr 29, 2019
199
61
28
Gotcha.

How important is SR-IOV? I've been running ConnectX-2 cards for the last few years for 10G, if that impacts anything.
If you have a hypervisor, it's great. For instance, in ESXi you should see lower CPU utilization with 10GbE or 40GbE network cards, because the NIC is actually doing the offloads in hardware. Without SR-IOV, the "offload" is handled by the VMXNET3 driver - so you're still spending host CPU cycles in the end.

If you're not using 10GbE or 40GbE, it doesn't make that much difference unless your guests specifically need native drivers.

Just keep in mind that with ESXi, using SR-IOV means you cannot take snapshots or use vMotion. And a lot of the time you have to lock or reserve guest memory.

But as @klui pointed out - the CX-3 cards work fine in many other OSes and are 4x cheaper than CX-4 cards. And they work perfectly with the QSFP ports on the ICX6610. So YMMV :)
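
If you want to check whether an ESXi host actually sees a card as SR-IOV capable, a couple of stock esxcli commands are enough. A sketch only - the vmnic name below is a placeholder for whatever your own host assigns:

```
# list all physical NICs and the drivers bound to them
esxcli network nic list

# list NICs that currently have SR-IOV enabled, and their virtual functions
esxcli network sriovnic list
esxcli network sriovnic vf list -n vmnic4
```

The VF count itself is normally set via a driver module parameter or the vSphere UI, and as noted above the guest will usually need its full memory reservation locked before it can use a VF.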
 
Last edited:

vangoose

Active Member
May 21, 2019
263
69
28
Canada
Are there any concerns I should be aware of if I move it out to my garage which at the top end is about ~29C,90F and somewhat humid?

Mine is running at over 90 C.
The non-PoE version only has one fan; the PoE version has two fans and runs much cooler.
 

safrax

New Member
Jun 21, 2020
5
0
1
Not sure if this is of interest to anyone else but I managed to upgrade from 8.0.80e to 8.0.90d on my ICX-7250-24 tonight. Here's what I did:

I registered on Ruckus' website and downloaded the latest zip for the ICX-7250. After that I extracted the zip and TFTP'd over the spz10115.bin bootloader. I then TFTP'd over the SPR08090d.bin firmware and reloaded the switch. It took a little longer than usual to come up after the reload. Per the manual I then TFTP'd over the SPR08090dufi.bin firmware. This took substantially longer, ~7 minutes to come back after a reload. From what I can gather, everything upgraded successfully and I'm not missing functionality, but I'm by no means an expert with networking gear.
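
In ICX CLI terms, the sequence above roughly corresponds to the following. This is a sketch - the TFTP server IP is a placeholder, and the filenames should match whatever is in the zip you downloaded:

```
! stage the new bootloader, then the new firmware, from your TFTP server
ICX7250# copy tftp flash 192.168.1.10 spz10115.bin bootrom
ICX7250# copy tftp flash 192.168.1.10 SPR08090d.bin primary
ICX7250# reload

! after the switch is back up, load the UFI image the same way
ICX7250# copy tftp flash 192.168.1.10 SPR08090dufi.bin primary
ICX7250# reload

! verify what you ended up with
ICX7250# show version
ICX7250# show flash
```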
 

vangoose

Active Member
May 21, 2019
263
69
28
Canada
Not sure if this is of interest to anyone else but I managed to upgrade from 8.0.80e to 8.0.90d on my ICX-7250-24 tonight. Here's what I did:

I registered on Ruckus' website and downloaded the latest zip for the ICX-7250. After that I extracted the zip and TFTP'd over the spz10115.bin bootloader. I then TFTP'd over the SPR08090d.bin firmware and reloaded the switch. It took a little longer than usual to come up after the reload. Per the manual I then TFTP'd over the SPR08090dufi.bin firmware. This took substantially longer, ~7 minutes to come back after a reload. From what I can gather, everything upgraded successfully and I'm not missing functionality, but I'm by no means an expert with networking gear.
Better to stay on the 8.0.80 train; it's more stable.
 

HeatsinkedTurtle

New Member
Jun 30, 2020
1
0
1
Hello everyone and thanks to all of you for this great thread.
After finding this thread I decided to order an ICX6610 24-port (non-PoE) and managed to get one with a Rev. C PSU, which was great.

However I am experiencing issues with connecting to my device's serial port.
In summary, there is no USB -> RJ45 serial adapter available near me, but I managed to get a USB -> Serial DB9 RS232 (male) adapter and a pin-it-yourself adapter from Serial (female) to RJ45 (female), and I have not been able to find a working pin setup.
The USB to Serial adapter looks identical to this but with a different brand.
The Serial (female) to RJ45 (female) adapter looks like this but once again is of a different brand.
The only cables I have at home are Cat6 568b straight through.

I have tried multiple pinouts without success.

For software I have been using PuTTY, just setting the correct serial interface and attempting to connect.

So the question is now if there is any way to make all of this work.

If so, how do I go about making this work?


Edit: And now the pin removal broke, so I have given up on this project and ordered a USB to RJ45 serial Cisco cable online instead.
It appears like they got back in stock over the weekend.
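
For anyone attempting a similar DIY cable: these ICX console ports use the common Cisco-style RJ45 serial pinout (to the best of my knowledge - double-check against the ICX hardware installation guide), and the port runs at 9600 8N1 with no flow control. A standard rollover RJ45-to-DB9 console cable maps roughly as:

```
RJ45 pin  Signal      DB9 pin  Signal
1         RTS    ->   8        CTS
2         DTR    ->   6        DSR
3         TxD    ->   2        RxD
4         GND    ->   5        GND
5         GND    ->   5        GND
6         RxD    ->   3        TxD
7         DSR    ->   4        DTR
8         CTS    ->   7        RTS
```

Note that a 568B straight-through Cat6 cable will never work here - the console cable is a rollover, not Ethernet. In PuTTY, set Connection type to Serial, Speed 9600, 8 data bits, no parity, 1 stop bit, flow control None.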
 
Last edited:

EngineerNate

Member
Jun 3, 2017
61
10
8
31
I just noticed something on the switch I bought. The PSU is labeled at 250W (RPS15-I). Did I get suckered into bidding on a PoE switch sold with the non-PoE PSUs?
 

EngineerNate

Member
Jun 3, 2017
61
10
8
31
Yes, I ordered the reverse-flow on purpose, but I didn't realize there were PoE and non-PoE versions of the PSU. I've requested a cancellation.
 

WANg

Well-Known Member
Jun 10, 2018
848
480
63
@WANg had a similar issue while on ESXi 6.0u3: https://forums.servethehome.com/ind...orking-correctly-in-esxi-6-0u3-patch-22.28492

But he wanted to upgrade to the latest OFED driver bundle. I wonder if the older driver in 6.5 could be used in 6.7/7.0? I'm still on 6.5.
6.7 yes, 7.0 no. 7.0 does not have VMKLinux support at all, while on 6.7 it's not enabled unless you specifically disable the native driver and enable the VMKLinux driver. It's not like the card doesn't work on the native driver - it just doesn't give you SR-IOV.
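
For reference, switching a CX-3 from the native driver to the VMKLinux driver on 6.7 is usually a couple of esxcli module commands. A sketch only, assuming the Mellanox OFED bundle is installed - module names and the VF count vary by driver version:

```
# disable the native nmlx4 modules so the VMKLinux mlx4 driver can claim the card
esxcli system module set --enabled=false --module=nmlx4_core
esxcli system module set --enabled=false --module=nmlx4_en

# enable SR-IOV virtual functions on the legacy driver (4 VFs as an example)
esxcli system module parameters set -m mlx4_core -p "num_vfs=4"

# reboot the host for the driver change to take effect
reboot
```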
 
  • Like
Reactions: hmw

kapone

Well-Known Member
May 23, 2015
784
383
63
Btw, if anybody's curious...

The 6610 works fine with RDMA/iSER, without any DCB/PFC support, even up to 40Gbps. I took this long weekend as an opportunity to do some testing (when I could... and the kids let me).

Two StarWind vSAN hosts (bare-metal Windows boxes) in HA with a dual-port 40Gb CX-3 Pro card in each. One test ESXi 6.7 box with the same CX-3 Pro 40Gb NIC.

iSER worked out of the box (RoCE v2 is the default on the CX-3 Pro cards), the datastores were detected right away, and with StarWind configured with an L1 RAM cache (flat disks, not LSFS), performance was equal to or BETTER than native disk performance (in some cases). My testing first used RAM drives on both ends, just to benchmark and see if the switch would pose an issue; it did not. Then I tested real-world disk scenarios with RAID 0/10/6/60 on the StarWind nodes (with RAM cache).

In all cases I did not see any loss in performance versus native.

Note: StarWind HA works great btw. I stress tested it by starting a Windows install on a test VM on the ESXi datastore (over iSCSI) and pulling the plug on one StarWind node while the install was running. No hiccups.

p.s. I have no affiliation with StarWind as such. I just like their implementation.
 

klui

Active Member
Feb 3, 2019
161
67
28
The 6610 works fine with RDMA/iSER, without any DCB/PFC support, even up to 40gbps.
Could you share port and initiator/target configurations for RDMA?

I thought RoCE requires global pause frames or ECN/PFC. I've been told Mellanox ZTR (zero-touch-RoCE) is supported on CX4 and newer, not CX3-Pro.
 

kapone

Well-Known Member
May 23, 2015
784
383
63
Could you share port and initiator/target configurations for RDMA?

I thought RoCE requires global pause frames or ECN/PFC. I've been told Mellanox ZTR (zero-touch-RoCE) is supported on CX4 and newer, not CX3-Pro.
I dismantled the test setup after my testing, but there wasn't anything "special".

- StarWind configured with standard iSCSI (port 3260), iSER enabled on the interfaces.
- ESXi configured for iSER - which involves a few steps, but nothing exotic.
- No configuration on the 6610 at all, other than adding all of the above machines to a VLAN.
- Dynamic targets added via the ESXi UI.
- Test.
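
On the ESXi side, the "few steps" for iSER on 6.7 are roughly these esxcli commands. A sketch only - the vmhba number, vmk interface, and target IP are placeholders for your own setup:

```
# create an iSER software adapter on the RDMA-capable NIC
esxcli rdma iser add

# find the new vmhba it created
esxcli iscsi adapter list

# bind the VMkernel port that lives on the storage VLAN
esxcli iscsi networkportal add -A vmhba64 -n vmk1

# add the StarWind node as a dynamic (send targets) discovery address
esxcli iscsi adapter discovery sendtarget add -A vmhba64 -a 192.168.10.10:3260

# rescan to pick up the datastores
esxcli storage core adapter rescan -A vmhba64
```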