ConnectX-3 w/ QSFP to 4x SFP+ b/o Cables as 4 Router IPs to Streamline 10Gb Network?

Dec 4, 2024
This is my first post, but I've stumbled upon some absolute gems of info here and thought I'd give posting a nuanced question a try. I'm still getting a feel for the etiquette here, so apologies in advance, and corrections are welcome.

Since I have some nuances to my home lab network (don't we all?), I'm going to do a TL;DR here and then get into the gory details of my specific circumstances below.

TL;DR version:

Will a QSFP to 4x SFP+ breakout cable connected to the single QSFP port of a ConnectX-3 allow a networking configuration that correctly assigns traffic to 2 or more of the physically connected ports, even if they DO have different CIDRs and "spoofed" MACs from the pfSense WAN configuration so that they are unique from the rest of the segments in the network topology? (For example, running 2 or more instances of pfSense, with one leg of the 40Gb b/o cable used as the WAN of each individual software router.)



I will try to give enough detail and backstory for my stated goals, below:


I keep returning to this post and comment in my research for how this might be possible. I find it an interesting idea for how to potentially make QSFP to 4x SFP+ breakout cables "act" as 4 different connections.

I had a few thoughts on possibilities for getting this to work, too, but the folks in the STH forum seem far more knowledgeable about Linux networking than I am. I am not committed to the one or two possibilities for "getting it to work" that I have in mind; I just want to try to streamline and reduce *some* of the complexity that has accumulated over the last few years of learning more about "commercial grade" hardware and networking.

In the linked comment and post above, from a few years ago, I was interpreting what was being suggested as "4 virtualized NICs for each port" - similar to the concepts from this 2020 StackExchange question here: How is it possible to get multiple virtual network interfaces with only one physical network interface?

I envisioned the configuration for getting something like this to work on the "hosting network machine" (which may or may not be virtualized) looking something like this (from Creating multiple virtual adapters over a single physical NIC):


```
# /etc/network/interfaces
# Alias interfaces stacked on the single physical NIC eth0,
# one per subnet (addresses left as placeholders)

auto eth0:1
iface eth0:1 inet static
    address xxx.xxx.xxx.xxx
    netmask xxx.xxx.xxx.0
    network xxx.xxx.xxx.0
    broadcast xxx.xxx.xxx.xxx

auto eth0:2
iface eth0:2 inet static
    address xxx.xxx.xxx.xxx
    netmask xxx.xxx.xxx.0
    network xxx.xxx.xxx.0
    broadcast xxx.xxx.xxx.xxx

auto eth0:3
iface eth0:3 inet static
    address xxx.xxx.xxx.xxx
    netmask xxx.xxx.xxx.0
    network xxx.xxx.xxx.0
    broadcast xxx.xxx.xxx.xxx
```
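
From what I gather, that eth0:N alias style is the legacy ifupdown way of doing it. If I'm understanding the modern equivalent correctly, the same idea with iproute2 (interface names and addresses here are just placeholders I made up) would look roughly like:

```
# Option A: stack extra addresses directly on the one physical NIC
ip addr add 10.1.0.1/16 dev eth0
ip addr add 10.2.0.1/16 dev eth0

# Option B: macvlan sub-interfaces, each with its own MAC address,
# which is closer to "multiple virtual NICs on one physical port"
ip link add link eth0 name macvlan1 type macvlan mode bridge
ip link add link eth0 name macvlan2 type macvlan mode bridge
ip addr add 10.3.0.1/16 dev macvlan1
ip addr add 10.4.0.1/16 dev macvlan2
ip link set macvlan1 up
ip link set macvlan2 up
```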


Is that "where it was going" with that comment/thread? Or am I way off base?


My 10Gb Network goals do not currently include 1+Gb Internet, since that's just NOT an option available for my location (but I wish it were!). So the 10Gb Network is more for faster backups, better RDP and VNC quality, faster/easier Migration of VMs.

So I will reiterate that my primary goal is to replace the ridiculous number of routers and switches I've been collecting over the last 5+ years to solve different problems, and streamline my entire network topology into 3 to 5 subnets with as few routers and switches as possible.

As long as each PC/Server that is 10Gb capable has at least one connection to a 10Gb switch (or perhaps directly connected 10Gb ports between PCs, where it might make sense), I will consider it "done". And, if we ever manage to get 2.5Gb+ Internet available in my area, I'll only need to change one connection.


At one point, before I started to get rid of some things, I counted up all the routers (mostly wireless, some wired-only) that I had collected over time and it was "over the count of 10". The count of switches that I had was actually even higher. To be fair, I did have more PCs back then, but still, the amount of "stuff" in my homelab was starting to look like the dorm rooms of Larry Page and Sergey Brin. I don't need all that anymore (did I ever?), since I've backed up and migrated a lot of the hardware-centric tasks to VMs.

I would like to continue this trend by moving a lot of the remnants of 1Gb networking to just one wireless router and one 1Gb switch, ditching all the crazy dual-WAN configurations, bonded, LACP, etc. connections and simplifying to fewer, faster and more efficient subnets.


I plan to use what I have dubbed the "Router Box" - just a moderately powered mATX PC - to replace / upgrade / simplify the network topology (or at least make it simpler than it is today, anyway). I am already using the "Router Box" with a Mellanox ConnectX-3 40Gb QSFP and the 4x 10Gb SFP+ breakout cable, but with only 1 of the 4 connectors. It works fine like that, even though I know these cables are designed for 40Gb Mellanox switches and not the ConnectX-3 adapters... but...

The above-mentioned comment and thread got me thinking about what the best bare metal configuration is for "The Router Box". Some thoughts and ideas I'm trying to get any one of you to talk me out of:
- Ubuntu/Debian + VMware Workstation
-- maybe Debian or Fedora with libvirt/virsh instead?
- Maybe just install Proxmox or Xen/XCP-ng with a few pfSense (or whatever software router) instances in VMs to provide (up to) 4x 10Gb subnets for the network?

I currently have 3 "cheap offbrand switches" that are 10Gb capable, with a total of 4x 10Gb SFP+ ports open (6 minus the 2 ports consumed by the connection between the 2 switches that are already connected to each other). I *COULD* break down and buy a somewhat pricey 8 (or more) port 10Gb SFP+ managed switch, instead of using 2 unmanaged and one managed, all 3 with 4x 2.5Gb (RJ45) + 2x 10Gb (SFP+) configurations. But then I would have to run additional CAT7 lines through the "home that is becoming a lab, instead", and that is the sort of thing I'm trying to avoid; hence having a single 10Gb line in/out of the rooms is most ideal.

I really only need at least 2 or 3 of the subnets to be 10Gb capable and the rest at 2.5Gb or 1Gb would be fine. But if I could sort out 3 or more, that would be amazing.

I have a 2x 25Gb SFP+ port Mellanox ConnectX-4 card for the "big server", which sits in the same room as "The Router Box", so I'm thinking maybe 1 or 2 direct connections from the 4x SFP+ b/o cable with 2x 10Gb subnets. If there is a way to make it work, I'm sure I would eventually attempt some kind of bond or LACP for 20Gb, but I would need to "feel like 10Gb is too slow" first.

I already have a 2nd dual-port 40Gb Mellanox ConnectX-3 with a QSFP to QSFP cable for a "NAS Box", too - I consolidated most of my storage into a single rig, for reasons mentioned above.
But I haven't yet experimented much with the 40Gb QSFP to 40Gb QSFP link because it's tedious to physically access the "server shed" (outside - to keep the noise down). I'm waiting to experiment and test whether I can get 4 connections from the QSFP to 4x SFP+ b/o cable before deciding whether buying a Mellanox 40Gb switch that is Ethernet-capable is the better route to take.


So... sorry for the length and all the details, but they felt relevant as I see a lot of "why don't you just..." or "why are you doing it that way" responses on other parts of the internet. Hopefully there is enough context there to understand there are multiple rooms involved, each with unique challenges.


I'm posting to ask the experts for some feedback and/or better ideas on how to achieve these goals.


I currently have 2 thoughts on how this could possibly work without buying more and more networking hardware (again...):

Thought #1:
Run Linux on Bare Metal for "The Router Box", using the onboard NIC for "Internet Access" and the 1 QSFP port with the 4x SFP+ breakout cable having individually assigned CIDRs; something like:
- 10.1.0.1/16
- 10.2.0.1/16
- 10.3.0.1/16
- 10.4.0.1/16

This guide does an OK job of capturing what I was thinking for the "Linux Router configuration": Configure Linux as a Router (IP Forwarding)
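
For my own notes, the core of that boils down to enabling forwarding and NATing everything out the onboard NIC. A minimal sketch of what I have in mind (interface names like eno1 / enp1s0f0 are placeholders I made up, and this assumes the breakout legs even show up as usable interfaces, which is the whole question):

```
# Enable IPv4 forwarding (persist it in /etc/sysctl.d/ for reboots)
sysctl -w net.ipv4.ip_forward=1

# Give each breakout leg its own gateway address (placeholder names/CIDRs)
ip addr add 10.1.0.1/16 dev enp1s0f0
ip addr add 10.2.0.1/16 dev enp1s0f1

# NAT the lab subnets out the onboard "Internet" NIC with nftables
nft add table ip nat
nft add chain ip nat postrouting '{ type nat hook postrouting priority 100 ; }'
nft add rule ip nat postrouting oifname "eno1" masquerade
```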

The upside here for "Thought 1" is that it leaves me with possibilities to natively configure stuff like DNS, NFS, a collection of Docker containers, adding multiple different hypervisors like libvirt/virsh/virt-manager, VirtualBox, VMware Workstation, etc., until I run out of CPU or RAM resources, which I can easily "partition" or slice between hypervisors and the guest VMs.

--------

Even though there are benefits to having "natively installed software services" on Linux, managing firewalls with bare metal Linux networking kind of sucks and it's a lot to "keep straight" in memory, especially after a while. I haven't found a great tool for keeping track of network rules with an easy interface for making tweaks and changes, like those baked into pfSense or OpenWRT/DD-WRT, etc.


So I am leaning more towards "Thought #2":
Run a hypervisor on bare metal (most likely Proxmox) and configure multiple guest VM instances of pfSense/software routers, "virtually" wiring the single QSFP port with the 4x SFP+ breakout cable connected as the WAN, with each pfSense VM instance having different CIDRs and connecting to different SFP+ ports of physical 10Gb switches, OR perhaps directly connected to other machines in the home lab with NICs that have open SFP+ ports.
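
If that route works, I'd expect the Proxmox side to be nothing fancier than one Linux bridge per breakout leg, with each pfSense VM's WAN vNIC attached to "its" bridge. A rough sketch of /etc/network/interfaces on the Proxmox host (the enp1s0fX names are placeholders, and again this assumes the legs actually enumerate as separate NICs):

```
# /etc/network/interfaces on the Proxmox host (hypothetical interface names)
auto vmbr1
iface vmbr1 inet manual
    bridge-ports enp1s0f0   # breakout leg 1 -> WAN bridge for pfSense VM #1
    bridge-stp off
    bridge-fd 0

auto vmbr2
iface vmbr2 inet manual
    bridge-ports enp1s0f1   # breakout leg 2 -> WAN bridge for pfSense VM #2
    bridge-stp off
    bridge-fd 0
```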

The downside to Thought #2 is that I have to use a lot of 10Gb switches and sort out how to interconnect each subnet (or not), but IMO that is a better downside than having to manage all of the network rules without a decent GUI.


But this is also a major assumption that it will actually even work with the ConnectX-3 breakout cables.


My "Big Ask" is:
Will a QSFP to 4x SFP+ breakout cable connected to the single QSFP port of a ConnectX-3 allow a networking configuration that correctly assigns traffic to 2 or more of the physically connected ports, even if they DO have different CIDRs and "spoofed" MACs from the pfSense WAN configuration so that they are unique from the rest of the segments in the network topology?

The internet is also AWESOME at telling people why they are wrong. So I am playing to this strength by testing my own weaknesses and trying to get someone to talk me out of doing either idea by explaining why it won't work, too.

I'm lowkey hoping that someone else has already found a slick and awesome method for using all 4x SFP+ connections of one of these breakout cables on the ConnectX-3 that doesn't require me to buy a managed 40Gb Mellanox switch. I'm open to that possibility, too (on the cheap), assuming there is a decent GUI to manage the interfaces, or a visual tool of some sort that can help make sense of the current firewall/port configurations set from a CLI. But from what I can tell, in only a few hours of research, dealing with "Mellanox licenses" for Web GUI management tools for somewhat obscure and EOL/unsupported networking devices doesn't seem like a great experience.

Any help or insights would be greatly appreciated!
 

kapone

Well-Known Member
May 23, 2015
I…am not sure, if you’re doing this just for fun, or do you wanna get your shit right.

A Brocade 6610 is peanuts these days, and gives you 16 10gb ports (and two 40gb ports- no breakout). It’s an enterprise grade L3 switch, and you can knock yourself out creating your subnets.

What am I missing??
 

RecursiveG

New Member
Dec 28, 2023
You may want to try the breakout cable with two 10G devices connected and see if they work. IIRC the CX3 doesn't support splitting the QSFP port like that.
 
Dec 4, 2024
I…am not sure, if you’re doing this just for fun, or do you wanna get your shit right.

A Brocade 6610 is peanuts these days, and gives you 16 10gb ports (and two 40gb ports- no breakout). It’s an enterprise grade L3 switch, and you can knock yourself out creating your subnets.

What am I missing??


It's a mix, really. Half of the network topology is "work" and the other half is "play/learn" - sucks when the play/learn brings it *all* crashing down, though. Lessons have been learned and issues mitigated. And the reason for the post is exactly that - getting my shit right(er). The hodgepodge mess of 1Gbe and 2.5Gbe switches needs to go. I'm moving on to 10Gbe+ now and I don't need all the crazy anymore (a lil bit of crazy is ok though, assuming it's for optimization).

The Brocade 6610 is a brilliant suggestion: Brocade Icx6610 40gbe for sale | eBay

I was not aware there were Brocade models this cheap with 40Gbe uplinks; that amount of 40Gbe uplink might seal the deal for me.

I'm scoping this great post out here: https://forums.servethehome.com/ind...s-cheap-powerful-10gbe-40gbe-switching.21107/

Looks like the QSFP breakout cables work on these, too - awesome! I'm definitely scouting more of these for the eBay watchlist.

Still a few questions about Brocade switches I can't seem to find (satisfactory) answers to:

- "Licenses": I see there are a lot of posts related to licenses for different "features" with both Mellanox and Brocade switches. On a scale of 1 to 10, how much of a challenge and pain-in-the-ass is it to enable 40Gbe and 10Gbe, WebUI Management, and maybe anything else I've missed because I don't have the hardware in hand to find out? - Craft Computing ran into this here -
- but I understand that there are some workarounds and other avenues that might get the job done, too (for the Savvy).

- Interoperability: Are there any insights on how well Brocade gear plays with the "offbrand" stuff, like YuanLey, STEAMEMO, Trenda, TP-Link, etc? I'm not familiar with Brocades (yet?) other than "I've heard the name". I've been rocking old/cheap 1Gbe Cisco 16+ port switches for the last few years and I'm ready to yank them, since I have plenty of open 2.5Gbe ports and/or 1Gbe switches for what 1Gbe there is left in my network. But one of the nice things about having multiple Cisco switches has been similar/consistent options in the configs that seem to play nice with each other, and so-so nice with other network switch brands, but "usually". Brocade is currently "a devil I don't know".

- Firmware Updates and "OS Version": I assume there likely are NOT any new firmware updates for Brocade switches more than 5-10 years old, as is the case with most other brands/vendors for gear this "old". So I also assume there are some CVEs that will never get patched, too. But this appears to be the last/latest firmware from 2020? Is that accurate? -> Ruckus ICX 7xxx/ICX 6xxxx Campus Switch Firmware Download | Software Downloads | Ruckus Wireless Support - OR did Broadcom bury the firmware links when they bought up Brocade (like with LSI controllers....)?

- Security: I did a quick scan for CVEs and I wasn't seeing anything specific to this model, but to the "Brocade OS" as a whole and the "ICX Series". Are there any severe "security gotchas"? I assume not providing publicly available ports to log in with is a given, but are there any super sneaky hackery hacks from Brocade Switches like there are with Cisco and other big names? I see there are a few patches available, which seems pretty generous and awesome (compared to Cisco) - Ruckus Wireless Support

- Power Supplies: How durable are the PSUs for Brocade switches, and how hard is it to find replacements? I have limited UPS/BBU ports, but my area tends to have lots of power flickers that occasionally-but-more-often-than-I-like smoke my PSUs and batteries.

- Management Interface (UI and/or CLI/SNMP): I found this tut from BrocadeCampus for configuring the WebUI, so that's covered, too. Awesome. But what's the Web UI experience like in 2025? Modern browsers tend to choke and scoff at older SSL/TLS certs. Is there at least TLS v1.2/1.3 support? Is there a janky Java-hacky workaround? Does using the older Firefox Portable v4.0.1+ or a dusty old Windows Vista VM come into play for accessing the UI without issues?

- Model differences: Are there any significant differences between the 24 and 48 port models, aside from physical ports? I don't really need even 24-ports of 1Gbe anymore, but maybe I can do some LACP to another switch for a speed boost, if it makes sense.

- LACP group limits: Most switches that I've seen that support LACP/bonds etc. have a limited number of groups and/or a limit on how many ports can be used per group, but I'm assuming "at least 4 groups" and "at least 4 ports each" - which would probably be just fine, but we all know what happens when you assume...

- NOISE, Fans and Operating Temps: I'm seeing there are a LOT of people talking about how noisy these things are. In my use case, I would probably have it in an enclosed-but-non-air-conditioned space just outside of the house (tech shed), but it seems so loud that neighbors might even gripe or it could be heard through windows. And given that I'm seeing these units already run very hot, the concern is temps in the hot months of the year. I see a bunch of wild and crazy fan mods too - https://forums.servethehome.com/index.php?threads/anyone-done-a-brocade-6610-fan-mod.24039/page-4
Is it just a specific power supply (version C) that will dramatically quiet things down?
Or which of these fan mods is the easiest and most effective? I would probably rather "buy less parts" and "make it less of a project" by hacking/cutting/drilling a fan or 3 into the lid, versus messing with a lot of soldering and Arduino etc. But are there additional pinouts on the board for the added fans? Or is there some adapter or splitter to power the added fans? I'm not seeing where these "hacked on fans" are getting powered?
 
Dec 4, 2024
Read the first post of that thread - https://forums.servethehome.com/ind...s-cheap-powerful-10gbe-40gbe-switching.21107/

Then go to @fohdeesha site (linked there) and read some more.

p.s. Forget about GUIs and start learning the CLI if you wanna play with enterprise gear.

On it. I missed that link to the licensing V2 guide, thanks for that!

Also, what tools/tricks are you using other than "memorization" for which ports are forwarded, what IPs/CIDRs are allowed through which subnets/segments, etc when you are configuring from the CLI/SNMP etc?

I tried my hand at using the CLI with Cisco stuff, and it worked fine, until I came back a few weeks later and had totally forgotten which rules went to which VM / BM boxes and other routers/switches, yadayada.

In a lot of ways, the CLI is easier and faster, but I did not find any kind of toolset to help keep all of the settings, configs, forwards, running services, etc. straight and orderly. Probably because I had WAY too many network devices on the network, but none of the open source tools I found for keeping some kind of organized, easy-to-reference list of all my network "settings" at hand really provided much help, either.

Even doing `show running-config snmp` and the like got to be kind of absurdly long in the tooth, to the point where I had to start piping snmpwalk output through | more (or | less) just to see everything, and then wire my brain to spot everything I needed to touch/tweak/remove/add, etc.

Any suggestions for such network CLI management tools? I'm not looking to become a Network Engineer, just to get better performance between devices and storage on the cheap for building POC apps, CI/CD pipelines, K8s clusters and some AI/ML experiments. That said, I'm admittedly newish to enterprise network gear (3 years, maybe? since I found some cheaper Cisco off-lease stuff), so I'm willing to learn, but not sure where to start with CLI-managed networks.
 
Dec 4, 2024
Anyone with any suggestions at all for tools that help with managing multiple switches and routers with SNMP (or whatever) from the command line?
 

richardm

Member
Sep 27, 2013
I've been told physical break-out of a 40Gb port into 4x 10Gb is a switch thing, not a NIC thing. The notable exception is certain models of the Intel 710 series.

As for getting one NIC to populate as multiple NICs on the PCIe bus inside the host (totally unrelated to the above) most 10/25/40/50/100 Gb adapters support this in one form or another. NPAR is the simplest and most straightforward; SR-IOV is a touch more sophisticated.
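
For the SR-IOV flavor on Linux, the rough shape of it is something like the sketch below (interface and module names are illustrative; ConnectX-3 / mlx4 in particular may want module options rather than the generic sysfs knob):

```
# Generic kernel interface: ask the driver to create 4 virtual functions
# (the interface name here is a placeholder)
echo 4 > /sys/class/net/enp65s0/device/sriov_numvfs

# ConnectX-3 (mlx4) has historically been configured via module options
# instead, e.g. in /etc/modprobe.d/mlx4.conf:
#   options mlx4_core num_vfs=4 probe_vf=4 port_type_array=2,2

# The VFs then show up as additional PCIe devices/NICs that can be passed
# through to VMs (pfSense guests, etc.) individually
lspci | grep -i mellanox
```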

I'm in the same boat -- In my lab I had deployed 10Gb over copper about six months ago. Some 2-3 weeks ago I went to 25Gb fiber. I don't want the heat/noise/power consumption of an enterprise switch so I'm looking for the smoothest and simplest way to maintain connectivity between the 25Gb lab hosts, their VMs, and the rest of the household on 1Gb (or WiFi). Should I route it at layer 3? Bridge it at layer 2? Stuff the lab junk behind a NAT? Many possibilities here...
 
Dec 4, 2024
mellanox NICs do not support qsfp breakout. also craft computing is ****ing clueless
I've watched a few other videos of his and I came to the same conclusion with the Mellanox 40Gbe fail. o_O Respect for him trying to learn, but clearly he did not find your AMAZING web page that would have saved him from posting a fail video. I can appreciate his humility and honesty, though. It helps to "feel the sting" of mistakes others have made, so it REALLY sinks in (at least for me).

I'm here on STH after lurking around and stumbling across STH posts for a LOT of the stuff I've been tweaking and learning over the last few years, trying to get my knowledge and experience up. It's not easy, but I'm definitely in the right place. But yeah, I cringed at some of the other comments he and some others make during setup / explanation.

I didn't consider things like noise, power draw, temps, transceiver module compatibility (which may end up being an issue with my QSFP+-to-4x breakout cables...), "license-only features" (re: iDRAC), "branded features" (re: RSTP) until I got an R710 PE and a Cisco Catalyst switch. I've been building PCs and smaller networks since the Intel Spacesuit days, but enterprise grade gear has a lot more caveats. I'm guessing a lot of other "Craft-like" folks have gotten humbled and embarrassed too. I sure as shit have... I guess the difference is that I'm willing to approach it in a "maybe I'm actually ****ing clueless" way, so I might actually learn something (especially from the good advice and/or mistakes of others who want to help with their knowledge and shared experiences).

TL;DR - Y'all are on point at STH. (Particularly Fohdeesha. THANK YOU for all that work on the PERCs and Brocade breakdowns. AWESOME!)

Back to the "simplified homelab" project, though: I have a bit of a mess with my homelab right now as I try to consolidate everything and start a purge / sell off, so I haven't tried that whacky idea of setting up multiple psSense/OPNsense VMs on the 40Gbe QSFP+ port with the breakout cable, but from some other posts I've read in here it probably won't work.

That said... 1x port of the "40Gbe QSFP-to-4x 10Gbe DAC breakout" actually works very well for 10Gb to the ConnectX-3, but that's probably the best I can do with it until I make a decision on a Brocade switch.

Oddly (or perhaps not?), I found the breakout cables at nearly the same cost as 40Gb QSFP-to-10Gbe transceivers + DAC cables, so I'm not stinging too badly from my "Craft-like assumptions", but as I am trying to simplify my setup, having giant wads of unconnected DACs isn't bringing me any joy... But I do think having a single switch to handle all things "in the tech shed" is the way to go for me.

I would also like to take this opportunity to mention my appreciation for the HILARIOUS "Brocade Beef" banner - I still chuckle every time I see it.

I've ruled out Mellanox switches (for the moment?) since the noise/thermals/power draw seem astoundingly higher than some of the Brocade options I'm narrowing it down to, and most of the 56/40Gbe Mellanox switches I've seen available seem to be ONLY equipped with QSFP ports. What I'm loving about the Brocades (which I hadn't even looked at before, until this month) is that they have every type of port/speed I want/need "all in one".

So... I'm currently looking at the ICX-6610.

- The Brocade ICX-6610 fits my needs AND wants for 40Gbe - It's currently at the top of my list - 48p versions are abundant, as many probably already know. I don't (or more aptly => "won't") need even 24p after this "great HL reduction" is over with, but given the price-per-performance and potential resale values as "home networks" start seeing more 10Gbe in the future, the ICX-6610 makes a TON of sense. I (clearly) already have the breakout cable(s) and buckets of cheap/older 10+Gbe NICs as well as a 40Gbe Mellanox "NIC", so this fits well with my current "inventory".

But, I already have QSFP to QSFP 40Gbe cables and the QSFP to 4x 10Gbe DAC breakout cables that (I think?) are "Cisco-chipped" - is this going to be a show stopper for the Brocade, since the "DAC ends" are chipped for Cisco and not Brocade? (I hope not... but...?)

Also, I have not yet had to go through the "licensing process" for enabling features in a Switch, but the guides on STH here (and the linked sites) seem AWESOME.

I'm not sure what to expect just yet, aside from some CLI action.
*cracks knuckles in preparation to "tickle the ivories"*

So what might I need to get started with setting up the breakout cables for 10Gbe once the ICX-6610 arrives?
 

kapone

Well-Known Member
May 23, 2015
I've ruled out Mellanox switches (for the moment?) since the noise/thermals/power draw seem astoundingly higher than some of the Brocade options I'm narrowing it down to, and most of the 56/40Gbe Mellanox switches I've seen available seem to be ONLY equipped with QSFP ports. What I'm loving about the Brocades (which I hadn't even looked at before, until this month) is that they have every type of port/speed I want/need "all in one".
You've got more reading to do...

Yes, the Brocades have 10g SFP ports as well as 1gbase-T ports that the Mellanox switches don't have. If that's needed, then it's needed. "noise/thermals/power draw" - uhh... :) I know the Brocades inside out, had been running them for years, and recently switched to Mellanox. The Mellanox SX6036 (with 36x 40/56gb ports that can be broken out into many different 10/40/56g configurations) has software control of fans... at ~16-20% PWM, they're perfectly acceptable unless you're sitting next to them (even then they're not obnoxious), and it consumes 35w... at idle (vs the ~80w for the ICX6610-24 non-P; the 48 idles ~100w and the P versions go up from there).

But, I already have QSFP to QSFP 40Gbe cables and the QSFP to 4x 10Gbe DAC breakout cables that (I think?) are "Cisco-chipped" - is this going to be a show stopper for the Brocade, since the "DAC ends" are chipped for Cisco and not Brocade? (I hope not... but...?)
The Brocade will take almost any DACs/Optics.

So what might I need to get started with setting up the breakout cables for 10Gbe once the ICX-6610 arrives?
Coffee. Lots of Coffee...
 
Dec 4, 2024

Ah INTERESTING! So you've switched *TO* Mellanox Switches? (try asking that 5 times fast)

Was it primarily just for the power savings or did you have some specific featureset that you wanted from Mellanox? I ask because after counting up the 1Gbe ports I'll actually need/have to use after the "great reduction", I could actually "get by" with the 2.5Gbe ports I have on my already low-powered 10Gbe capable switch. It feels like a "waste" to use 1Gbe NICs on 2.5Gbe capable ports, but so does having a !@#%-ton of empty/open >1Gbe ports... But it's the lack of SFP+ ports on the 40+Gbe QSFP equipped switches that troubles me most. My U2 server chassis is equipped with 10Gbe RJ45 ports already, but I have spotted a few 2-port Mellanox-friendly QSFP 40Gbe port "add-ons" that are reasonably priced. I'd still end up with at least 1 or 2 extra 10Gbe SFP+ capable switches in my final result if I did go for the Mellanox variety of QSFP-only switches, but if there's a justification for features or future proofing, I'd love to hear it before I click the Buy button on the Brocade this week :D

Also, YES. Much more reading + Coffee is key to success :p
 

kapone

Well-Known Member
May 23, 2015
I switched to the SX6036, because I needed more 10/40g ports than what the 6610 could provide. I have it paired with a Brocade 6450-24 (with a 2x10g dynamic LACP connection between them), because I still needed a "few" 1gbase-T ports. The combined stack still idles at less than 60w, and now gives me more than enough 1/10/40/56g ports.

And..ConnectX-3 Pro nics are dirt cheap...can run ROCEv2 all day long with that switch.

There's a bit of learning curve to the Mellanox switches (the CLI is slightly different...for the licensing, there's a thread right here on STH that you have to read very very carefully...I mean it, or you'll miss it :) ). I mean, I setup all the VLANs, routes, etc etc and deployed the two switches, and my VLANs wouldn't talk to each other. Had never happened before, and I've been doing L3 switches for a while.

Apparently you actually have to issue a command (ip route) for it to start L3 routing. :) Threw me for a loop. The other thing is the control plane. It's slow...like slow...you get used to it. Kinda.

But they take almost all DACs/Optics as well, power consumption and noise is low and the port density is higher. What's not to like?

Edit: Port breakout possibilities with the SX6036.

 
Dec 4, 2024
This is EXACTLY why I asked. GREAT synopsis and insights! I had no idea there were these caveats, but knowing now helps me better plan and consider future options. I don't really NEED 40Gbe, but I was assuming I could use the breakout cables (as in: all 4 ports, not just 1 at a time) for multiple 10Gbe connections to different Machines / Switches in my lab, and now I find myself trying to do mental gymnastics to justify the purchase of the 40Gbe QSFP to 4x 10Gbe SFP+ BOs by finding a switch that fits my situation, without making my Ebills any higher than they already have been (it's gotten a bit out of control). But I did the math, and I've only spent about $25-$50 "more than I would have" if I had just gone with all 10Gbe NICs and DACs instead of making some incorrect assumptions about the config (ah..., to be a n00b again, rite?).

TL;DR: Your pro-Mellanox argument is rather compelling. I'll mull on it for sure! But I think the wheels have already been spinning towards a Brocade 6610, since that fits the physical space situation I'm working with and aligns with "just having less Sh!t plugged into power strips and UPSs". But I did think the exact same thing about how insanely cheap the ConnectX-3's are right now, so I snagged 2, and I also ended up landing a decent deal on a "20/20" ConnectX-4 - all in on Mellanox NICs, cables and transceivers I'm probably still in the ~$200-$250 range. They are pretty freakin sweet! I'm not even scratching the surface of their full powers either. Hence why I'm asking the veterans, to save myself from further misgivings and get performance / CBA insights ;)

Where are "y'all" getting all of your QSFP Breakout Cables from? I find that once I have a "sample that works well" and a supply I can keep going back to, things tend to work out best. Like buying random brands of random quality, lengths, sizes, etc got me where I am today with trying to "reduce". So getting these cables from either a consistent brand or source sounds great.

And have you (or anyone reading) ever tried using a QSFP-to-SFP+ adapter/transceiver on a 40Gbe capable Mellanox NIC port?
Any caveats?

Or even more fun, a 40Gbe QSFP-to-SFP+ adapter/transceiver to a 10Gbe SFP+-to-RJ45 transceiver? I have a use case where I considered doing something kind of bonkers like that because I don't want to have to buy a bunch of fiber line, fittings and tools over using flat and/or "outdoor rated" CAT7+ - But my luck with getting that wild and creative is only about 50/50 (which I legit think is pretty good :cool: ), but I haven't been in the 10Gbe+ game long enough to know the ins and outs yet.

I was also curious if I'm weird for trying to slice up my network segments into multiple /20 or /21 CIDRs instead of using 10.x.0.0/16's
- Basically I'm just wondering: "How are Y'all doin' your subnetting" on such high speed homelab networks?
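
(Just so it's clear what I mean, slicing one 10.x /16 into /20s instead of burning a whole /16 per segment would look roughly like the example below - the addresses and labels are purely illustrative.)

```
# Example: splitting 10.10.0.0/16 into /20 segments (4,094 usable hosts each)
#   10.10.0.0/20   -> 10.10.0.1  - 10.10.15.254   (e.g. servers/storage)
#   10.10.16.0/20  -> 10.10.16.1 - 10.10.31.254   (e.g. VMs/lab)
#   10.10.32.0/20  -> 10.10.32.1 - 10.10.47.254   (e.g. trusted LAN/WiFi)
#   10.10.48.0/20  -> 10.10.48.1 - 10.10.63.254   (e.g. IoT/guest)
# Sanity-check any block with ipcalc (if installed):
ipcalc 10.10.16.0/20
```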

And going back to the ConnectX NIC fam, what's the general experience with QSFP to "Dual" 20Gbe SFP+ Breakout cables? I have not seen many of them around (yet?), but in practice, does it even make sense for most home labs? I ended up getting the one solitary ConnectX-4 mostly for the physical SFP+ interface already being there, without dealing with adapters/transceivers etc., plus the extra VDI powers and better power consumption (supposedly, we will soon find out!). --- But with a homelab situation where there is already a 40Gbe switch in play... going out of the way and buying extra cables, etc etc... does having a few 20Gbe links even make sense in a network that's not exactly getting strained or constricted by local network traffic of <100 devices? What's the use case for 20Gbe segments with Mellanox/Brocade/etc gear (HomeLab or otherwise)? I'm sure there are some, which is probably why the "in between" products created for that >10Gbe to <=56Gbe market exist, but I'm curious if there is a more explicit purpose, other than just "MOARSpEEEdd! on the way up to 100Gbe".
 

kapone

Well-Known Member
May 23, 2015
I was assuming I could use the breakout cables (as in: all 4 ports, not just 1 at a time) for multiple 10Gbe connections to different Machines / Switches in my lab
Only two of the rear 40gb ports on the 6610 can be broken out into 4x10gb. The other two are fixed at 40gb. So, essentially you can have a max of 16x10gb, 2x40gb and however many 1g ports, depending on your model.

a decent deal on a "20/20" ConnectX-4 -
I have no idea what that is. CX4s are 1/10/25/40/50gb if that's what you meant.

Where are "y'all" getting all of your QSFP Breakout Cables from?
Fleabay. I've averaged ~$12/cable over time. Sometimes a bit more, sometimes a bit less.

have you (or anyone reading) ever tried using a QSFP-to-SFP+ adapter/transceiver on a 40Gbe capable Mellanox NIC port?
Yes. Works great.

Or even more fun, a 40Gbe QSFP-to-SFP+ adapter/transceiver to a 10Gbe SFP+-to-RJ45 transceiver?
Yes, works great.

How are Y'all doin' your subnetting
There's no set way. Pick your address space and fire away. I use a 10.10.x.x space.

"QSFP to "Dual" 20Gbe SFP+ Breakout cables?
Never seen those. Doesn't mean they may not exist, but no experience there.

a few 20Gbe links even make sense
Again, CX4s do 1/10/25/40/50gb. Never seen 20gb.
 
Dec 4, 2024

I did totally mean 25/25 - I've been messing with an old Emulex this week and slipped. I would attempt to open the can of 100Gbe QSFP to 4x 25Gb SFP+'s next, but I'm not sure how long it will be before I ever get there. I'm still laughing maniacally at my newfound 10Gbe powers. I'm loving that most of my lab is back to being bottlenecked by Disk Speeds (as nature intended).

Also good, because I just ordered 2 of them thar fancy 40Gbe QSFP-to-SFP+ transceivers and might have one use case where I add one to a QSFP port on a CX3 NIC and the other to the Brocade that's supposedly/hopefully on the way.

My brain is still stewing and marinating on exactly how I'm going to do the "vRouter", but I'm thinking 40Gbe WAN "in from one QSFP DAC" connected to the switch, and however-many-X 10Gb on LAN/OPT1 (?OPT...n?) back "out" as a "Core/Gateway/main/master router for all things virtualized", and break things up with other vRouters and a mix of physical and vSwitches that isolate different subnets (some with a DHCP range, some without). Part of my intent for cleaning things up and reducing complexities in topology was to get back to trying to have at least 2 or 3 proper VLAN segments. I've dabbled with VLANs a few times, but my topology was a trainwreck, and I was using at least two (or more) vNICs in my guest VMs, thinking it was good for redundancy, but ultimately it kept me accidentally creating loops that were often hard to troubleshoot or would only explode intermittently (re: when my physical switches started auto-rebalancing which ports and weights got which traffic averages). I'm legit hoping to get away from the old Cisco gear I have and try to have less than a handful of network devices to manage, but I guess I'll figure that out soon once the bigger boxes start showing up. I'm not convinced VLANs are a magic bullet for setting up a properly-secured-and-still-functional management plane, though.

Are there any particular tools that help you keep things straight in multi-segmented and double/multi-NATed network setups? Maybe a good jumping-off point for where to start learning how Brocade does things like DNS/DHCP/VLANs/VPNs/Firewalls, etc?
 

kapone

Well-Known Member
May 23, 2015
Brocade does things like DNS/DHCP/VLANs/VPNs/Firewalls
It doesn't do any of those except DHCP, VLANs, routing (and PBRs, VRFs, etc...).

My .02: You're making this more complex than it needs to be. There are a few principles that keep me sane.

1. Never virtualize your firewall, primary storage or standalone router (not the L3 routing we're talking here).
2. You can go wild with VLANs, but keep asking yourself this. Do I really need to segment these devices off?
3. Use Figma/Mural/Miro/Visio whatever, and keep a current network diagram handy. Keyword: current, i.e you make changes to your network, you update your diagram.
4. Script/Automate as much as you can.
5. Look at 1-4 again.

:)
 

kapone

Well-Known Member
May 23, 2015
p.s. Just to give you an idea...40g..

"Hey Boss..maybe we should have ordered fiber instead of DAC cables..."

[photos of the 40Gb DAC cable bundles]

Oh wait...I AM the boss... :(

The 40g DAC cables are thick...and heavy...still got 8 more to go...Those 36 ports on the SX6036 are gonna get a workout...
 

klui

༺༻
Feb 3, 2019
AOCs would be better for that use case. I have some 100G DACs that are that thick and some newer ones are maybe 60% the thickness.