This is my first post, but I've stumbled upon some absolute gems of info here and thought I'd give posting a nuanced question a try. I'm still getting a feel for the etiquette here, so apologies in advance, and corrections are welcome.
Since I have some nuances to my home lab network (don't we all?) I'm going to do a TL;DR here, and then get into the gory details for my specific circumstances, below.
TL;DR version:
Will a QSFP to 4x SFP+ breakout cable plugged into the single QSFP port of a ConnectX-3 allow a networking configuration that correctly assigns traffic to 2 or more of the physically connected ports, even if each one DOES have a different CIDR and a "spoofed" MAC from the pfSense WAN configuration so that it is unique among the rest of the segments in the network topology? (For example, 2 or more instances of pfSense, each using one leg of the 40Gb breakout cable as its WAN.)
I will try to give enough detail and backstory for my stated goals, below:
I keep returning to this post and comment in my research for how this might be possible. I find it an interesting idea for how to potentially make QSFP to 4x SFP+ breakout cables "act" as 4 different connections.
I had a few thoughts on possibilities for getting this to work, too, but the folks on the STH forum seem far more knowledgeable about Linux networking than I am. I am not committed to the one or two possibilities for "getting it to work" that I have in mind; I just want to streamline and reduce *some* of the complexity that has accumulated over the last few years of learning more about "commercial grade" hardware and networking.
In the linked comment and post above, from a few years ago, I interpreted what was being suggested as "4 virtualized NICs for each port" - similar to the concepts from this 2020 StackExchange question: How is it possible to get multiple virtual network interfaces with only one physical network interface?
I envisioned the configuration for getting something like this to work on the "hosting network machine" (which may or may not be virtualized) looking something like this (from Creating multiple virtual adapters over a single physical NIC):
```
# /etc/network/interfaces
# ifupdown alias interfaces (eth0:1, eth0:2, ...) stacked on a single physical NIC
auto eth0:1
iface eth0:1 inet static
address xxx.xxx.xxx.xxx
netmask xxx.xxx.xxx.0
network xxx.xxx.xxx.0
broadcast xxx.xxx.xxx.xxx
auto eth0:2
iface eth0:2 inet static
address xxx.xxx.xxx.xxx
netmask xxx.xxx.xxx.0
network xxx.xxx.xxx.0
broadcast xxx.xxx.xxx.xxx
auto eth0:3
iface eth0:3 inet static
address xxx.xxx.xxx.xxx
netmask xxx.xxx.xxx.0
network xxx.xxx.xxx.0
broadcast xxx.xxx.xxx.xxx
```
Is that "where it was going" with that comment/thread? Or am I way off base?
My 10Gb network goals do not currently include 1+Gb Internet, since that's just NOT an option at my location (but I wish it were!). So the 10Gb network is more for faster backups, better RDP and VNC quality, and faster/easier migration of VMs.
So I will reiterate that my primary goal is to replace the ridiculous number of routers and switches I've been collecting over the last 5+ years to solve different problems, and streamline my entire network topology into 3 to 5 subnets with as few routers and switches as possible.
As long as each PC/Server that is 10Gb capable has at least one connection to a 10Gb switch (or perhaps directly connected 10Gb ports between PCs, where it might make sense), I will consider it "done". And, if we ever manage to get 2.5Gb+ Internet available in my area, I'll only need to change one connection.
At one point, before I started to get rid of some things, I counted up all the (mostly wireless, but some wired-only) routers I had collected over time, and it was "over the count of 10". The count of switches was actually even higher. To be fair, I did have more PCs back then, but still, the amount of "stuff" in my homelab was starting to look like the dorm rooms of Larry Page and Sergey Brin. I don't need all that anymore (did I ever?), since I've backed up and migrated a lot of the hardware-centric tasks to VMs.
I would like to continue this trend by moving the remnants of 1Gb networking onto just one wireless router and one 1Gb switch, ditching all the crazy dual-WAN, bonded, LACP, etc. connections and simplifying to fewer, faster, and more efficient subnets.
I plan to use what I have dubbed the "Router Box" - just a moderately powered mATX PC - to replace / upgrade / simplify the network topology (or at least make it simpler than it is today). I am already using the "Router Box" with a Mellanox ConnectX-3 40Gb QSFP and the 4x 10Gb SFP+ breakout cable, but with only 1 of the 4 connectors. It works fine like that, even though I know these cables are designed for 40Gb Mellanox switches and not the ConnectX-3 adapters... but...
The above-mentioned comment and thread got me thinking about what the best bare-metal configuration is for "The Router Box". Some thoughts and ideas I'm trying to get any one of you to talk me out of:
- Ubuntu/Debian + VMWare Workstation
-- maybe Debian or Fedora with libvirt/virsh instead?
- Maybe just install Proxmox or Xen/XCP-ng with a few pfSense (or whatever software router) instances in VMs to provide (up to) 4x 10Gb subnets for the network?
I currently have 3 "cheap off-brand switches" that are 10Gb capable, with a total of 4x 10Gb SFP+ ports open (6, minus the 2 ports consumed by the link between the 2 switches that are already connected to each other). I *COULD* break down and buy a somewhat pricey managed switch with 8 (or more) 10Gb SFP+ ports, instead of using 2 unmanaged and one managed, all 3 with 4x 2.5Gb (RJ45) + 2x 10Gb (SFP+) port configurations. But then I would have to run additional CAT7 lines through the "home that is becoming a lab, instead", and that is the sort of thing I'm trying to avoid; a single 10Gb line in/out of each room is ideal.
I really only need 2 or 3 of the subnets to be 10Gb capable, and the rest at 2.5Gb or 1Gb would be fine. But if I could sort out 3 or more, that would be amazing.
I have a 2x 25Gb SFP28-port Mellanox ConnectX-4 card for the "big server", which sits in the same room as "The Router Box", so I'm thinking maybe 1 or 2 direct connections from the 4x SFP+ breakout cable as 1 or 2 10Gb subnets. If there is a way to make it work, I would surely eventually attempt some kind of bond or LACP for 20Gb, but I would need to "feel like 10Gb is too slow" first; a rough sketch of that is below.
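If I ever do get to that point, my understanding is that the bond would look roughly like this in ifupdown terms (the interface names and address are made up purely for illustration, the ifenslave package would need to be installed, and an 802.3ad/LACP bond also needs matching configuration on the other end):
```
# Hypothetical 802.3ad bond of two breakout legs; names and address are placeholders.
auto bond0
iface bond0 inet static
    address 10.3.0.1
    netmask 255.255.0.0
    bond-slaves sfp2 sfp3
    bond-mode 802.3ad
    bond-miimon 100
    bond-lacp-rate fast
```
Even then, as I understand it, a single flow still tops out at one link's speed; the bond mainly helps with multiple parallel streams.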
I already have a second dual-port 40Gb Mellanox ConnectX-3 with a QSFP-to-QSFP cable for a "NAS Box", too - I consolidated most of my storage into a single rig, for the reasons mentioned above.
But I haven't experimented much with the 40Gb QSFP-to-QSFP link yet, because it's tedious to physically access the "server shed" (outside - to keep the noise down). I want to first test whether I can get 4 connections out of the QSFP to 4x SFP+ breakout cable before deciding whether buying an Ethernet-capable Mellanox 40Gb switch is the better route to take.
So... sorry for the length and all the details, but they felt relevant, as I see a lot of "why don't you just..." or "why are you doing it that way" responses on other parts of the internet. Hopefully there is enough context here to understand that there are multiple rooms involved, each with unique challenges.
I'm posting to ask the experts for some feedback and/or better ideas on how to achieve these goals.
I currently have 2 thoughts on how this might work without buying more and more networking hardware (again...):
Thought #1:
Run Linux on bare metal on "The Router Box", using the onboard NIC for Internet access and the single QSFP port with the 4x SFP+ breakout cable, with an individually assigned CIDR per leg; something like:
- 10.1.0.1/16
- 10.2.0.1/16
- 10.3.0.1/16
- 10.4.0.1/16
This guide does a decent job of capturing what I was thinking for the "Linux router configuration": Configure Linux as a Router (IP Forwarding). A rough sketch of what I have in mind is below.
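To make "Thought #1" a bit more concrete, here is roughly what I picture, assuming (and it's a big assumption) that the four breakout legs ever show up as separate interfaces; the sfp0-sfp3 and eno1 names are made up purely for illustration:
```
# Enable routing between the segments (persist via /etc/sysctl.d/ for real use)
sysctl -w net.ipv4.ip_forward=1

# Give each (hypothetical) breakout leg its own /16 gateway address
ip addr add 10.1.0.1/16 dev sfp0
ip addr add 10.2.0.1/16 dev sfp1
ip addr add 10.3.0.1/16 dev sfp2
ip addr add 10.4.0.1/16 dev sfp3

# NAT everything out the onboard NIC that faces the ISP ("eno1" is a placeholder)
nft add table ip nat
nft add chain ip nat postrouting '{ type nat hook postrouting priority 100 ; }'
nft add rule ip nat postrouting oifname "eno1" masquerade
```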
The upside of "Thought #1" is that it leaves me the option to natively configure things like DNS, NFS, a collection of Docker containers, and multiple different hypervisors (libvirt/virsh/virt-manager, VirtualBox, VMware Workstation, etc.) until I run out of CPU or RAM, resources which I can easily "partition" or slice between the hypervisors and their guest VMs.
--------
Even though there are benefits to having natively installed software services on Linux, managing firewalls with bare-metal Linux networking kind of sucks, and it's a lot to "keep straight" in memory, especially after a while. I haven't found a great tool for keeping track of network rules with an easy interface for making tweaks and changes, like the ones baked into pfSense or OpenWRT/DD-WRT, etc.
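To illustrate what I mean by "a lot to keep straight": even a toy inter-subnet policy in nftables ends up looking something like this (subnets reused from above, rules purely illustrative), and it only grows from there, with nothing like the pfSense rules page to show the whole picture at a glance:
```
#!/usr/sbin/nft -f
# Purely illustrative inter-subnet policy, not an actual ruleset.
table inet filter {
    chain forward {
        type filter hook forward priority 0; policy drop;

        ct state established,related accept

        # Lab subnet may reach the storage subnet, but not the "trusted" one
        ip saddr 10.1.0.0/16 ip daddr 10.3.0.0/16 accept
        ip saddr 10.1.0.0/16 ip daddr 10.2.0.0/16 drop

        # Everything may reach the internet via the WAN NIC (placeholder name)
        oifname "eno1" accept
    }
}
```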
So I am leaning more towards "Thought #2":
Run a hypervisor on bare metal (most likely Proxmox) and configure multiple guest VM instances of pfSense (or another software router), "virtually" wiring in the single QSFP port with the 4x SFP+ breakout cable as the WAN, with each pfSense VM instance having a different CIDR and connecting to a different SFP+ port on the physical 10Gb switches, OR perhaps directly to other machines in the home lab whose NICs have open SFP+ ports. A rough sketch of the bridge layout I'm imagining is below.
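In Proxmox terms, I imagine that as one vmbr per breakout leg, with each pfSense VM getting a virtio NIC on "its" bridge. A sketch of the /etc/network/interfaces stanzas I have in mind, assuming (again, the big "if") that the legs enumerate as separate devices; the enp1s0* names are placeholders:
```
# Hypothetical Proxmox bridge layout; the physical interface names are placeholders
# and assume the breakout legs actually show up as separate devices.
auto vmbr1
iface vmbr1 inet manual
    bridge-ports enp1s0
    bridge-stp off
    bridge-fd 0

auto vmbr2
iface vmbr2 inet manual
    bridge-ports enp1s0d1
    bridge-stp off
    bridge-fd 0
```
Each pfSense VM would then attach to one of these bridges for its own segment.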
The downside to Thought #2 is that I have to use a lot of 10Gb switches and sort out how (or whether) to interconnect each subnet, but in my opinion that is a better downside than having to manage all of the network rules without a decent GUI.
But this all rests on the major assumption that the ConnectX-3 will actually even work with the breakout cable this way.
My "Big Ask" is:
Will a QSFP to 4x SFP+ breakout cable plugged into the single QSFP port of a ConnectX-3 allow a networking configuration that correctly assigns traffic to 2 or more of the physically connected ports, even if each one DOES have a different CIDR and a "spoofed" MAC from the pfSense WAN configuration so that it is unique among the rest of the segments in the network topology?
The internet is also AWESOME at telling people why they are wrong. So I am playing to this strength by testing my own weaknesses and trying to get someone to talk me out of doing either idea by explaining why it won't work, too.
I'm lowkey hoping that someone else has already found a slick and awesome method of using all 4 SFP+ connections on one of these breakout cables with the ConnectX-3 that doesn't require buying a managed 40Gb Mellanox switch. I'm open to that possibility, too (on the cheap), assuming there is a decent GUI to manage the interfaces, or a visual tool of some sort that can help make sense of the current firewall/port configuration set from a CLI. But from what I can tell, after only a few hours of research, dealing with "Mellanox licenses" for web GUI management tools on somewhat obscure, EOL, unsupported networking devices doesn't seem like a great experience.
Any help or insights would be greatly appreciated!