How would you recommend I proceed w/ 2 networks, one NIC system?

AveryFreeman

consummate homelabber
Mar 17, 2017
Near Seattle
averyfreeman.com
Hey,

So I've got this board I was planning to use mostly as a router/firewall w/ OPNsense

It's fancy enough that I could put ESXi on it and run a couple other things, but not much - I was thinking a couple VMs that idle most of the time, like the UniFi controller and VCSA

It only has 1 pcie slot (x16). I've got two NICs I could use:

Mellanox CX-3 40GbE w/ 2 ports
82599 (x520) 10GbE w/ 1 port

My storage-only subnet @ 40GbE is 172.16.1.0/24
My LAN subnet @ 10GbE is 192.168.1.0/24

I have 3 hosts in addition to this new router box I'm adding

The storage network is connected directly between 3 current machines in a triangle pattern: 1 - 2, 1 - 3, 2 - 3
The LAN uses a switch with 4 10Gb ports, so I have one open port left

Ideally, I'd like to use the 40GbE dual nic and be connected to both networks, but there's two questions I have about how to do it:

1) VCSA assigns an IP per uplink, so would I just attach one 40Gb port to the storage network and one to the LAN?
2) How would I need to change the wiring pattern to incorporate a 4th host to a directly connected network?

The second question is the one that has me most perplexed. I don't intend to ever get more hosts, so I'd rather not have to buy a switch (plus 40Gb switches are expensive and loud af). But I really have no idea what kind of wiring pattern I could do between 4 hosts without a switch - especially when one host would only have 1 NIC for that network.
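For anyone wondering why the triangle stops working at 4 hosts, the port math is easy to sketch (this is just the standard full-mesh formula, not anything specific to my hardware):

```python
# Why a 4th host breaks the directly-wired mesh: a full mesh of n hosts
# needs n-1 ports per host and n*(n-1)/2 cables total.

def mesh_ports_per_host(n: int) -> int:
    """Ports each host needs to link directly to every other host."""
    return n - 1

def mesh_total_links(n: int) -> int:
    """Total point-to-point cables in a full mesh of n hosts."""
    return n * (n - 1) // 2

# 3 hosts: 2 ports each, 3 cables -> the existing triangle works
print(mesh_ports_per_host(3), mesh_total_links(3))  # 2 3
# 4 hosts: 3 ports each, 6 cables -> a dual-port NIC is one port short
print(mesh_ports_per_host(4), mesh_total_links(4))  # 3 6
```

So with dual-port NICs everywhere, 4 hosts can only form a ring, never a full mesh.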

Thanks
 

Blinky 42

Active Member
Aug 6, 2015
PA, USA
Do you have one server acting as storage for the others, or is storage shared across all 4 of them? If you added another 40G port to the server that your new box is going to be hitting most often, it could also act as your fake switch: just add the new port to the bridge with the existing 2 ports in that system.
 

AveryFreeman

consummate homelabber
Mar 17, 2017
Near Seattle
averyfreeman.com
Do you have one server acting as storage for the others, or sharing storage on all 4 of them? If you added another 40G port to the server that your new box is going to be hitting most often then it can also work as your fake-switch by adding the new port to the bridge with the existing 2 in that system.
Yeah, that box only has 1 PCIe slot, so it's either 10GbE on LAN or 40GbE on storage with only 1 link - or I could go with your recommendation and use both 40GbE ports, but then not have 10GbE on LAN.

I think there's some kind of "star" wiring pattern you can do where not all the hosts are connected 1-1 in a 4-host setup, which is kind of what I was trying to get at, but I am not sure how to do it

One thought I had was to run HA copies of VyOS on all the hosts *except* the one with 1 slot, which could forward the 1 link from the crippled router box to the others in a kind of ring pattern

E.g.

Router box single link 172.16.1.10 <--> Host1 172.16.1.11 <-> Host2 172.16.1.12 <-> Host3 172.16.1.13 <-> (??)

I think I am just going to leave it as a router-only box with the Intel 82599 and call it a day. Maybe stick VCSA on it since it idles all the time.

Good thing is, I've got it up to 940Mbps download on Comcast residential using OPNsense w/ the Netflix RACK kernel mod (!) ;)
 

Blinky 42

Active Member
Aug 6, 2015
PA, USA
If your 10G is DAC you could use a dual port 40G card and a QSFP to SFP+ adapter. The few 10Gbit copper SFP+ modules I tried didn't want to play in a QSFP->SFP+ adapter, but I didn't test many combinations.
No fully connected topology exists with 4 nodes and only 2 links per node - a fully connected mesh needs n-1 ports on every node.
With only 2 ports per node you would basically need to run a routing protocol and pass traffic through adjacent nodes. The routing can of course be fixed, with something like a pattern where traffic prefers one direction around the ring for indirect hops:
1:2 direct
1:3 1->2->3
1:4 direct
2:3 direct
2:4 2->3->4
2:1 direct
3:4 direct
3:1 3->4->1
3:2 direct
4:1 direct
4:2 4->1->2
4:3 direct

It can scale up to larger rings, but you have lots of points of failure, and bandwidth competes between all the links in a path.
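The fixed routing table above boils down to one rule per node: deliver directly to a ring neighbor, otherwise forward clockwise. A minimal sketch of that rule (node numbering 1..n as in the table; the function names are just mine for illustration):

```python
# Fixed "prefer one direction around the ring" routing for nodes 1..n
# wired in a ring: 1-2, 2-3, ..., n-1.

def next_hop(cur: int, dst: int, n: int = 4) -> int:
    """Deliver directly if dst is a ring neighbor, else forward clockwise."""
    cw = cur % n + 1           # clockwise neighbor
    ccw = (cur - 2) % n + 1    # counter-clockwise neighbor
    return dst if dst in (cw, ccw) else cw

def path(src: int, dst: int, n: int = 4) -> list[int]:
    """Full hop-by-hop path a packet takes from src to dst."""
    hops = [src]
    while hops[-1] != dst:
        hops.append(next_hop(hops[-1], dst, n))
    return hops

print(path(1, 3))  # [1, 2, 3] - the 1:3 indirect hop from the table
print(path(4, 2))  # [4, 1, 2]
print(path(2, 1))  # [2, 1]    - neighbors go direct
```

In practice you'd express this as static routes on each node rather than code, but the hop pattern is the same.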
 

AveryFreeman

consummate homelabber
Mar 17, 2017
Near Seattle
averyfreeman.com
If your 10G is DAC you could use a dual port 40G card and a QSFP to SFP+ adapter. The few 10Gbit copper SFP+ modules I tried didn't want to play in a QSFP->SFP+ adapter, but I didn't test many combinations.
No fully connected topology exists with 4 nodes and only 2 links per node - a fully connected mesh needs n-1 ports on every node.
With only 2 ports per node you would basically need to run a routing protocol and pass traffic through adjacent nodes. The routing can of course be fixed, with something like a pattern where traffic prefers one direction around the ring for indirect hops:
1:2 direct
1:3 1->2->3
1:4 direct
2:3 direct
2:4 2->3->4
2:1 direct
3:4 direct
3:1 3->4->1
3:2 direct
4:1 direct
4:2 4->1->2
4:3 direct

It can scale up to larger rings, but you have lots of points of failure, and bandwidth competes between all the links in a path.
Hm. Yeah, while possible, sounds like it's just not a good idea. I'll stick to 3 until I get a switch. It should be fine. Thanks for laying that all out for me.

I'm running into other issues, too. FreeBSD w/ the 82599 is giving me some kind of no-carrier bug. The exact same card w/ an SFP+ DAC worked fine under ESXi. Paradoxically, I guess I could run ESXi for better driver support. But I'm starting to think I should just keep the system strictly as a router/firewall.

Do you think there'd be any benefit to putting the dual 40GbE card in it and using it as a router for both the LAN and storage networks, instead of using a VM? Perhaps bridging the ports so packets can go either direction? Or should I just stick with the VyOS VM I have?

The processor in the router box is an E3-1220L v2 - 2 cores / 4 threads, I think it's 2.3GHz, 17W TDP - the whole thing idles at about 30W from the wall. So it's not likely to get much above 10Gbps without something like VPP, DPDK, etc.
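The back-of-the-envelope packet-rate math behind that concern (standard Ethernet on-wire overhead of 20 bytes per frame for preamble + interframe gap; the function is just my illustration):

```python
# Packets/sec needed to saturate a link at a given Ethernet frame size.
# On-wire overhead per frame: 8-byte preamble + 12-byte interframe gap = 20 bytes.

def pps(link_bps: float, frame_bytes: int) -> float:
    """Frames per second to fill link_bps at the given frame size (incl. FCS)."""
    wire_bytes = frame_bytes + 20
    return link_bps / (wire_bytes * 8)

# 10GbE at full-size 1518-byte frames: ~813k pps - plausible for a kernel router
print(round(pps(10e9, 1518)))
# 10GbE at minimum 64-byte frames: ~14.88M pps - firmly DPDK/VPP territory
print(round(pps(10e9, 64)))
```

Big frames are manageable for an in-kernel forwarding path on a low-power CPU; small-packet line rate is what pushes you toward kernel-bypass stacks.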

If I were to do that, I suppose that network would look something like this:

Code:
FW (bridge) <---> host 1
  ^                 ^
  |                 |
  v                 v
host 3  <-------> host 2
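A quick sanity check on that square (node names here are just placeholders for the diagram): no host needs more than 2 ports on the storage network, and the worst case between any pair is 2 hops.

```python
# Verify the square topology: every node has <= 2 storage ports, and any
# pair of nodes is at most 2 hops apart.
from collections import deque

# Adjacency for the diagram: the FW bridges its two ports between host1 and host3
links = {
    "FW":    ["host1", "host3"],
    "host1": ["FW", "host2"],
    "host2": ["host1", "host3"],
    "host3": ["host2", "FW"],
}

def hops(src: str, dst: str) -> int:
    """Shortest hop count between two nodes via BFS."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for nxt in links[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    raise ValueError("unreachable")

assert all(len(ports) <= 2 for ports in links.values())
print(hops("FW", "host2"))     # 2 - the diagonal pair, via host1 or host3
print(hops("host1", "host3"))  # 2 - the other diagonal
print(hops("FW", "host1"))     # 1 - direct link
```

The catch is the same one Blinky 42 pointed out: the two diagonal pairs always transit a neighbor, so their traffic competes for that neighbor's links.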