Hardware / Network Layout Recommendations, Emphasis on Redundancy


tinfever

New Member
Nov 3, 2018
Hi all,

I host a couple of servers as a side business and it has become apparent I need to stop kicking the can down the road and really get the networking done right. I'm looking for recommendations on hardware for OPNsense, switch recommendations, and general network design for redundancy.

tl;dr: I've attached a diagram of my current failing configuration and what I'm considering for an upgraded config. My goal is to have as much redundancy as possible without spending too much money. Budget might be $2000 for all routers and switches, and even that feels high because I'm a terrible cheapskate. Does anyone have any recommendations for cost-effective and reliable hardware for the OPNsense routers?

OPNsense Router Wish list:
  • ECC RAM
  • Redundant PSUs
  • Future expandability for up to 10G uplink (I'm willing to accept that OPNsense itself may be a bottleneck at 5Gbps+ routing)
  • Low power consumption.

I'd also appreciate recommendations on the switches. Wish list:
  • Ability to work with existing 1G NICs all the way up to future 40G+ NICs
  • Redundant PSUs
  • MC-LAG capable
  • Ability to update firmware on one switch without taking down the network (this might rule out stacking)
I'm also curious to hear your opinions on network design for redundancy. MC-LAG sounds perfect but it seems like only expensive switches support it. I've seen so many acronyms thrown around that I'm feeling a bit in over my head. I found a Reddit thread where someone said the only way that makes sense nowadays is to run ESI-LAG with Anycast-GW and EVPN-VXLAN, and that's about when I started questioning the meaning of life.

#end tl;dr

[Attachment: Network Map.PNG]

Expanding on the above, currently I just have all of the servers connected to a Cisco C3560E switch, with the IPMI connections on a separate VLAN. The switch then connects to the OPNsense firewall/router, which then connects to the ISP. Lately the OPNsense router has been acting up and freezing intermittently, and a few weeks ago the Cisco switch failed, leaving me in a bind. Needless to say, there is room for improvement.

I'm willing to consider almost any solution at this point. I've penciled in using OPNsense in a redundant configuration for the router/firewall. I'm open to alternatives but I'm not sure there is anything better in my price range. I'm also fairly competent with OPNsense/pfSense, whereas I'd have to relearn anything else.

It sounds like the more "enterprise" way to design the network would be to have a separate router and firewall, and then move all of the other services like DHCP, DNS, and WireGuard VPN to some other system. I'm not sure I'd see much benefit to this though, and it'd be more expensive. Here's a theoretical question: when is an all-in-one solution like OPNsense not enough? When you need to push over 10Gbps through it? When you have more than X number of devices?

Router hardware options I've come up with:
  • ASRock X470D4U motherboard in an SC815TQ-R700UB chassis with redundant PSUs
    • Ryzen 3 PRO 2200G CPU (or similar), DDR4 ECC UDIMM
    • Would have to be a custom build, probably wouldn't be very clean, reliability of motherboard is questionable?
    • UIO chassis may be a problem
  • SuperMicro SYS-5019C-FL - ~$500 barebones
    • i3-9100F (or similar), DDR4 ECC UDIMM
    • Cheap but doesn't have redundant PSUs
  • SuperMicro SYS-5019S-MR - ~$1000 barebones
    • Xeon E3-1270 V6 (or similar), DDR4 ECC UDIMM
    • Probably too expensive, somewhat dated platform?
  • SuperMicro AS-1013S-MTR
    • EPYC Naples/Rome (Whatever model I can get cheapest), DDR4 ECC RDIMM
    • Expensive, probably has high idle power consumption
  • SuperMicro X9SRH-7F motherboard with SC815TQ-R700UB chassis (or similar) - $250 barebones
    • Xeon E5-2650 v2, DDR3 ECC RDIMM
    • Custom build that might not be straightforward.
    • I already have the CPUs and RAM so it would be very cheap.
    • Probably has high idle power consumption.

Currently the ISP handoff is 1000BASE-T but could be upgraded to 2.5GBASE-T or 10GBASE-T in the near future, so I'd like to be able to support all of those. This wouldn't be too hard by just using a 10GBASE-T NIC on the OPNsense server. Since there is only a single connection from the ISP, I think I'm also going to need a switch to split the connection to both routers. I haven't put much thought into that, other than it'd be nice if it were unmanaged to reduce attack surface, NBASE-T/10GBASE-T to support the needed connections, and had redundant PSUs because I'm paranoid. I'm not sure such a thing exists (or is affordable) though. This could be anything from the cheapest consumer 10GBASE-T switch I can find (with no redundant PSU), to a 2.5GBASE-T switch (which would be enough, and I could even hack in redundant PSUs), to an old Cisco switch with a few 10G ports that does have redundant PSUs, but with hardware that old, would it be less reliable than the consumer switch?

As for the switches going to the servers, this is where it gets messy, particularly with the redundancy requirement.
Options I've thought of:
  • 2x old Cisco 1GbE switches - $100-150 each
    • Then I could just upgrade to better switches when I actually need them.
    • Doesn't support MC-LAG so I'd have to use Linux bonding with load balancing
    • Linux bonding with load balancing shouldn't require any switch support, though it might not fail over as fast as MC-LAG? (rough sketch of what I mean below, after this list)
  • 2x Brocade ICX6610 - $250-300
    • Good mix of 1GbE for existing servers and IPMI connections with 16x 10GbE for future expansion
    • Doesn't support MC-LAG so I'd have to use Linux bonding or something more exotic? (I wasn't planning on stacking the switches, so that they could be updated without taking everything down)
  • 2x Mellanox SX6036 - $250-300
    • Lots of connectivity for 40GbE
    • Could be some hidden drawback to using Ethernet-over-InfiniBand for everything?
    • Would cost a fortune to try to adapt any 1GbE connections
    • Not sure if these support MC-LAG
  • 2x Brocade ICX6650 - $550-600
    • Does support MC-LAG
    • Tons of connectivity
    • No cost effective way to connect 1GbE devices
It sounds like MC-LAG would be ideal, but it isn't available on less expensive switches. That leaves a whole other world of options that I only partially understand. Maybe I don't even need a layer 2 network and should just use layer 3 switches, and then the servers can use some routing protocol to route around a failed switch?
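
For reference, here's the kind of Linux bonding I have in mind. This is just a rough sketch using active-backup mode (the one mode I'm sure needs zero switch-side configuration); the interface names and address are placeholders, with each NIC cabled to a different switch:

    # create an active-backup bond; no LACP/MC-LAG needed on the switches
    ip link add bond0 type bond mode active-backup miimon 100
    ip link set eth0 down && ip link set eth0 master bond0   # uplink to switch 1
    ip link set eth1 down && ip link set eth1 master bond0   # uplink to switch 2
    ip link set bond0 up
    ip addr add 192.0.2.10/24 dev bond0

My understanding is that balance-alb would also load balance without any switch support, while LACP (802.3ad) spanning both switches is exactly the part that needs MC-LAG or stacking.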

To wrap all this up, I'm currently thinking it might make the most sense to go with OPNsense on the SuperMicro X9SRH-7F hardware option and accept the higher power usage, then go with the ICX6610s for the switches, since they aren't that much more expensive than whatever old Cisco switch I'd otherwise buy.

I'm really curious to hear any feedback! Thank you! (Especially if you read this entire thing!)
 

koifish59

Member
Sep 30, 2020
You should be able to achieve this for a lot less than $2k if you're willing to buy used enterprise gear. OPNsense isn't too demanding on hardware. But just make sure your ISP provides at least 3 IP addresses for OPNsense high availability, since you'll be using CARP.
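
To make that concrete, the WAN side of a CARP pair ends up looking roughly like this (addresses are just placeholders for whatever block your ISP assigns):

    203.0.113.2  - firewall A, real WAN address
    203.0.113.3  - firewall B, real WAN address
    203.0.113.4  - shared CARP VIP; outbound NAT and ISP-facing traffic use this, and it floats to whichever box is master

If the ISP only hands out a single IP, CARP on the WAN side gets awkward.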

A pair of stacked ICX6610s is good for your current setup, with some expansion for a faster network in the future, and they're cheap enough to ditch in case your network outgrows them later.

That unmanaged switch used to split out the WAN connection is the single point of failure. Might as well plug the WAN directly into a switch and use a VLAN instead, and have an untagged port ready on the other switch for the WAN in case the first switch ever dies.
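
Roughly like this on a stacked pair, from memory (FastIron syntax; the VLAN ID and port numbers are made up):

    vlan 999 name WAN by port
     untagged ethe 1/1/1 ethe 2/1/1
     tagged ethe 1/1/47 ethe 2/1/47

where 1/1/1 and 2/1/1 are the possible ISP-facing ports (one live, one spare on the other unit) and 1/1/47 / 2/1/47 are the trunks carrying the WAN VLAN down to the two OPNsense boxes.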
 

Vesalius

Active Member
Nov 25, 2019
It would require some learning, but VyOS would be a competent alternative to OPNsense that would ultimately allow for more throughput if you need it. It uses VRRP for HA.

Doing the internal routing at layer 3 on your Managed Switch 1/2 would also take the internal network load off the edge firewall and allow for line-rate speed there. Stacking a couple of ICX6610s would allow you to LACP redundantly across both.
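
Just to illustrate the layer 3 part on the Brocades, it's basically one VE per VLAN, something like this (rough sketch; needs the routing image, and the VLAN ID and addresses are made up):

    vlan 10 name servers by port
     tagged ethe 1/1/1 to 1/1/24
     router-interface ve 10
    !
    interface ve 10
     ip address 10.0.10.1 255.255.255.0

Servers then use the VE address as their default gateway, the stack routes between VLANs in hardware, and only internet-bound traffic ever touches the firewall.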
 

Stephan

Well-Known Member
Apr 21, 2017
Germany
Like koifish59 suggests, the WAN needs to go into the redundant switch stack on a separate VLAN, ready to be plugged into a different switch in case a switch, switch port, or transceiver dies. No lightning protection needed there because it's fiber. Otherwise, a reliable "victim" switch as a cold standby would be appropriate. Mellanox can do MLAG; otherwise you would need to go above that and use Aristas for (hitless) failover.

Magic words to look out for here on STH and elsewhere for Mellanox are rot13+5(z7y5a%5k4) and for Arista rot13+5(freivpr hafhccbegrq-genafprvire RZP 122541p2).

The Mellanox should also link at 56 (FDR cable)/40/10/1 Gbps on each port, 10/1 with the help of an adapter to go from QSFP+ to SFP+. Edit: Items of interest here are the HP 655874-B21 (original Mellanox, just with an HP part number), and with these I use the Cisco GLC-T V03 (version 03) 30-1410-03, which will give you 1000BASE-T.

You could use a Proxmox cluster of 3, maybe with shared-nothing Ceph storage for the firewall VM, so the only gear necessary is two switches and three servers.

In addition, I like to keep any old box around as a backup in case the whole setup goes down, i.e. when it hits the fan, plug the fiber into it: WiFi AP already in it, routing in it, DHCP, etc. So if everything goes down, I can plug that box into the fiber and have internet to look for troubleshooting tips, order parts, or put up a website saying "welp... we're down. brb."
 

tinfever

New Member
Nov 3, 2018
Thanks for all the feedback!

Since my ISP handoff is literally a single RJ-45 jack, I don't think there is any way to avoid a single point of failure, right?
Options:
  1. Single WAN switch (I'm leaning towards the MikroTik CRS305-1G-4S+IN because it can take dual PSUs)
    1. Point of failure: WAN switch
    2. Resolution: Swap WAN switch with cold spare
  2. Separate WAN switch stack
    1. Point of failure: The switch the WAN cable is connected to
    2. Resolution: move WAN cable to other switch in stack
  3. Isolated VLAN on ICX6610 switch stack
    1. Point of failure: The switch the WAN cable is connected to
    2. Resolution: move WAN cable to other switch in stack
In this case, I think it just comes down to which hardware is more likely to fail and the potential downside of adding an additional failure mode. If a new MikroTik CRS305 is less likely to fail than a used Brocade ICX6610, then it would make sense to add the MikroTik switch so that a failure of an ICX6610 would be seamless. If the CRS305 is more likely to fail, then I should leave it out, since a failure of the CRS305 would require manual intervention anyway. Hmm... perhaps option three wouldn't be so bad. I could keep an extra ICX6610 as a cold spare and then be able to handle any type of switch failure. Only one type of system/software to learn too.

Going the other direction, there is no real way to have redundancy on the server IPMI connections on the LAN side, is there?

I am liking the idea of the ICX6610 switches. The only downside is that they don't support MLAG. Would the only benefits of MLAG over LACP-to-stacked-switches be the ability to upgrade the firmware without downtime and slightly better stability? Upgrading firmware isn't a big concern anyway, since the ICX6610 hasn't had an update in two years.

I won't lie, the Mellanox SX6036 is awfully tempting for "future proofing". Thanks Stephan for the tips on the adapter part numbers to look for. It looks like if I really tried to haggle a deal I could adapt QSFP+ to 1000BASE-T for about $20 per port, but if I need 15 ports, that adds up fast.

Something that's also occurred to me is that if I need to support a 2.5GBASE-T handoff from the ISP, in order to avoid speed mismatch issues I may have to either:
  1. Use the specific/expensive ($200) Aquantia AQS-107 SFP+ to NBASE-T transceiver to go on the WAN connections to the switch stack (Option 3 above). I'll also have to buy two for redundancy/spare.
  2. Use a standalone WAN switch (option 1) that is known for handling these speed mismatches well
It also sounds like I may have to enable flow control on the ICX6610 either way? Since the WAN SFP+ transceiver has to tell the SFP+ side it is running at 10G but tell the 2.5GBASE-T side it is running at 2.5G, I don't see how else the ICX6610s would know not to keep sending 10G of data. Unless normally something like Explicit Congestion Notification handles this? I'm out of my wheelhouse on this one.

It would require some learning, but VyOS would be a competent alternative to OPNsense that would ultimately allow for more throughput if you need it. It uses VRRP for HA.

Doing the internal routing at layer 3 on your Managed Switch 1/2 would also take the internal network load off the edge firewall and allow for line-rate speed there. Stacking a couple of ICX6610s would allow you to LACP redundantly across both.
VyOS is interesting. I may have to set that up at home and try it out. Currently I have effectively no internal network routing, but if I add a Ceph cluster in the future, I could see where you'd want to handle that routing on the switch. It sounds like the ICX6610s could take care of that.

Magic words to look out for here on STH and elsewhere for Mellanox are rot13+5(z7y5a%5k4) and for Arista rot13+5(freivpr hafhccbegrq-genafprvire RZP 122541p2).
I'm afraid I don't understand these magic words.

You could use a Proxmox cluster of 3, maybe with shared-nothing Ceph storage for the firewall VM, so the only gear necessary is two switches and three servers.
What is the advantage of doing this? Or would it be using the existing servers to host the firewall VM? Even if I had three separate servers in a Proxmox cluster for the firewall, how would the physical wiring be connected? Let's say each server has NICs A and B. Would NIC A on each server connect to the untagged WAN VLAN on the switch stack, and then the NIC Bs would all be trunks back to the "LAN" side on the switch stack? Then two firewall VMs would run in HA mode?
 

Stephan

Well-Known Member
Apr 21, 2017
Germany
Magic words: go to ROT13 @ http://www.rot13.de/ and also select ROT5. Enter everything between the brackets... the result is fodder for a search on STH and elsewhere. I don't want to copy/paste this in plaintext.

Yes, Proxmox HA also for the firewall, using a special "WAN" VLAN. The exit for that is your RJ45 uplink out of a switch port. Every host has multiple VLANs on its link(s), one of which is for WAN. Other VMs do not see it because it is not assigned to them. You would only need one firewall VM, because if the host dies, the cluster manager will restart the VM elsewhere and connectivity would be restored. That's why I suggested Ceph, to replicate at least the firewall VM on all three nodes. Ceph is an IOPS killer but a firewall should not write that much, so even cheap SATA disks with power-loss protection will be OK.
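
Host network side is then just a bond into a VLAN-aware bridge, roughly like this (this is the ifupdown2-style /etc/network/interfaces Proxmox uses; interface names are examples):

    auto bond0
    iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-mode active-backup
        bond-miimon 100

    auto vmbr0
    iface vmbr0 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

Firewall VM gets two virtio NICs on vmbr0, one tagged with the WAN VLAN and one with the LAN VLAN. Normal VMs only get LAN tags, so they never see WAN.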
 

koifish59

Member
Sep 30, 2020
  1. Single WAN switch (I'm leaning towards the MikroTik CRS305-1G-4S+IN because it can take dual PSUs)
    1. Point of failure: WAN switch
    2. Resolution: Swap WAN switch with cold spare
I would not depend on this MikroTik switch if it's running in a crucial environment for your business. It's popular because it's the cheapest 10GbE switch available, but it's known to have issues from the get-go. I would trust a used ICX6610 to last longer than that new MikroTik.

Option 3 would be the best. Use two ICX6610s stacked, and run different VLANs to separate your WAN and LAN on the same switch stack.


What is the advantage of doing this? Or would it be using the existing servers to host the firewall VM? Even if I had three separate servers in a Proxmox cluster for the firewall, how would the physical wiring be connected? Let's say each server has NICs A and B. Would NIC A on each server connect to the untagged WAN VLAN on the switch stack, and then the NIC Bs would all be trunks back to the "LAN" side on the switch stack? Then two firewall VMs would run in HA mode?
If you virtualize your firewall, then NICs A and B can either be active/standby or active/active for redundancy, but your WAN and LAN can run through the same cable, again separated by different VLANs, rather than the WAN and LAN having their own dedicated NICs. Basically the same as what Stephan said.

But alternatively, instead of depending on the Proxmox cluster to do the failover, you can set up HA for the firewall VM using CARP. I believe this is a more seamless failover compared to cluster-level failover, where it would have to boot up the firewall on the other host.
 

tinfever

New Member
Nov 3, 2018
Magic words: go to ROT13 @ http://www.rot13.de/ and also select ROT5. Enter everything between the brackets... the result is fodder for a search on STH and elsewhere. I don't want to copy/paste this in plaintext.
Thank you for spelling it out for me. I hadn't heard of ROT13 before. Now I know! I understood the Arista text, but I'm afraid I haven't been able to decipher the Mellanox text.

Option 3 would be the best. Use two ICX6610s stacked, and run different VLANs to separate your WAN and LAN on the same switch stack.
That makes sense. I'm probably going to run with this.

If you virtualize your firewall, then NICs A and B can either be active/standby or active/active for redundancy, but your WAN and LAN can run through the same cable, again separated by different VLANs, rather than the WAN and LAN having their own dedicated NICs. Basically the same as what Stephan said.
Ah, I hadn't thought of just trunking both VLANs to the servers on the same connection.

Just so I'm clear, the point of virtualizing the firewall would be to use my existing servers to run the firewall as well as the guest VMs, instead of having separate firewall servers? Like this?

[Attachment: Network Diagram.png]

Unfortunately I can't run the firewall VMs from the main servers because they are sometimes delivered as bare metal to the client.

I could run three additional separate servers to host the firewall VM(s) but then I feel like the benefit might not be worth the increase in complexity. I suppose it could be useful if I had additional services I wanted to run like logging/monitoring though.