Network Architecture Advice


mikebru10

New Member
Aug 30, 2022
I am trying to build out my home network; my goals are high availability and resiliency. I plan to run a Docker Swarm with one Docker host on each of my VM servers and one in the DigitalOcean cloud, and to point all my traffic through Nginx Proxy Manager running on the swarm. My issue is that I have a ton of high-end hardware but have questions about how best to configure everything. I started with 3 ESXi hosts and vCenter, but licensing became very expensive, so I am slowly decommissioning them and migrating to Proxmox VE hosts.
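For context, here is a rough sketch of the stack file I have in mind for Nginx Proxy Manager on the swarm (the published ports and volume paths are just the NPM defaults, and pinning to a manager node is only a placeholder until I sort out shared storage for its config):

```yaml
# npm-stack.yml -- deploy with: docker stack deploy -c npm-stack.yml proxy
version: "3.8"

services:
  npm:
    image: jc21/nginx-proxy-manager:latest
    ports:
      - "80:80"     # HTTP entrypoint
      - "443:443"   # HTTPS entrypoint
      - "81:81"     # admin UI
    volumes:
      - npm_data:/data                # NPM config and database
      - npm_certs:/etc/letsencrypt    # issued certificates
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.role == manager      # local volumes don't follow the task across nodes
      restart_policy:
        condition: on-failure

volumes:
  npm_data:
  npm_certs:
```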

My question is how to configure the network on all the devices below to get the most out of the hardware, specifically how to configure the NICs on the Proxmox hosts to ensure the most efficient, resilient, and robust network, both for the VM hosts themselves and for the servers they will eventually host. I tried to configure a VLAN and dedicate a 10Gb link on each ESXi host to vCenter replication and another to vSAN traffic, but then I didn't have a 10Gb link left for VM traffic. The NICs in the Proxmox hosts are my primary concern, but I also want to know how to configure the NICs in the TrueNAS so I can use all of its bandwidth rather than running all my traffic over a single 1Gb link. The NAS and SAN should have NFS and SMB available to PCs on the LAN as well as to the servers on the Proxmox machines.

(Network diagram attached)
The VM servers listed in the diagram all have the same connections as the one server illustrated; to simplify the drawing, I only drew out one server.

My Hardware
3x Supermicro SYS-E200-8D (currently 1x ESXi host and 2x Proxmox hosts)
each with 1x 1TB NVMe (main storage)
128GB ECC DDR4
2x 1Gb NICs - plugged into #2 Cisco 2960
2x 10Gb NICs - plugged into Nexus
1x IPMI (management interface)

SAN - TrueNAS SCALE 22.x
1x Supermicro 36-drive enclosure
8x 4TB drives - RAID-Z2
8x 3TB drives - RAID-Z2
64GB ECC DDR4
2x 1Gb NICs - plugged into #2 Cisco 2960
4x 10Gb NICs - 2x RJ45 & 2x Twinax - plugged into Nexus

NAS
1x QNAP TS-853A
8x 8TB drives - RAID 6
4x 1Gb NICs - plugged into #2 Cisco 2960

I use my SAN and NAS to host NFS, SMB, and iSCSI.

My Network
1x pfSense - 1x 1Gb WAN port & 4x 1Gb LAN ports
2x Cisco 3750X 48-port 1Gb PoE switches (core switches)
1x Cisco Nexus 5548 48-port 10Gb switch (primarily SAN-to-VM-host switching)
4x Cisco 2960 24-port 1Gb switches
3x Aruba 335 Instant APs

I have 2x multi-mode 10Gb fiber runs from my 2x 3750Xs to the Nexus 5548 as my trunk; I can't use FCoE because I don't have a license for it on my Nexus.
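For anyone who wants to see it, the uplinks on the 3750X side are just plain dot1q trunks, roughly like this (the interface number and allowed VLAN IDs here are placeholders rather than my exact config):

```
! 3750X uplink toward the Nexus 5548 (IOS)
interface TenGigabitEthernet1/1/1
 description Trunk to Nexus 5548
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport trunk allowed vlan 10,20,30,40
```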

I use pfSense as my firewall and network router, and I have VLANs set up for Guest, LAN, Gaming, Smart Home, etc.

Any assistance anyone could give would be great.
 

zunder1990

Active Member
Nov 15, 2012
Boy, I hope your power is cheap. We use a lot of those Cisco 3750X PoE switches at work, and they idle at around 300 watts each.
 

Sean Ho

seanho.com
Nov 19, 2019
Vancouver, BC
It's not a bad setup for a homelab, but if your goal is resiliency, consider for each component what happens when it fails.

When a drive fails, ZFS keeps you going until you get a replacement. When one of the VM hosts dies, its workload gets rescheduled onto another node by Swarm. But what happens when one of the NASes (TrueNAS or QNAP) fails, whether due to a backplane, HBA, motherboard, or PSU? What about when a switch fails? Or power, or the network uplink? You don't have to have redundancy for everything; it's a spectrum.

To the question of replication vs. vSAN traffic on the 10GbE links: one option is to bond the two 10GbE links with LACP for failover, then run VLANs on top of the bond to segregate replication from vSAN traffic. Another is to dedicate one link to each task, sacrificing resilience in case of link failure (mostly just a cable failing or getting unplugged, since there is no stacked 10GbE switch here).
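As a rough sketch of the bonded approach on a Proxmox host (the interface names, VLAN IDs, and addresses below are placeholders; the two Nexus ports would need a matching LACP port-channel configured as a trunk):

```
# /etc/network/interfaces (Proxmox) -- sketch only, adapt names, IDs, and addresses
auto enp1s0f0
iface enp1s0f0 inet manual

auto enp1s0f1
iface enp1s0f1 inet manual

auto bond0
iface bond0 inet manual
    bond-slaves enp1s0f0 enp1s0f1
    bond-mode 802.3ad               # LACP
    bond-miimon 100
    bond-xmit-hash-policy layer3+4

auto vmbr0
iface vmbr0 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094              # VM traffic gets tagged per-VM on this bridge

# host addresses for storage/replication ride on VLAN interfaces over the bridge
auto vmbr0.20
iface vmbr0.20 inet static
    address 10.0.20.11/24           # replication VLAN (example ID 20)

auto vmbr0.30
iface vmbr0.30 inet static
    address 10.0.30.11/24           # storage VLAN (example ID 30)
```

The same idea applies to the TrueNAS box: an LACP lagg across its 10GbE ports with VLAN interfaces on top, so its traffic isn't funneled over a single link either.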