Hey folks, I'm looking to build out a pfsense box with one of these cwwk units. Primary goal is to have a drop-in replacement for my off-the-shelf router. Bonus if I can run other infra apps (pihole, home assistant, etc.) alongside it to create an "all-in-one" infra box. Trying to figure out whether to get a 2 NIC, 4 NIC, or larger unit. Here's my current understanding of how each should be used:
2 NIC
Best used as a bare-metal firewall: eth0 as LAN, eth1 as WAN. Virtualization can't really happen efficiently, because if both NICs were passed through to pfsense, there'd be no NIC left for proxmox. The only way I can see virtualization working is to skip PCIe passthrough on the LAN NIC and instead set up a virtual bridge, with virtual interfaces attached for each of the VMs. But that would mean all LAN traffic destined for a VM, the WAN, or another VLAN would have to be CPU-processed by that virtual bridge. Then again, all of that traffic would have to be CPU-processed anyway, so is there really a performance hit? The only cost I can think of is that LAN<->WAN and cross-VLAN traffic gets CPU-processed twice: once at the virtual bridge in proxmox, and once in pfsense, and that double processing can't be done at line rate. Am I right in assuming that virtualizing a 2 NIC device would be a bad idea?
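For concreteness, here's roughly what I picture the 2-NIC virtualized setup looking like in proxmox's /etc/network/interfaces (interface names and addresses are my assumptions; real hardware would show names like enp1s0):

```
# Sketch of the 2-NIC case, if I understand proxmox networking right.
# eth1 (WAN) would be PCIe-passed-through to the pfsense VM, so it
# never appears here. eth0 (LAN) is enslaved to a bridge that both
# the pfsense LAN vif and every other VM's vif attach to.

auto lo
iface lo inet loopback

iface eth0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.2/24   # proxmox mgmt IP on the LAN (assumed)
        gateway 192.168.1.1      # pfsense's LAN address (assumed)
        bridge-ports eth0
        bridge-stp off
        bridge-fd 0
```

So every frame in or out of the LAN port transits vmbr0 in software, which is the double-processing I'm worried about.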
4 NIC
Best for a virtualized firewall: eth0 as LAN and eth1 as WAN via PCIe passthrough, eth2 attached to a virtual bridge for all the other proxmox VMs, and eth3 with a static IP for proxmox management. This is what I'm leaning towards, but I wanted to make sure my understanding is right.
There would be a physical cable going from eth0 to a switch, and then from the switch back to eth2 to connect the VMs to the LAN. By doing this, pfsense still gets full hardware control over the NICs it manages, and the only traffic pfsense processes is WAN<->LAN traffic. The only traffic in/out of eth2 should be broadcast traffic or traffic to/from the VMs. From what I can tell, the eth2 traffic is still CPU-processed, but that has to happen to route to the correct VM anyway. I'm not sure whether the eth3 management interface is strictly necessary or just a "good idea", since ideally I'd like to be able to manage proxmox via the eth2 interface.
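If I've got that right, the proxmox side would look roughly like this (interface names and addresses are my assumptions); eth0 and eth1 don't appear at all because they're passed through to the pfsense VM:

```
# Sketch of the 4-NIC case (names/IPs assumed).
# eth0 (LAN) and eth1 (WAN) are PCIe-passed-through to pfsense
# (something like `qm set <vmid> -hostpci0 01:00.0`), so proxmox
# never sees them.

auto lo
iface lo inet loopback

iface eth2 inet manual

# VM bridge: cabled from the switch back into eth2
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eth2
        bridge-stp off
        bridge-fd 0

# dedicated management interface
auto eth3
iface eth3 inet static
        address 10.0.99.2/24     # assumed out-of-band mgmt subnet
```

If eth3 turns out to be unnecessary, I'd just give vmbr0 the static address instead and manage proxmox over eth2.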
Ideally, I'd be able to use eth3 (and even eth2) as additional LAN ports, as I've got LAN devices that would sit right next to this box. However, I think that means not doing PCIe passthrough on eth0 and instead creating a proxmox virtual bridge as described in the 2 NIC setup, which causes the double CPU processing mentioned above. Threads I've read state that it's better to use an external switch that has dedicated hardware for switching traffic. So I'd also need a 5 port managed 2.5GbE switch since I want to use VLANs, but those seem a lot less common and a lot more pricey.
6+ NIC
This is where I'm at a loss. As mentioned in the 4 NIC case, ideally I'd be able to use additional NICs as additional LAN ports. With a 6+ NIC unit, I could pass eth4 and eth5 through to pfsense and create a virtual bridge in pfsense rather than in proxmox. This avoids double-processing all LAN traffic, but all traffic on eth4 and eth5 would still need to be CPU-processed. So I'm back to getting an external switch to increase the number of LAN ports, which would leave eth4 and eth5 unused.
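As I understand it, bridging inside pfsense is done in the GUI (Interfaces > Assignments > Bridges), and under the hood it amounts to FreeBSD commands along these lines (device names are my assumptions; pfsense would typically see the NICs as igc2/igc3 rather than eth4/eth5):

```
# Rough FreeBSD equivalent of a pfsense-side bridge over two
# passed-through LAN NICs (device names assumed; in practice you'd
# do this from the pfsense GUI, not the shell)
ifconfig bridge0 create
ifconfig bridge0 addm igc2 addm igc3 up
ifconfig igc2 up
ifconfig igc3 up
```

Either way, every frame crossing the bridge is still handled by the CPU, which is why the threads I've read push an external hardware switch instead.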
What do people use these devices with 6+ NICs for? Separate physical subnets seem like one reason, but I can't imagine that's a common use-case given the ease of VLANs vs running additional cabling. Link aggregation seems like another, but at 2.5GbE it seems hard to saturate even a single link. Am I missing something?