pfSense / 10GbE Networking Help


aidenpryde

New Member
Apr 30, 2020
27
1
3
Hello,

I've only ever had a single wireless router in my environment, but as I get into servers that serve larger amounts of data and add insecure Internet of Things devices to my network, I think it's time to move to VLAN-capable switches and a pfSense firewall.

I don't think I currently need assistance with the pfSense hardware itself; I'll probably combine a Dell OptiPlex 5040 (i3), an Intel quad-port 1GbE NIC, and some sort of dual-port 10GbE card for trunking up to the firewall.

However, what is really stumping me is what goes downstream of the pfSense firewall, namely VLAN-capable switches, and I would really appreciate any advice.

Here's what I'm trying to do in a small home office (so noise is a concern):

4 VLANs:
VLAN 10 = 1Gbps devices (Apple TV, etc)
VLAN 20 = 10Gbps (Server and 2 gaming computers)
VLAN 30 = 1Gbps IoT devices (Smart TV, etc)
VLAN 40 = IPMI / other management

I'll need a bare minimum of FOUR 10Gbps connections (1 server, 2 gaming computers, and 1 trunk connection to the firewall).

I've heard conflicting information about what hardware I should purchase to accomplish this.

OPTION 1: Some say I should buy a managed layer 3 switch that's capable of full 10Gbps routing, but these are expensive, noisy, or don't have enough 10GbE ports, so there's no unicorn switch for me to choose from. With this solution, all of the VLAN management would be done on the switch itself (if you have a suggestion here, that would work too).

OPTION 2: Another person suggested that I use pfSense to manage the VLANs. This option seems to be the cheapest, as I think I could accomplish it with inexpensive switches: this Mikrotik 4-port 10GbE switch along with even a simple smart switch like this Mikrotik 5-port 1GbE smart switch. Is this feasible?
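For what it's worth, here's roughly how I picture the pfSense side of Option 2 (just a sketch in FreeBSD ifconfig terms to show the idea; in practice it would be done in the pfSense GUI, and the ix0 trunk interface name and subnets are placeholders I made up):

Code:
# ix0 = the 10GbE port trunked to the switch; one tagged sub-interface per VLAN
ifconfig vlan10 create vlan 10 vlandev ix0 inet 192.168.10.1/24   # 1Gbps devices (Apple TV, etc)
ifconfig vlan20 create vlan 20 vlandev ix0 inet 192.168.20.1/24   # 10Gbps server + gaming PCs
ifconfig vlan30 create vlan 30 vlandev ix0 inet 192.168.30.1/24   # IoT devices
ifconfig vlan40 create vlan 40 vlandev ix0 inet 192.168.40.1/24   # IPMI / management
The switch ports would then just be tagged/untagged members of those VLANs.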

It's been kind of hard to find someone who has done something like this, so any help would be appreciated!

Thank you!

EDIT: I should have mentioned this for clarity: the setup will be in a home office for the time being and will need to be quiet. Bonus points if the managed switch you have in mind has a GUI.
 
Last edited:

Jason Antes

Active Member
Feb 28, 2020
226
76
28
Twin Cities
I have an OPNsense firewall that I'm upgrading to dual-port 10Gb, plus a single 1Gb port for IoT stuff. I have an ICX6610 that I am using for my needs: 8x 10Gb ports on the front, another 8x via breakout cables from 2x 40Gb ports, and 48x 1Gb ports (I honestly think I could get away with a 24-port model, but this is what I got). It's definitely louder than my older 1Gb-only switches, but I wouldn't hear it outside a well-ventilated network closet. I also have a VDX 6740-48F; that one is quieter, but it doesn't do PoE+ like my other one, so I think I'll wind up selling it. I only use the layer 2 functionality of either switch, but I do separate out ports by VLAN on the switch rather than doing it via the firewall; I've just gone the route of having a physical firewall interface per network instead.
 

nickf1227

Active Member
Sep 23, 2015
197
128
43
33
So, in your proposed home environment you are using your firewall as the router for your internal network. The way you have it set up, if anything on VLAN 20 is communicating with something on VLAN 10 or VLAN 30, the traffic has to "hair-pin" through the firewall to get there. This limits that traffic to the 1 gig link between the firewall and your switch, and it consumes most of the bandwidth for all the other devices trying to get out to the internet on that link.
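To picture it (a simplified sketch assuming a single 1G trunk between the switch and the firewall):

Code:
VLAN 20 host --> switch --(trunk, tagged VLAN 20)--> pfSense (routes/filters in software)
VLAN 10 host <-- switch <--(trunk, tagged VLAN 10)-- pfSense
A single large file copy between VLANs fills both directions of that one trunk link, so everything else riding it - including internet traffic - gets squeezed.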

pfSense also cannot really do 10 gigabit routing without fairly powerful hardware. Netgate, the developers of pfSense, more or less abandoned pfSense for that purpose and developed an entirely new and different product called TNSR. You can certainly TRY it, but I don't think you would have much success.

My opinion? Take a look at the Brocade thread here and pick up a used ICX switch. Alternatively, take a look at buying a managed HPE layer 3 switch such as this one:

There is a reason why large organizations DO NOT use their firewalls to do high-speed routing. Switches do it in dedicated hardware, not in software on the CPU. Dedicated, purpose-built hardware will ALWAYS be faster.
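To give a feel for it, routed VLAN interfaces on an ICX look roughly like this in FastIron config (a sketch only - the VLAN IDs, ports, and addresses are placeholders, and exact syntax varies a bit by software release):

Code:
vlan 10 name TRUSTED by port
 tagged ethernet 1/1/1
 untagged ethernet 1/1/2 to 1/1/8
 router-interface ve 10
!
vlan 20 name SERVERS by port
 tagged ethernet 1/1/1
 untagged ethernet 1/2/1 to 1/2/4
 router-interface ve 20
!
interface ve 10
 ip address 192.168.10.1 255.255.255.0
!
interface ve 20
 ip address 192.168.20.1 255.255.255.0
Traffic between ve 10 and ve 20 then gets routed in the switch ASIC at wire speed, and the firewall only sees what actually needs to leave for the internet.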
 
  • Like
Reactions: BoredSysadmin

tsteine

Active Member
May 15, 2019
177
84
28
So, in your proposed home environment you are using your firewall as the router for your internal network. The way you have it set up, if anything on VLAN 20 is communicating with something on VLAN 10 or VLAN 30, the traffic has to "hair-pin" through the firewall to get there. This limits that traffic to the 1 gig link between the firewall and your switch, and it consumes most of the bandwidth for all the other devices trying to get out to the internet on that link.

pfSense also cannot really do 10 gigabit routing without fairly powerful hardware. Netgate, the developers of pfSense, more or less abandoned pfSense for that purpose and developed an entirely new and different product called TNSR. You can certainly TRY it, but I don't think you would have much success.
This is the reason I use TNSR with a 40gbit NIC in my homelab. It's far more comfortable to simply manage all my firewalling on the TNSR box instead of routing on my switches and having to manage ACLs there.
 

ArmedAviator

Member
May 16, 2020
91
56
18
Kansas
For your needs, get an ICX6450-24. It's cheap on eBay, reasonably quiet, has 4x 10Gbit SFP+ ports (with licensing, and a local user here has spares he can contribute), and will do the inter-VLAN routing for you, pinholed with ACLs. ACLs have a learning curve, as I have discovered, but it's really not too hard; I have a large ACL list for my 7-VLAN home network. It is blazingly fast compared to what my pfSense server could do even with dual 10Gbit ports - pfSense was hardcore CPU-bound, and that box is no slouch either. Now pfSense handles all the ancillary network needs (DNS, DHCP, PIA VPN client, VPN server, RADIUS, Squid cache proxy) while the ICX switch (in my case an ICX6610) does the wire-speed routing. It also means my VLANs stay reachable when pfSense is down (DHCP and DNS concerns aside).
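If it helps anyone picture the "pinhole" part, an inbound ACL on a routed VLAN interface looks something like this in FastIron terms (purely illustrative - the names, subnets, and DNS host are made up, and syntax differs slightly between releases):

Code:
ip access-list extended IOT-IN
 permit udp 192.168.30.0 0.0.0.255 host 192.168.40.53 eq 53
 deny ip 192.168.30.0 0.0.0.255 192.168.0.0 0.0.255.255
 permit ip any any
!
interface ve 30
 ip access-group IOT-IN in
The IoT VLAN can reach internal DNS but nothing else internal, while still getting out to the internet.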
 

blinkenlights

Active Member
May 24, 2019
157
67
28
This is the reason I use TNSR with a 40gbit NIC in my homelab. It's far more comfortable to simply manage all my firewalling on the TNSR box instead of routing on my switches and having to manage ACLs there.
TNSR makes sense if you are utilizing 40 GigE and current data center standards (25/50/100). My concern is the lack of a hobbyist/self-supported community tier like Netgate has today with pfSense. I am not saying it should be free, but 10 GigE is accessible for home users and becoming common - it should not cost a home user $$$$ per year just to firewall VLANs on 10 GigE physical interfaces. VyOS sort of does the same thing, but at least you can get the "rolling release" images for free.
 

blinkenlights

Active Member
May 24, 2019
157
67
28
Adding a link to the TNSR review by @tsteine on the Netgate forums: TNSR for my homelab.

It is a good write-up, and explains (to me at least) why you are willing to spend $$$$ per year for a TNSR license. When you are self-hosting development environments and cloud storage in support of your day job as a software engineer, the cost becomes part of your business expenses. Even though this is your "homelab" it is not really a "homelab" - at least, not how I think of one ;)
 

tsteine

Active Member
May 15, 2019
177
84
28
If I'm going to be entirely truthful here, the amount of use it sees for actual work for my employer is almost non-existent. It's damn convenient to have cloud storage available on internet-connected servers when you need some files transferred and installed on that server, though, so there is that.

The use it sees for development is for hobbyist/personal projects, testing stuff, developing/hosting a Discord bot, etc. Once in a blue moon while I'm working, I might think "I want to try out X" and quickly fire it up remotely on my home rack, but that is about the extent of the work use it sees.
 
  • Like
Reactions: blinkenlights

blinkenlights

Active Member
May 24, 2019
157
67
28
If I'm going to be entirely truthful here, the amount of use it sees for actual work for my employer is almost non-existent. It's damn convenient to have cloud storage available on internet-connected servers when you need some files transferred and installed on that server, though, so there is that.

The use it sees for development is for hobbyist/personal projects, testing stuff, developing/hosting a Discord bot, etc. Once in a blue moon while I'm working, I might think "I want to try out X" and quickly fire it up remotely on my home rack, but that is about the extent of the work use it sees.
Alternative explanation #5: for the lulz! :D

No, it makes perfect sense. I am guilty of the same thing with hardware. The gap between "needs" and "wants to play around with" is fairly large, but I am willing to pay for the experience.
 
  • Like
Reactions: tsteine

BoredSysadmin

Not affiliated with Maxell
Mar 2, 2019
1,066
440
83
Alternative explanation #5: for the lulz! :D

No, it makes perfect sense. I am guilty of the same thing with hardware. The gap between "needs" and "wants to play around with" is fairly large, but I am willing to pay for the experience.
Before joining this community I never fathomed how widespread 10gig is in home labs around these parts, and there are plenty of (NEED) use cases for it: home labs and large-capacity LINUX ISO storage systems. In my case, my small 3-node home-lab HCI runs miles better on 10gig.
What is still puzzling to me is why in the world anything faster - 25/40 or even 100gig - would be NEEDED at home, outside of the "WANTED" case.
What is also shocking to me is how cheap 40gig hardware is quickly becoming. My own datacenter MAY go to 25gig later this year, and even that doesn't seem like any sort of show-stopper, even to major HCI vendors' design teams.
 

PigLover

Moderator
Jan 26, 2011
3,215
1,571
113
...What is still puzzling to me is why in the world anything faster - 25/40 or even 100gig - would be NEEDED at home, outside of the "WANTED" case...
Change your definition of "NEED". An economist would define "NEED" as a "WANT" that you have the means to satisfy. e.g., do I "need" a big suburban house in order to stay alive? No. But within the current economy I definitely "need" it...

Also, the best reason to justify my goofy home lab boils down to "because I can".
 

tsteine

Active Member
May 15, 2019
177
84
28
@BoredSysadmin The reason why my rack has 100gig is only "because I can". I'm not sure I could come up with a valid argument even for 10GbE, other than wanting faster file transfers and maybe quick vMotions.

Now, since I'm going back to college (online, while working) for a bachelor's degree in Applied Data Science, I *am* considering getting a Quadro RTX GPU for AI deep learning and GPUDirect for school purposes. If I do that, it would probably be the only legitimate case where I actually require greater than 10GbE in the home rack.
 

blinkenlights

Active Member
May 24, 2019
157
67
28
e.g., do I "need" a big suburban house in order to stay alive? No. But within the current economy I definitely "need" it...
I will go back on topic in a sec - just wanted to say yes, absolutely! I am working, but can hardly imagine the strife between my kids (teenagers), my kids and my wife, my wife and the pets, etc. if we were in a small house. Each person (and animal) has their own private corner when boredom elicits annoyance :rolleyes:

So as far as "why" 10 GigE, my answer is twofold: 1) I cannot pass up a great deal and 2) my file server has to split that 10 Gbps (+/- 5%) at least four ways, sometimes even more, when people are streaming stored content. For me, the upgrade is not so much about 10 Gbps point to point as it is 10 Gbps split into multiple 1 Gbps (+/- 5%) streams on a single physical interface.

I used the two dual-port cards left over from an upgrade in my firewall and run all of them at full speed. Why? Mostly because I found the power utilization difference between 1 and 10 was only 5-8 watts per card, but maybe a little just because I can ;)
 

IamSpartacus

Well-Known Member
Mar 14, 2016
2,520
652
113
Two things...

1. To the OP: I'm with you. I've been looking for a similar solution, but it seems that pfSense alone is not going to cut it for routing at 10Gbps, and I have not found a quiet 10Gb L3 switch myself.

2. @tsteine, your post on the Netgate forums was enlightening, as I find myself having/wanting much of the same as you. However, I have not seen any information anywhere about TNSR pricing for 40Gb routing. I imagine it's in the thousands? Regardless, I'm very interested in how you have your network set up. Do you happen to have a network diagram or the like that you could use to illustrate?
 

TXAG26

Active Member
Aug 2, 2016
412
126
43
I too did some searching and could not find anything about TNSR pricing for self-hosting on my own hardware. I'm specifically looking for the price breakdown between 1 Gbps / 10 Gbps / 25 Gbps.
 

tsteine

Active Member
May 15, 2019
177
84
28
@IamSpartacus
This is not 100% accurate as there are more devices and switches, but it gives a general idea of what it looks like.

As for pricing, my suggestion would be to contact Netgate and ask for a quote where you list your needs/wants and see what they can offer you.

1591209557523.png
 

blinkenlights

Active Member
May 24, 2019
157
67
28
1. To the OP: I'm with you. I've been looking for a similar solution, but it seems that pfSense alone is not going to cut it for routing at 10Gbps, and I have not found a quiet 10Gb L3 switch myself.
@IamSpartacus I have had similar discussions on other threads, but never showed the performance test results from my home firewall. This seems like as good a thread as any. Key hardware: Supermicro X10SRM, Intel E5-2667v4 (8-core), 2x Chelsio T520-BT through a Brocade ICX7450-48 with 12x 10 GigE ports via modules. Memory and storage do not come into play here, but yes, they are of sufficient size and fast. Firewall rules include blocking over 200,000 IPv4 CIDR entries based on reputation lists.

Below is an iperf3 test from the wired VLAN (192.168.0.0/24) to the wireless VLAN (192.168.1.0/24) while downloading a CentOS image from an external mirror to a system on the LAN at 350-380 Mbps sustained -

Code:
[2.5.0-DEVELOPMENT][root@fw0]/root: iperf3 -B 192.168.1.1 -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 192.168.0.2, port 12185
[  5] local 192.168.1.1 port 5201 connected to 192.168.0.2 port 53632
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec  1.01 GBytes  8.67 Gbits/sec
[  5]   1.00-2.00   sec  1.15 GBytes  9.87 Gbits/sec
[  5]   2.00-3.00   sec  1.15 GBytes  9.87 Gbits/sec
[  5]   3.00-4.00   sec  1.15 GBytes  9.87 Gbits/sec
[  5]   4.00-5.00   sec  1.15 GBytes  9.87 Gbits/sec
[  5]   5.00-6.00   sec  1.15 GBytes  9.87 Gbits/sec
[  5]   6.00-7.00   sec  1.15 GBytes  9.87 Gbits/sec
[  5]   7.00-8.00   sec  1.15 GBytes  9.87 Gbits/sec
[  5]   8.00-9.00   sec  1.15 GBytes  9.87 Gbits/sec
[  5]   9.00-10.00  sec  1.15 GBytes  9.87 Gbits/sec
[  5]  10.00-10.11  sec   135 MBytes  9.88 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.11  sec  11.5 GBytes  9.75 Gbits/sec                  receiver
I could run it for a longer period of time, but the throughput does not change significantly. System load showed 0.47 with iperf3 at 57% on one core.
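The client-side command isn't in that paste and I don't have the exact flags handy, but the obvious invocation against that listener would be something like:

Code:
iperf3 -c 192.168.1.1 -B 192.168.0.2 -t 10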

And here are the loader.conf variables - I have "the usual" sysctl tuning for 10 GigE like larger rx/tx buffers -
Code:
autoboot_delay="3"
hw.cxgbe.config_file = "flash"
hw.cxgbe.fcoecaps_allowed = "0"
hw.cxgbe.iscsicaps_allowed = "0"
hw.cxgbe.nm_rx_ndesc = "4096"
hw.cxgbe.nofldrxq = "8"
hw.cxgbe.nofldtxq = "8"
hw.cxgbe.nrxq = "8"
hw.cxgbe.ntxq = "8"
hw.cxgbe.qsize_rxq = "4096"
hw.cxgbe.qsize_txq = "4096"
hw.cxgbe.rdmacaps_allowed = "0"
hw.cxgbe.toecaps_allowed = "0"
hw.usb.no_pf="1"
if_lagg_load = "YES"
if_opensolaris_load = "YES"
ipmi_load = "YES"
kern.cam.boot_delay = "5000"
kern.geom.label.disk_ident.enable = "0"
kern.geom.label.gptid.enable = "0"
kern.hz = "100"
kern.ipc.nmbclusters = "524288"
kern.ipc.nmbjumbo9 = "524288"
kern.ipc.nmbjumbop = "524288"
kern.timecounter.hardware = "HPET"
legal.intel_ipw.license_ack = "1"
legal.intel_iwi.license_ack = "1"
net.inet.tcp.hostcache.cachelimit = "0"
net.inet.tcp.soreceive_stream = "1"
net.inet.tcp.tso = "1"
net.isr.bindthreads = "1"
net.isr.defaultqlimit = "4096"
net.isr.maxthreads = "-1"
net.link.ifqmaxlen = "16384"
net.pf.request_maxcount="1048576"
t5nex_load = "YES"
zfs_load = "YES"
The sharp-eyed among you will notice I am breaking a lot of Netgate's standard guidance for pfSense tuning, but it works. I am using 9k jumbo packets wherever possible.
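The runtime sysctls I'm hand-waving at are the usual FreeBSD 10 GigE buffer bumps, roughly along these lines (ballpark values only, not a verbatim copy of my config - tune them for your own memory and NICs):

Code:
kern.ipc.maxsockbuf=16777216
net.inet.tcp.recvbuf_max=16777216
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvspace=262144
net.inet.tcp.sendspace=262144
net.inet.ip.intr_queue_maxlen=4096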

Hope that helps! :)
 
  • Like
Reactions: nedimzukic2

PigLover

Moderator
Jan 26, 2011
3,215
1,571
113
What throughput can you sustain with smaller packets (1500 MTU or below)? IMIX would be interesting, but there is no easy way to do it with iPerf.
 

IamSpartacus

Well-Known Member
Mar 14, 2016
2,520
652
113
@IamSpartacus
This is not 100% accurate as there are more devices and switches, but it gives a general idea of what it looks like.

As for pricing, my suggestion would be to contact Netgate and ask for a quote where you list your needs/wants and see what they can offer you.

View attachment 14385
Thank you for this diagram, very helpful. Curious, though, as to why you trunk your DMZ VLAN everywhere. What kinds of services do you have on your DMZ network?


@IamSpartacus I have had similar discussions on other threads, but never showed the performance test results from my home firewall. This seems like as good a thread as any. Key hardware: Supermicro X10SRM, Intel E5-2667v4 (8-core), 2x Chelsio T520-BT through a Brocade ICX7450-48 with 12x 10 GigE ports via modules.
I'm not clear: is your pfSense box doing your inter-VLAN routing, or is the Brocade ICX7450-48?