Would appreciate some ICX7250-24P and pfSense advice!


ryanj

New Member
Aug 10, 2023
So, I got bored and bought a whole bunch of hardware to try and move to 10GbE. As you can guess, things were a little more complicated than I expected :)

As a note, I'm a software engineer but definitely not a networking specialist. I haven't really tried to build out a network since college, which was a long time ago. Half the reason I decided to do this was to learn, and I've learned a LOT already, but I've also realized there are so many different ways to set this up. I'm hoping someone who is an expert at this could suggest how this would be set up if it were a "professional" network (or just how they'd do it themselves) so I can try to build it out in a similar way.


The hardware I have to work with:

ICX7250-24P
CheckPoint 13500 (running pfSense, dual 2650 v2, 64 GB RAM, 8x 1GbE, 4x SFP+)
DS1621+ with 2x SFP+
SuperMicro rack server (running ESXi, another thing I have to learn - dual 2650 v4, 128 GB RAM, 2x SFP+)

Initially, I figured I'd just create VLANs on the ICX, plug in pfSense to each VLAN via the 4xSFP+, and then I could run Suricata for IDS/IPS while routing between them at 10GbE! I have learned that clearly is not going to happen :)



I'll detail what I've tried so far; if you already know why the above is going to fail, you can skip this section. After that, I'll detail what I'm "trying" to build out.

Bottom line: pfSense is never going to route at 10GbE on a dual 2650 v2. I tried multiple configurations (and even bought a pair of 2630L v2 to measure power consumption differences). I also tried a pair of 2667 v2 in the CheckPoint 13500, but it won't boot with those - I'm not sure why.

The best speed I can get across a single connection is around 5 gbit/s, which pegs one core at 100%, and that was with 1 interface per VLAN in pfSense. After that, I tried setting up the ICX as a trunk and defining the VLANs in pfSense (over a single interface, and also as a LAGG), and the performance was even worse (around 4 gbit/s). The 2630L v2 is worse still, about 1 gbit/s slower in all tests, but idle power consumption was about 20 watts lower - and about 40 watts lower when pushing it.

Speaking of pushing it - running Suricata was interesting. In legacy mode, it didn't actually hurt performance (I guess because it inspects copies of the packets after the fact?). However, turning Suricata on maxes out another 3 CPU cores - for a total of 4 cores running at 100% just to push 4 gbit/s across the network. The best part is that the server pulls a full 104 more watts (more than the entire switch uses!) whenever someone tries to copy a file from the NAS :)

In inline mode, it can't even route a full gbit/s. I'm not sure if this is because it's inspecting the same packets as they come in and go out, but it stays in the 900 mbit/s range (and the 600 mbit/s range for the 2630L v2). That also maxes out 4 cores - it can't even handle my ISP bandwidth.

I'm starting to see why I should probably keep the L3 routing between VLANs on the switch, since it can actually route at full rate without doubling my power consumption. I do have a UPS, but at these power levels it won't last very long. I'm actually thinking of pulling half the RAM out of the pfSense box to lower power consumption a bit, as I don't think there's any reality where it needs 64 GB - even if I run Suricata in legacy mode on traffic to and from the internet.



Therefore, I'm pretty sure I do want to do the routing on the ICX7250-24P. It really hit home when I realized I can't even manage the IPMI of the pfSense box if it's the router and it's off :)
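
For reference, this is roughly what inter-VLAN routing on the ICX looks like as far as I can tell (VLAN IDs, ports, and addresses are just placeholders based on my plan below, and it assumes the switch is running the router image):
Code:
vlan 20 name NAS by port
 tagged ethernet 1/2/2
 router-interface ve 20
!
vlan 30 name LAN by port
 untagged ethernet 1/1/1 to 1/1/8
 router-interface ve 30
!
interface ve 20
 ip address 10.10.20.2 255.255.255.0
!
interface ve 30
 ip address 10.10.30.2 255.255.255.0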

I guess I'm going to have to make extensive use of ACLs on the ICX7250, since I'd really like to segregate the VLANs and control what can talk to what. It's a pain to do (I really wish it had a UI like pfSense), but I played with it a bit and I can make it work - I've put a rough sketch after the list below.

I assume I want something like
Management VLAN (only LAN can connect)
Server VLAN (only HTTP/S from internet, management from LAN - I run a few web servers, nothing else really)
NAS VLAN (only NFS and SMB from server or LAN, management from LAN)
LAN VLAN
Wireless VLAN (no access to anything except internet)
(What if I want to implement a VPN on pfSense in the future? How would that even work? It should probably have its own rules for what it can access in the VLANs.)
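
As an example of the kind of ACL I mean, the Wireless VLAN rule ("internet only") might look something like this on the ICX - the VLAN ID (50), subnet, and DNS address are made up for illustration:
Code:
ip access-list extended WIFI-IN
 permit udp 10.10.50.0 0.0.0.255 host 10.10.30.1 eq 53
 deny ip 10.10.50.0 0.0.0.255 10.10.0.0 0.0.255.255
 permit ip 10.10.50.0 0.0.0.255 any
!
interface ve 50
 ip access-group WIFI-IN in
Applied inbound on the Wireless VLAN's ve, that should allow DNS, block everything else aimed at the other 10.10.x.x subnets, and let the rest out to the internet.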

I'm guessing those will have to be defined as inbound rules with the ACLs on the ICX? What about outbound rules to the internet per VLAN, will those still be defined on pfSense?

I'd still like to run some kind of IDS/IPS and pfBlockerNG on pfSense. I'm guessing that means pfSense does have to be the DNS server.

From my understanding, when doing the routing on the ICX, pfSense can still be set up with multiple interfaces across the VLANs, or with a single interface. I've read that a single interface is the suggested setup - but in that case you need a separate DHCP server, since pfSense's DHCP only serves subnets it has an interface in (not really a problem, I DO have more servers). However, I don't fully understand why the single-interface setup is preferable - I don't quite follow what kinds of issues the multi-interface setup could cause. It would be nice to have one easy-to-view page with all the leases.

I've also heard ESXi understands VLANs, but I haven't really played around with that or figured out what benefit it would give me. ESXi always complains that I should use 2 links for redundancy, but I don't think that's applicable to me - I was only planning on wiring 1 SFP+ each from the NAS and the server to the ICX.
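
If I'm understanding the ESXi VLAN thing right, the point is that the single SFP+ from the ESXi host can carry several VLANs as tags, and each port group in ESXi just gets the matching VLAN ID, so VMs can sit in different VLANs over one cable. On the ICX side that would presumably just mean tagging the ESXi-facing port into each VLAN (1/2/3 is a guess for which port I'd use):
Code:
vlan 20
 tagged ethernet 1/2/3
vlan 30
 tagged ethernet 1/2/3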



At this point, I'd take any suggestions on how you'd build out a network with this gear! I've already experimented a lot, and I don't mind trying multiple different approaches. I still have no idea what I'm doing, but hopefully I'll learn!

Thanks for any suggestions!
 

ryanj

New Member
Aug 10, 2023
So ... I figured out a way to achieve this, but I have absolutely no idea about the implications of my approach.

I left pfSense in the router-on-a-stick configuration, with each VLAN defined in pfSense and an interface on each one (10.10.x.1).

Then, on the ICX, I read about the concept of VRFs, so I created a VRF for each VLAN and set each VRF's default route to point at the corresponding pfSense interface.

This basically gave me something like
LAN-VLAN - LAN-VRF - ve 30 (10.10.30.2) - default route 10.10.30.1 (pfSense)
NAS-VLAN - NAS-VRF - ve 20 (10.10.20.2) - default route 10.10.20.1 (pfSense)
...

Then, I just had to configure inter-VRF routing between the VRFs I wanted to be able to access each other.
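
For anyone curious, each VRF ended up looking roughly like this (syntax from memory, so double-check it - the rd value is just something I picked, and 1/2/1 is the port toward pfSense in this sketch):
Code:
vrf LAN-VRF
 rd 30:30
 address-family ipv4
exit-vrf
!
vlan 30 name LAN by port
 tagged ethernet 1/2/1
 router-interface ve 30
!
interface ve 30
 vrf forwarding LAN-VRF
 ip address 10.10.30.2 255.255.255.0
!
vrf LAN-VRF
 address-family ipv4
 ip route 0.0.0.0/0 10.10.30.1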

For some reason though, it would give me this message every time
Code:
SSH@ICX7250-24P Router(config-vrf-SVR-VRF-ipv4)#ip route 10.10.30.0/24 ve 30
Info: Outgoing interface vrf LAN-VRF does not match static route vrf SVR-VRF
This way, pfSense can still do the DHCP per VLAN, and I just have to override the gateway it hands out to be 10.10.x.2.

It all SEEMS to work (the ICX routes between VLANs at line speed and passes traffic to pfSense when it can't, and tracert seems to make sense), but I have absolutely no idea what problems this configuration could cause.

If anyone has any ideas about this or thinks this is a horrible idea, let me know :)
 

ryanj

New Member
Aug 10, 2023
Well, I finally decided to just set up a transit network like most people suggested, as it seems to be the only sane way to achieve this.
I just put all of the ICX subnets under 10.10.0.0/16 and had pfSense route that through the ICX with one static entry.
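
Roughly, the ICX side now looks like this - the transit subnet numbering (10.20.0.0/30) and port are just what I picked, and on the pfSense side there's just a gateway pointing at 10.20.0.2 plus one static route sending 10.10.0.0/16 to it:
Code:
vlan 100 name TRANSIT by port
 untagged ethernet 1/2/1
 router-interface ve 100
!
interface ve 100
 ip address 10.20.0.2 255.255.255.252
!
ip route 0.0.0.0/0 10.20.0.1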

I got the above working and played with policy-based routing on both the ICX and the pfSense box to try to force packets along certain paths and fix some of the issues, but I realized it's way too much effort just to keep pfSense doing the DHCP. I don't understand the topic well enough to know what issues I may or may not have with asymmetric routing (which it was doing at first - Wireshark showed packets going out via one MAC address and coming back via another), so I decided to just skip the headache.

For fun, I did put a pair of E5-2673 v2 I got for cheap in the firewall to measure the performance. It was almost able to push 7 gbit/s through pfSense, at a cost of almost 270 watts :confused:. Realizing that running something like Suricata inline (which can't even manage half of that) is never going to hit high speeds no matter what CPU is used, I just went back to the 2630L v2 and pulled half the RAM, leaving 32 GB. This idles just under 130 watts ... but can still push almost 4 gbit/s with legacy inspection mode. I think legacy is "fine" for inspecting traffic leaving the network and detecting issues. If I really want, I can put a pfSense VM in front of my servers doing inline inspection before traffic hits them. I need a frontend anyway, since I need to run something like HAProxy or nginx to forward HTTP requests based on domain/subdomain. In that case, it's limited to the ~50 mbit/s of traffic the ISP gives me lol.

I decided to just put the DHCP server on the ICX, which made me decide to upgrade the ICX to 09.0.10 - I'm just going to hope the DHCP server in 9.x.x works better. I've spent sooooo much time with this thing that I've learnt the command line anyway. I wouldn't mind spinning up a VM for DHCP, but I'd like the hypervisor to get its address from DHCP in the first place - and in the event of a power outage, the VM server is one of the first things I'd want to shut off to conserve battery.
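
In case it's useful to anyone, the DHCP pools on the ICX look roughly like this (from memory, so double-check the exact commands on 09.0.10 - the DNS server here is the pfSense transit address from my sketch above):
Code:
ip dhcp-server enable
ip dhcp-server pool lan
 network 10.10.30.0/24
 dhcp-default-router 10.10.30.2
 dns-server 10.20.0.1
 excluded-address 10.10.30.2
 lease 1 0 0
 deploy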

I think using PBR with the ACLs will let me set up just about anything imaginable and force certain flows (like VLAN30->VLAN20 port 80) to be routed through pfSense, so I can control which rules I apply on the ICX and which on pfSense. I think this will ultimately let me configure everything exactly how I want it :)
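
To sketch what I mean for that example (the next-hop here is the pfSense transit address from above, and the ACL/route-map names are mine):
Code:
ip access-list extended WEB-TO-NAS
 permit tcp 10.10.30.0 0.0.0.255 10.10.20.0 0.0.0.255 eq 80
!
route-map VIA-PFSENSE permit 10
 match ip address WEB-TO-NAS
 set ip next-hop 10.20.0.1
!
interface ve 30
 ip policy route-map VIA-PFSENSE
Anything matching the ACL on the LAN ve gets forced over to pfSense; everything else still routes normally on the ICX.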