So, I got bored and bought a whole bunch of hardware to try to move to 10GbE. As you can guess, things turned out to be a little more complicated than I expected.
As a note, I'm a software engineer but definitely not a networking specialist. I haven't really tried to build out a network since college, which was a long time ago. Half the reason I decided to do this was to learn, and I've learned a LOT already. But I've also realized there are so many different ways to set this up, and I'm hoping someone who is an expert at this can suggest how they'd set it up as a "professional" network (or just how they'd do it themselves) so I can try to build mine out in a similar way.
The hardware I have to work with:
ICX7250-24P
CheckPoint 13500 (running pfSense, dual 2650 v2, 64gb RAM, 8x1gbe, 4xSFP+)
DS1621+ with 2xSFP+
SuperMicro rack server (running ESXi, another thing I have to learn - dual 2650 v4, 128gb RAM, 2xSFP+)
Initially, I figured I'd just create VLANs on the ICX, plug pfSense into each VLAN via the 4x SFP+, and then run Suricata for IDS/IPS while routing between them at 10GbE! I have since learned that is clearly not going to happen.
I'll detail what I've tried so far, but if you already know why the above is going to fail, you can skip this section; in the next one I'll detail what I'm "trying" to build out.
Bottom line: pfSense is never going to route at 10GbE on a dual 2650 v2. I tried multiple different configurations (and even bought a pair of 2630L v2 to measure power consumption differences). I also tried a pair of 2667 v2 in the CheckPoint 13500, but it won't boot with those - I'm not sure why.
The best speed I can get across a single connection is around 5 gbit/s, and it pegs one core at 100% to push that. That best case was with one interface per VLAN in pfSense. After that, I tried setting up the ICX port as a trunk and defining the VLANs in pfSense (over a single interface, and also as a LAGG), and the performance was even worse (around 4 gbit/s). The 2630L v2 is about 1 gbit/s slower in all tests, but idle power consumption was about 20 watts lower - and about 40 watts lower under load.
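For anyone wondering why a single stream tops out around 5 gbit/s: as far as I understand it, one flow generally lands on one core, and the per-packet time budget gets tiny fast. Rough back-of-envelope math (assuming standard 1500-byte MTU frames, plus Ethernet framing overhead):

```python
# Back-of-envelope: packets per second one core must route at a given
# line rate. Assumes 1500-byte MTU frames, which are ~1538 bytes on
# the wire once Ethernet preamble, header, FCS, and IFG are counted.
WIRE_BYTES = 1538

def pps(gbits: float) -> int:
    """Packets per second needed to sustain `gbits` Gbit/s of MTU-size frames."""
    return int(gbits * 1e9 / (WIRE_BYTES * 8))

for rate in (1, 5, 10):
    budget_ns = 1e9 / pps(rate)
    print(f"{rate:>2} Gbit/s ≈ {pps(rate):,} pps ≈ {budget_ns:,.0f} ns per packet")
```

At 10 Gbit/s that's roughly 800k packets per second, i.e. a bit over a microsecond per packet on a single core - which seems consistent with a 2650 v2 core giving up around the 5 gbit/s mark.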
Speaking of pushing it - running Suricata was interesting. In legacy mode, it didn't actually hurt throughput (I guess because it inspects copies of the packets after the fact?). However, turning Suricata on maxes out three more CPU cores - for a total of 4 cores running at 100% just to push 4 gbit/s across the network. The best part: the server pulls a full 104 more watts (more than the entire switch uses!) whenever someone copies a file from the NAS.
In inline mode, it can't even route a full gbps. I'm not sure if that's because it's inspecting the same packets as they come in and go out, but it stays in the 900 mbit/s range (600 mbit/s for the 2630L v2), again maxing out 4 cores - it can't even handle my ISP bandwidth.
I'm starting to see why I should probably keep the inter-VLAN L3 routing on the switch, since it can route at full rate without doubling my power consumption. I do have a UPS, but at these power levels it won't last very long. I'm actually thinking of pulling half the RAM out of the pfSense box to lower power consumption a bit, as I don't think there's any reality where it needs 64 GB - even running Suricata in legacy mode on traffic to and from the internet.
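To put the UPS worry in numbers, here's the rough runtime math I've been doing. The capacity and load figures below are made-up placeholders for illustration, not measurements from my rack:

```python
# Rough UPS runtime estimate. Capacity and load numbers here are
# hypothetical placeholders, not actual measurements.
def runtime_minutes(capacity_wh: float, load_w: float, efficiency: float = 0.9) -> float:
    """Minutes of runtime for a given load, assuming ~90% inverter efficiency."""
    return capacity_wh * efficiency / load_w * 60

base = runtime_minutes(600, 250)        # idle-ish rack on a ~600 Wh UPS
busy = runtime_minutes(600, 250 + 104)  # plus Suricata's extra 104 W
print(f"{base:.0f} min idle vs {busy:.0f} min with Suricata spun up")
```

An extra ~100 W of inspection load knocks something like a third off the runtime in this toy example, which is why I'd rather not burn it on intra-LAN file copies.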
Therefore, I'm pretty sure I do want to do routing on the ICX7250-24P. It also dawned on me that I can't manage the IPMI of the pfSense box if pfSense is the router and it's off.
I guess I'm going to have to make extensive use of the ACLs on the ICX7250, since I'd really like to segregate the VLANs and control what can talk to what. It's a real pain to do (I really wish it had a UI like pfSense's), but I've played with it a bit and I can make it work.
I assume I want something like:
Management VLAN (only LAN can connect)
Server VLAN (only HTTP/S from internet, management from LAN - I run a few web servers, nothing else really)
NAS VLAN (only NFS and SMB from server or LAN, management from LAN)
LAN VLAN
Wireless VLAN (no access to anything except internet)
(And if I want to implement a VPN on pfSense in the future, how would that even work? It should probably have its own rules for what it can access in the VLANs?)
I'm guessing those will have to be defined as inbound ACL rules on the ICX? What about outbound rules to the internet per VLAN - will those still be defined on pfSense?
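For anyone wanting to suggest concrete ACLs, here's roughly what I've pieced together from the FastIron docs, using made-up addressing (VLAN 50 = Wireless on 10.0.50.0/24, pfSense's DNS at a hypothetical 10.0.99.2). My understanding is that an inbound ACL on a VE filters traffic *sourced from* that VLAN, so "Wireless can only reach the internet" would live on the Wireless VE. I could easily be holding this wrong, so corrections welcome:

```
! Hypothetical sketch, FastIron 08.0.x-style syntax (addresses invented)
vlan 50 name WIRELESS by port
 untagged ethernet 1/1/10
 router-interface ve 50
!
! Allow DNS to pfSense, block everything else internal (RFC1918),
! then permit the rest (i.e. the internet)
access-list 150 permit udp any host 10.0.99.2 eq 53
access-list 150 deny ip any 10.0.0.0 0.255.255.255
access-list 150 deny ip any 172.16.0.0 0.15.255.255
access-list 150 deny ip any 192.168.0.0 0.0.255.255
access-list 150 permit ip any any
!
interface ve 50
 ip address 10.0.50.1 255.255.255.0
 ip access-group 150 in
```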
I'd still like to run some kind of IDS/IPS on pfSense, plus pfBlockerNG. I'm guessing that means pfSense does have to be the DNS server.
From my understanding, when the ICX does the routing, pfSense can still be connected with multiple interfaces across the VLANs, or with a single interface. I've read that a single interface is the suggested setup - but in that case you need a separate DHCP server, since pfSense doesn't support handing out leases for subnets it isn't attached to (not really a problem, I DO have more servers). However, I don't fully understand why the single-interface setup is preferable, or what kinds of issues the multi-interface one could cause. It would be nice to have one easy-to-view page with all the leases.
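In case it helps, here's my understanding of the single-interface (routed transit) setup, again with invented addressing: a small /30 between the ICX and pfSense, the ICX's default route pointing at pfSense, and DHCP relayed from each VLAN to a DHCP server. The ICX side would look something like:

```
! Hypothetical /30 transit on VLAN 99 between the ICX and pfSense
interface ve 99
 ip address 10.0.99.1 255.255.255.252
!
! Default route toward pfSense (10.0.99.2 in this sketch)
ip route 0.0.0.0 0.0.0.0 10.0.99.2
!
! Relay DHCP broadcasts from a VLAN to a DHCP server elsewhere
interface ve 40
 ip helper-address 1 10.0.20.10
```

On the pfSense side I believe you'd then add static routes for the internal subnets via 10.0.99.1 and make sure outbound NAT covers them, so return traffic and internet access work.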
I've also heard ESXi can understand VLANs, but I haven't really played around with that or figured out what benefit it would give. ESXi always complains that I should use 2 links for redundancy, but I don't think that's applicable to me - I was only planning on wiring 1 SFP+ each from the NAS and the server to the ICX.
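From what I've read, the point is that the single SFP+ can carry a trunk: each port group on the vSwitch gets a VLAN tag, so different VMs land in different VLANs over one physical link. Something like this from the ESXi shell (port-group names and VLAN IDs invented for illustration):

```
# Tag an existing standard vSwitch port group with VLAN 30
esxcli network vswitch standard portgroup set --portgroup-name "NAS Net" --vlan-id 30

# Or pass all tags through to the guest (VGT mode) with VLAN 4095
esxcli network vswitch standard portgroup set -p "Trunk" -v 4095
```

The ICX port facing the ESXi host would then be tagged in each of those VLANs.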
At this point, I'd take any suggestions on how you'd build out a network with these things! I've already experimented a lot, I don't mind experimenting with multiple different approaches. I still have no idea what I'm doing but hopefully I'll learn!
Thanks for any suggestions!