pfSense build help (future guide)


AnIgnorantPerson

New Member
Mar 31, 2019
14
0
1
Alright, so I'm starting to research a dive into a pfSense build. I assume I'll go custom, either from spare parts or a new PC; I figure custom will be cheaper and better overall.

I will need 3x 10GbE and ~2x 1GbE:
  • 10GbE to server
  • 10GbE to desktop
  • 10GbE to 1GbE switch
  • 1x 1GbE to fiber modem (could be 10GbE in the future)

or

I will need 2x 10GbE and 5-10x 1GbE:
  • 10GbE to server
  • 10GbE to desktop
  • 5-10x 1GbE directly to all other parts of the house (modem, PC, TV, WiFi AP)

I have 2 options to repurpose

Option 1
The real question: does ECC matter, and does this MB have enough slots for the network requirements?

Intel Core i7 3770 @ 3.40GHz
32.0GB Dual-Channel DDR3 @ 663MHz (9-9-9-24)
MSI B75A-G43 (MS-7758) (SOCKET 0)
  • 1 x PCI Express 3.0 x16
  • 1 x PCI Express 2.0 x16 (wired @ x4)
  • 2 x PCI Express x1
  • 3 x legacy PCI


Option 2
Repurpose a TS140
E3-1200 v3
16-32GB of 2133 ECC (IIRC)
The specs online say:
  • Slot 1: PCIe 3.0 x16 (x16-wired); full-height, half-length
  • Slot 2: PCIe 2.0 x1 (x1-wired); full-height, half-length
  • Slot 3: PCIe 2.0 x16 (x4-wired); full-height, half-length
  • Slot 4: PCI 32 bit / 33 MHz (5 V); full-height, half-length
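
As a rough sanity check on the slot question for either board, per-lane PCIe throughput can be compared against the NICs' line rate. The per-lane figures below are approximate usable numbers after encoding overhead, an assumption for illustration rather than anything measured on these boards:

```python
# Rough check: can a given PCIe slot feed an SFP+ NIC at line rate?
# Per-lane figures are approximate usable throughput after encoding overhead
# (an assumption); real-world numbers land a bit lower due to protocol overhead.

PER_LANE_GBPS = {
    "PCIe 2.0": 4.0,   # 5 GT/s with 8b/10b encoding
    "PCIe 3.0": 7.88,  # 8 GT/s with 128b/130b encoding
}

def slot_headroom(gen: str, lanes: int, ports: int, port_gbps: float = 10.0) -> float:
    """Slot bandwidth divided by the NIC's total line rate (>1.0 means enough)."""
    return (PER_LANE_GBPS[gen] * lanes) / (ports * port_gbps)

# Option 1 (B75A-G43): single-port X520 in the PCIe 2.0 x16 slot wired at x4
print(f"PCIe 2.0 x4, 1x 10GbE: {slot_headroom('PCIe 2.0', 4, 1):.1f}x")
# Option 2 (TS140): a dual-port card in slot 3 (PCIe 2.0 x16 wired at x4)
print(f"PCIe 2.0 x4, 2x 10GbE: {slot_headroom('PCIe 2.0', 4, 2):.1f}x")
# Either board's PCIe 3.0 x16 slot has headroom to spare for a dual-port card
print(f"PCIe 3.0 x16, 2x 10GbE: {slot_headroom('PCIe 3.0', 16, 2):.1f}x")
```

In short, both boards can feed one single-port 10GbE card at full rate from their x4-wired slot and another from the PCIe 3.0 x16 slot; a dual-port card would want the x16-wired slot, and the x1 slots only have enough bandwidth for 1GbE.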


I would prefer to repurpose the Ivy Bridge 3770 since it's the slower build, but I'm asking here to make sure I make the right choices, obviously.

For PCIe network adapters I would just go used or generic to save money. All my current 10GbE cards are used Intels bought cheap on eBay.

Intel(R) Ethernet Server Adapter X520-1


Any thoughts, input, and so on is always appreciated.
 

BlueLineSwinger

Active Member
Mar 11, 2013
181
71
28
Your proposed topologies appear needlessly convoluted. Better, I believe, would be a simple box with two 1 Gb NICs. The LAN side would be connected to a switch with ~4 10 Gb ports (for the server, etc.) and however many 1 Gb ports you feel will cover your current and future needs. Unless you're planning to put the desktop and server on separate subnets and route between them through pfSense somehow (i.e., you're not going to be using a layer-3 switch), you probably don't need any 10 Gb NICs on it.

Which means that either box you intend to repurpose will be massively overpowered for basic routing and firewall duty. Though you could also load up other apps on it, such as Snort or a VPN server.
 

Dreece

Active Member
Jan 22, 2019
503
161
43
@BlueLineSwinger do note the OP's username LOL

I would go along with your plan, @AnIgnorantPerson, but only because tinkering is fun; it will all work no matter which option you choose.

However, I feel option 2 is better, and yes, ECC makes a difference: it is the difference between an odd crash out of the blue and none, usually driven by a corruption that festers in memory until a reboot is required. If you want reliability, ECC and high-quality enterprise components such as Xeon CPUs and server-class motherboards are the way to go, and I'm not just saying that for bragging rights; it's a fact many of us have learned the hard way over countless years. The argument thrown back is that pfSense doesn't really push the hardware to its knees, so it's a non-issue, but then it really does boil down to the hardware you're using, and one man's luck may not be another man's. With server components, no luck is required; they're designed to be consistently reliable no matter what.

Regarding NICs, nothing wrong with the Intels, but you won't get any enhanced CPU offloads with the 500 series; even the 710s are offload-limited. The 722 is the way to go with Intel, but they come at a premium. I currently use Chelsio T5s with iWARP RDMA; they're superb cards when *SUPPORTED*, and a royal pain in the backside when not supported. In that regard Mellanox is a better buy, and even Solarflare OpenOnload cards are solid, but these are not supported as widely as the Intels and Mellanoxes. All of the aforementioned may be moot if pfSense doesn't fully support a card's offloading features, so again you would have to research that.

All in all, you know what you're doing, and only your budget dictates how convoluted a setup you end up with. But BlueLineSwinger is dead right about applying the KISS discipline here; sometimes it is better to build up as you go rather than going all in and then banging your head for wasting time and money on something you really didn't need (i.e., overkill), or because you didn't do the research up front and ended up buying a NIC or two with support limitations, cough *Chelsio*.

Bottom line: stick with server parts, in my opinion. There are many who run pfSense on desktop/thin-client parts, but when you look at their custom builds you will notice they tend to use very efficient CPUs in teeny little thin-client setups that are routinely used in light industry too, so they have well-known reliability records in that regard. Please note a typical desktop motherboard is often the weakest link, especially those where companies try to cram every feature into something at an attractive price; something's going to give, and it's usually quality.

Oh, and if you do go the mini-server route, you could consider KVM and running pfSense as a virtual machine; that way you get a few more VMs to play with, which lets you extend your pfSense setup further without being limited by the pfSense kernel. My 2 cents. :)
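
If the KVM route appeals, a minimal sketch of driving a pfSense guest through the libvirt Python bindings might look like this; the connection URI and the domain name "pfsense" are assumptions for illustration, not anything set up in this thread:

```python
# Minimal sketch: manage an (assumed, already-defined) pfSense guest on a
# local KVM host via the libvirt Python bindings.
import libvirt

conn = libvirt.open("qemu:///system")   # local system-level hypervisor

dom = conn.lookupByName("pfsense")      # hypothetical domain name
if not dom.isActive():
    dom.create()                        # boot the firewall VM if it's down

# See what else is running alongside it (FreeNAS, Linux test VMs, ...)
for d in conn.listAllDomains(libvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE):
    print(d.name())

conn.close()
```

The same host can then carry the extra VMs mentioned above without touching the pfSense install itself.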
 

ttabbal

Active Member
Mar 10, 2016
747
207
43
47
I recently bought a machine for pfSense. It doesn't have to be super expensive, and mine can route at 990Mbps, which is as fast as I could ask for with the included 1Gb ports. The machine is an older server from eBay; I bought it mostly for the rack-mount chassis, but the internals basically came for free and were good enough. An older setup, but still a decent machine with 24GB of ECC. I added an SSD I had sitting around for a boot drive, and it was up and running.

Supermicro 1026T-M3F 1U Server X8DTL-3F 1x E5620 24GB RAM w/ 1PSU & Rails | eBay

The CPU is a bit on the older side, but it has AES-NI and is plenty fast. It did not include the drive caddies; I do wish people would include those. The seller took $75, and the shipping was worth it; they packed it extremely well.

IMO, older server gear with ECC is the way to go for this sort of thing: more reliable and designed to run 24/7. That said, I'm fortunate in not needing to care about noise, so the high-speed tiny fans don't bother me; the servers live away from people, where the noise doesn't bother anyone.

I wouldn't use the machine as a switch. Get a switch and use VLANs if you want isolation. A nice 4x 10GbE + 48x 1GbE switch is available for about $100, cheaper than all the NICs you are looking at. :)
 

AnIgnorantPerson

New Member
Mar 31, 2019
14
0
1
@Dreece said: (post quoted above in full)
I used to have a MikroTik 10GbE switch, but it started acting glitchy and I don't really know why; it could be my R7000 (which also has WiFi dropouts), or both. So I figured that instead of re-buying expensive prosumer garbage, I'd invest that money into a proper system, since the costs should be comparable given that I can salvage an old system.

I bought two of these used off eBay for my server and desktop. I got them for 40 bucks each, though I saw an STH thread about counterfeits, I think for the 4x 1GbE cards. They appeared to work fine before the switch crapped out.
Intel X520-DA1 single port SFP+ 10Gb Ethernet network card low profile

I am really looking for advice on what NICs I should get for the box to fit my needs. I use my server as a 24-bay SnapRAID system, so multiple rigs back up to it at once, and I also use it for Plex and more. Hence 1GbE is not enough for the server or my desktop.

Do you have any links and recommendations on various parts for me to research?

@ttabbal said: (post quoted above in full)
Can you give recommendations and links? See above for a reference to the current single-slot NICs my desktop and server use; I also explain my setup in greater detail there.


This was the switch I was using; it crapped out after a year. I also never got full 10GbE even though all the links reported 10Gb; my testing showed something like 2-2.5Gbps for some reason. Not sure if that was a routing issue, the NICs, the switch, or the OS (I use Win 7 Pro, so I wouldn't be surprised if there was a single-thread limitation within the OSes on the computers; see the parallel-stream test sketch below). I haven't tested any further because the MikroTik switch started dropping connections and getting weird IPs.
https://smile.amazon.com/gp/product/B00KVF7S40/ref=oh_aui_search_asin_title?ie=UTF8&psc=1
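
One quick way to separate a per-stream ceiling from a link problem is to compare a single iperf3 stream against several in parallel. A minimal sketch, assuming iperf3 is installed on both machines and the 10GbE peer (the address below is a placeholder) is already running "iperf3 -s":

```python
# Compare 1 vs 4 parallel TCP streams with iperf3 to see whether a ~2-2.5 Gbps
# ceiling is per-stream (CPU/window limited) or affects the link as a whole.
# Assumes iperf3 is installed locally and "iperf3 -s" is running at SERVER;
# the address is a placeholder, not something from this thread.
import json
import subprocess

SERVER = "192.168.1.10"  # placeholder address of the 10GbE peer

def run_iperf(streams: int, seconds: int = 10) -> float:
    """Return measured throughput in Gbps for the given number of streams."""
    out = subprocess.run(
        ["iperf3", "-c", SERVER, "-P", str(streams), "-t", str(seconds), "-J"],
        capture_output=True, text=True, check=True,
    )
    result = json.loads(out.stdout)
    return result["end"]["sum_received"]["bits_per_second"] / 1e9

for n in (1, 4):
    print(f"{n} stream(s): {run_iperf(n):.2f} Gbps")
```

If four streams get close to line rate while one stays stuck around 2-2.5Gbps, the bottleneck is per-stream (window size, single-core interrupt handling) rather than the switch or cabling.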
 

ttabbal

Active Member
Mar 10, 2016
747
207
43
47
For 1Gb, I'm all about Intel NICs. Plentiful, cheap, compatible.

That doesn't seem to be as big a deal with 10Gb. I only have a pair of Mellanox ConnectX-2s. They work fine for my needs, and iperf tests show them running at 9.9Gbps, good enough for zero tuning.

My switches are an LB4M and one of the Aruba switches discussed on the forum. No trouble with either.
 

Dreece

Active Member
Jan 22, 2019
503
161
43
If all you want is pfSense, then @ttabbal is on the money: stick with Intel. Which particular model? I couldn't really say, because to name one would mean discounting others, and I am not an avid pfSense user (I tinker with it, but I actually run my own custom Linux firewall, i.e., scripted iptables etc., and that's Debian-based Linux, not FreeBSD; FreeBSD is the underlying OS pfSense is built upon, with its own drivers and kernel, so it would be wise to hit the pfSense community with more direct questions about which NICs are thoroughly tested and recommended)...

If you do want to go virtualised and run a few VMs, say pfSense / FreeNAS / custom Linux builds, then I'd consider throwing in a Mellanox too for offloading such as RDMA. However, RDMA only works when both ends of the wire have a card supporting the same standard, i.e., RoCE or iWARP, so you have to keep that in mind.

Switch-wise, it comes down to easy/energy-efficient/quiet deployment vs noisy/energy-hungry/learning-curve. The best thing to do is to throw up a new post about switches for your specific needs, and others will be happy to help, as many here have setups designed to serve needs similar to yours.

Personally, I don't believe I can push my preferences onto anyone else; I'm always changing my tune and tend to float in the general direction of wanting to learn different platforms. For me it is as much a hobby of knowledge-seeking as of function, which makes my hardware selection very specific to my active and upcoming projects rather than a fixed set of requirements, with virtualisation being a major player in the design of my network topology. What hardware I have today may vanish tomorrow.

To summarise... Intel just works pretty much everywhere. Mellanox works when supported on a given platform and also brings more advanced offloading features to the table, which means a lesser CPU can handle higher bandwidth, or a better CPU can spare more cycles for smoother multitasking; that's highly beneficial in virtualised setups, since without offloading a single CPU core can often max out and momentarily impact other functions.

Both Intel and Mellanox support RSS, and the more cores your processor has, the better when it comes to high bandwidth usage, because the load can be distributed across cores; just be sure to always set the base core to anything but the first core and its hyperthreaded counterpart where applicable. Processor affinity also kicks in when you want multiple applications to run smoothly and reliably, and both Linux and Windows give you the ability to fine-tune things to that effect. Most people just let the OS handle it all; the more picky of us like to micromanage the managers lol
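
On the affinity point, here is a minimal sketch of pinning a process away from core 0, assuming a Linux host; the PID and core set are placeholders, and FreeBSD/pfSense would use cpuset(1) instead, so this is only to illustrate the idea:

```python
# Minimal sketch (Linux only): restrict a bandwidth-heavy process to specific
# cores so it doesn't compete with interrupt/NIC-queue work on core 0.
# The PID and core list are placeholders, purely to illustrate the idea.
import os

pid = 1234        # placeholder PID of the process to pin
cores = {2, 3}    # keep it off core 0 (and its hyperthread sibling)

os.sched_setaffinity(pid, cores)   # restrict the process to those cores
print(os.sched_getaffinity(pid))   # verify: should print {2, 3}
```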
 