Striped SSDs for pfSense?


CJRoss

Member
May 31, 2017
91
6
8
I'm building a new AES-NI capable machine for pfSense. Now that it supports ZFS, I'm looking at running a pair of SSDs striped together. The current contender is the 860 EVO 250GB SATA version.

I'm not concerned about data loss as I'll have the config backed up and just reinstall. I plan on running squid, snort and other packages. This is for a gigabit connection with 10G internal networks.

Is this overkill? Should I use a different SSD? Not stripe?

Thanks.
 

IamSpartacus

Well-Known Member
Mar 14, 2016
2,515
650
113
I'm building a new AES-NI capable machine for pfSense. Now that it supports ZFS, I'm looking at running a pair of SSDs striped together. The current contender is the 860 EVO 250GB SATA version.

I'm not concerned about data loss as I'll have the config backed up and just reinstall. I plan on running squid, snort and other packages. This is for a gigabit connection with 10G internal networks.

Is this overkill? Should I use a different SSD? Not stripe?

Thanks.
I'd say it's overkill in the sense that you'd be very unlikely to notice any performance gain over using just a single (or mirrored) SSD(s).
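To put rough numbers on it (these are my assumptions, not benchmarks): even if every byte off a gigabit WAN somehow hit the disk, which it won't, a single SATA SSD has several times the needed write throughput, so a stripe buys you nothing you'd notice.

```python
# Back-of-the-envelope check: can one SATA SSD keep up with a gigabit WAN?
# All figures below are rough assumptions, not benchmarks.

gigabit_wan_mb_s = 1000 / 8           # ~125 MB/s if the link were fully saturated
single_sata_ssd_mb_s = 450            # conservative sequential write for an 860 EVO-class drive
two_disk_stripe_mb_s = 2 * single_sata_ssd_mb_s

print(f"WAN worst case  : ~{gigabit_wan_mb_s:.0f} MB/s")
print(f"Single SATA SSD : ~{single_sata_ssd_mb_s} MB/s")
print(f"Two-disk stripe : ~{two_disk_stripe_mb_s} MB/s")
print(f"Headroom on one : ~{single_sata_ssd_mb_s / gigabit_wan_mb_s:.1f}x")
```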
 
  • Like
Reactions: Patrick

Nizmo

Member
Jan 24, 2018
101
17
18
37
I'd say it's overkill in the sense that you'd be very unlikely to notice any performance gain over using just a single (or mirrored) SSD(s).
My pfSense sees about 10TB per week of incoming and outgoing traffic; be aware of your bandwidth usage and the physical longevity of your drives.
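As a rough worked example, assuming the ~150 TBW endurance rating Samsung publishes for the 250GB 860 EVO (worth double-checking the spec sheet) and the worst case where all of that traffic actually hits the disk:

```python
# Rough endurance estimate: weeks until a drive's rated TBW is exhausted
# at a given write rate. Assumed figures -- verify against the vendor spec sheet.

rated_endurance_tbw = 150      # ~rated TBW for a 250GB 860 EVO (assumed from the public spec)
tb_written_per_week = 10       # worst case: all 10TB/week of traffic gets written to disk

weeks = rated_endurance_tbw / tb_written_per_week
print(f"Rated endurance used up in ~{weeks:.0f} weeks (~{weeks / 52:.1f} years)")
```

In practice only a fraction of forwarded traffic ever gets written, but if you're caching or logging heavily, that's the kind of math to do before picking a consumer drive.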
 

IamSpartacus

Well-Known Member
Mar 14, 2016
2,515
650
113
My pfSense sees about 10TB per week of incoming and outgoing traffic; be aware of your bandwidth usage and the physical longevity of your drives.
I'm not sure if this was directed at me or the OP.

My pfSense sees close to that as well with my gigabit up/down connection and many packages running. I see no performance issues on mirrored SSDs.
 

Nizmo

Member
Jan 24, 2018
101
17
18
37
Sorry, that was for the OP. If I used my DC S3510 SSDs, that traffic would eat the drives in short order, versus the DC P3608s I use, which have much higher endurance before failure.

Personally, I wouldn't use consumer drives for pfSense.
 

IamSpartacus

Well-Known Member
Mar 14, 2016
2,515
650
113
Sorry, that was for the OP. If I used my DC S3510 SSDs, that traffic would eat the drives in short order, versus the DC P3608s I use, which have much higher endurance before failure.

Personally, I wouldn't use consumer drives for pfSense.
Good callout. I'm using higher-write-endurance drives as well.

So to the OP: EVOs are probably not a good bet if you're going to be monitoring a lot of traffic.
 

MiniKnight

Well-Known Member
Mar 30, 2012
3,072
973
113
NYC
Mirrored I can see a benefit of. Not striped.

On your 10TB incoming and outgoing, that's not all being written to disk, right?
 

mstone

Active Member
Mar 11, 2015
505
118
43
46
What are you hoping to achieve? Understand that squid is more likely to slow things down than speed them up.

High-endurance drives are a waste of money for this application.
 
Last edited:
  • Like
Reactions: lowfat

Nizmo

Member
Jan 24, 2018
101
17
18
37
Are you caching everything in and out?

Normally all the forwarding is done in cache and RAM since disk is too slow.
In my case, the P3608s are substantially faster than any non-PCIe disks, so I use them for pfSense. Using other drives (SATA/SAS) would slow things down considerably. The cache is the DC P3608s.
 

MiniKnight

Well-Known Member
Mar 30, 2012
3,072
973
113
NYC
But is outbound traffic cached too?

The inbound cache makes sense, but for normal pfSense you won't need that much endurance.
 

mstone

Active Member
Mar 11, 2015
505
118
43
46
The inbound cache makes sense
Inbound cache makes basically no sense on normal traffic in 2018. What isn't encrypted is usually uncacheable for other reasons (dynamic, etc) and the added latency hurts a lot more than the limited hit rate helps.
 

Nizmo

Member
Jan 24, 2018
101
17
18
37
Specifically: RAID 6 DC S3510s on the Server 2016 host, DC P3608s for VM storage.

Nothing is specifically set up for caching on pfSense (no squid, etc.) other than the built-in Intel features of the DC drives (which need the RSTe software to work).

If I had to nail it down to a single sentence: NVMe is the way to go.
 

mstone

Active Member
Mar 11, 2015
505
118
43
46
Specifically: RAID 6 DC S3510s on the Server 2016 host, DC P3608s for VM storage.

Nothing is specifically set up for caching on pfSense (no squid, etc.) other than the built-in Intel features of the DC drives (which need the RSTe software to work).

If I had to nail it down to a single sentence: NVMe is the way to go.
If you didn't set up a cache, why do you think you're writing all your network traffic to disk? A SATA SSD will help pfSense boot faster; after that, the disk is basically idle.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,625
2,043
113
In my case, the P3608s are substantially faster than any non-PCIe disks, so I use them for pfSense. Using other drives (SATA/SAS) would slow things down considerably. The cache is the DC P3608s.
There's no doubt the P3608 and NVMe in general are substantially faster than SATA, but I still find it hard to believe you actually need a P3608 for pfSense. This sounds like a case of throwing the fastest drive at it because it's faster, then arguing on the principle of raw drive performance rather than real-life application utilization :(

Have you tested with a normal Intel enterprise SATA SSD, or are you assuming they won't work for you?

People run a lot more than a basic pfSense setup on SATA with 1Gb/1Gb connections with zero issues.
 

Nizmo

Member
Jan 24, 2018
101
17
18
37
There's no doubt the P3608 and NVMe in general are substantially faster than SATA, but I still find it hard to believe you actually need a P3608 for pfSense.
If you didn't set up a cache, why do you think you're writing all your network traffic to disk?
Sure, an SSD is an improvement over HDD/SAS. The traffic is absolutely going to disk. One of the culprits is a public speedtest server: WAN to LAN, no way around it. I can limit who gets to speedtest, but that's another thread :)

NVMe on 10Gb+ yields good results. A single SFP+ connection, or bonded SFP+ connections, can saturate SATA/SAS bandwidth, hence the need for RAID or NVMe for performance reasons.
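Rough line-rate math behind that claim (idealized, assumed numbers):

```python
# Why a single 10GbE link can outrun one SATA SSD: compare the ideal line rate
# to typical device throughput. Round, assumed numbers -- real-world rates are lower.

ten_gbe_mb_s = 10_000 / 8      # ~1250 MB/s ideal payload, ignoring protocol overhead
sata_ssd_mb_s = 550            # practical ceiling of a SATA 6Gb/s SSD
nvme_ssd_mb_s = 2000           # ballpark sequential throughput for a PCIe NVMe drive like the P3608

print(f"10GbE line rate : ~{ten_gbe_mb_s:.0f} MB/s")
print(f"SATA SSD        : ~{sata_ssd_mb_s} MB/s -> ~{ten_gbe_mb_s / sata_ssd_mb_s:.1f} drives striped to keep up")
print(f"NVMe            : ~{nvme_ssd_mb_s} MB/s -> one drive covers it")
```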
 

MiniKnight

Well-Known Member
Mar 30, 2012
3,072
973
113
NYC
So you're serving data from pfSense for speedtest? The NVMe is not for passing traffic from LAN/WAN. Is that right?

I just checked. We have a pfSense that's run 100Gbps constantly for 2 years, and the SSD has under 50GB written. I think most of that was from when we were writing logs directly to the boot media instead of another server, and from a few re-installs the first time we set up the system.

I can see writing a lot to some sort of caching server. I can see it if you're caching or storing data on the pfSense appliance, but I'm still puzzled at writing that much to disk. Even with NVMe, it's too slow to write all of the packet data to disk, right?
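(For anyone who wants to check their own box: most SSDs expose a lifetime-writes counter in SMART, e.g. Total_LBAs_Written on many SATA drives or Data Units Written on NVMe, and converting it to gigabytes is just multiplication. The raw values below are made up for illustration.)

```python
# Convert SMART lifetime-write counters to gigabytes written.
# Attribute names and units vary by vendor; the raw values here are hypothetical.

# Many SATA SSDs report Total_LBAs_Written in 512-byte sectors:
total_lbas_written = 97_000_000          # hypothetical raw value read via smartctl
print(f"SATA: ~{total_lbas_written * 512 / 1e9:.0f} GB written")

# NVMe reports Data Units Written in units of 512,000 bytes (1000 x 512B):
data_units_written = 100_000             # hypothetical raw value
print(f"NVMe: ~{data_units_written * 512_000 / 1e9:.0f} GB written")
```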
 

Nizmo

Member
Jan 24, 2018
101
17
18
37
So you're serving data from pfSense for speedtest? The NVMe is not for passing traffic from LAN/WAN. Is that right?

I just checked. We have a pfSense that's run 100Gbps constantly for 2 years, and the SSD has under 50GB written. I think most of that was from when we were writing logs directly to the boot media instead of another server, and from a few re-installs the first time we set up the system.

I can see writing a lot to some sort of caching server. I can see it if you're caching or storing data on the pfSense appliance, but I'm still puzzled at writing that much to disk. Even with NVMe, it's too slow to write all of the packet data to disk, right?
In one example, data comes in from the WAN and passes through pfSense to internal VM(s) on the LAN, which write it to disk. Performance is key, so NVMe is utilized.