Need help with pfSense Site-Site OpenVPN configuration issue.


nthu9280

Well-Known Member
Background: I'm planning to put a small AIO box with 4 drives off-site at my daughter's place. I started configuring the unit at home with ESXi 6.7: a VM for pfSense, a napp-it VM, and a small Ubuntu VM for testing. I followed the steps in Netgate's docs as well as other guides. I got it mostly working, but it has issues passing traffic in.

Primary: 10.16.32.0/24
Tunnel: 10.10.9.0/24 (this could probably be a /30 since only two IPs are used, but I'm not sure it matters)
Secondary: 10.10.10.0/24 (behind pfSense)
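
For reference, the relevant OpenVPN fields are roughly as below (just a sketch based on the subnets above; which side acts as server vs. client is an assumption):

Code:
# Primary side (OpenVPN server, LAN 10.16.32.0/24)
IPv4 Tunnel Network:     10.10.9.0/24
IPv4 Remote network(s):  10.10.10.0/24   # Secondary LAN

# Secondary side (OpenVPN client, LAN 10.10.10.0/24)
IPv4 Tunnel Network:     10.10.9.0/24
IPv4 Remote network(s):  10.16.32.0/24   # Primary LAN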

iperf3 traffic one way (Secondary -> Primary) appears to be fine, but (Primary -> Secondary) is where I'm having issues. See the output below.


Code:
$ iperf3 --format m --reverse -c 10.10.10.51
Connecting to host 10.10.10.51, port 5201
Reverse mode, remote host 10.10.10.51 is sending
[  4] local 10.16.32.195 port 61528 connected to 10.10.10.51 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec   108 MBytes   907 Mbits/sec               
[  4]   1.00-2.00   sec   105 MBytes   880 Mbits/sec               
[  4]   2.00-3.00   sec   109 MBytes   915 Mbits/sec               
[  4]   3.00-4.00   sec   111 MBytes   930 Mbits/sec               
[  4]   4.00-5.00   sec   110 MBytes   925 Mbits/sec               
[  4]   5.00-6.00   sec   111 MBytes   930 Mbits/sec               
[  4]   6.00-7.00   sec   110 MBytes   926 Mbits/sec               
[  4]   7.00-8.00   sec   110 MBytes   925 Mbits/sec               
[  4]   8.00-9.00   sec   110 MBytes   924 Mbits/sec               
[  4]   9.00-10.00  sec   111 MBytes   927 Mbits/sec               
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  1.07 GBytes   921 Mbits/sec  1335             sender
[  4]   0.00-10.00  sec  1.07 GBytes   919 Mbits/sec                  receiver

iperf Done.

$ iperf3 --format m -c 10.10.10.51
Connecting to host 10.10.10.51, port 5201
[  4] local 10.16.32.195 port 61532 connected to 10.10.10.51 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec   201 KBytes  1.64 Mbits/sec    2   1.41 KBytes     
[  4]   1.00-2.00   sec  0.00 Bytes  0.00 Mbits/sec    1   1.41 KBytes     
[  4]   2.00-3.00   sec  0.00 Bytes  0.00 Mbits/sec    0   1.41 KBytes     
[  4]   3.00-4.00   sec  0.00 Bytes  0.00 Mbits/sec    1   1.41 KBytes     
[  4]   4.00-5.00   sec  0.00 Bytes  0.00 Mbits/sec    0   1.41 KBytes     
[  4]   5.00-6.00   sec  0.00 Bytes  0.00 Mbits/sec    0   1.41 KBytes     
[  4]   6.00-7.00   sec  0.00 Bytes  0.00 Mbits/sec    1   1.41 KBytes     
[  4]   7.00-8.00   sec  0.00 Bytes  0.00 Mbits/sec    0   1.41 KBytes     
[  4]   8.00-9.00   sec  0.00 Bytes  0.00 Mbits/sec    0   1.41 KBytes     
[  4]   9.00-10.00  sec  0.00 Bytes  0.00 Mbits/sec    0   1.41 KBytes     
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec   201 KBytes  0.16 Mbits/sec    5             sender
[  4]   0.00-10.00  sec  65.0 KBytes  0.05 Mbits/sec                  receiver

iperf Done.


Traffic between the test VMs behind the Secondary pfSense is fine, as you would expect. SSH connections into the VMs from the Primary drop even when I'm actively using them. I also ran iperf between the two pfSense boxes and got similar results.

Code:
~~

Server PFS01 -> Client PFS02

iperf 3.7
FreeBSD pfs01-t610.localdomain 11.3-STABLE FreeBSD 11.3-STABLE #243 abf8cba50ce(RELENG_2_4_5): Tue Jun  2 17:53:37 EDT 2020     root@buildbot1-nyi.netgate.com:/build/ce-crossbuild-245/obj/amd64/YNx4Qq3j/build/ce-crossbuild-245/sources/FreeBSD-src/sys/pfSense amd64
Control connection MSS 1336
Time: Tue, 03 Aug 2021 18:57:55 UTC
Connecting to host 10.10.10.1, port 5201
      Cookie: ln7tkyit5jvpgwkil53hu34dvkfjytvn4sc6
      TCP MSS: 1336 (default)
[  5] local 10.10.9.1 port 32824 connected to 10.10.10.1 port 5201
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 10 second test, tos 0
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  12.9 MBytes   109 Mbits/sec    8    114 KBytes      
[  5]   1.00-2.00   sec  12.5 MBytes   105 Mbits/sec    1    153 KBytes      
[  5]   2.00-3.00   sec  12.4 MBytes   104 Mbits/sec    6   99.7 KBytes      
[  5]   3.00-4.00   sec  12.5 MBytes   105 Mbits/sec    2    141 KBytes      
[  5]   4.00-5.00   sec  12.5 MBytes   104 Mbits/sec    1    174 KBytes      
[  5]   5.00-6.00   sec  12.4 MBytes   104 Mbits/sec    4    135 KBytes      
[  5]   6.00-7.00   sec  12.5 MBytes   105 Mbits/sec    1    168 KBytes      
[  5]   7.00-8.00   sec  12.4 MBytes   104 Mbits/sec    2    123 KBytes      
[  5]   8.00-9.00   sec  12.5 MBytes   105 Mbits/sec    1    160 KBytes      
[  5]   9.00-10.00  sec  12.5 MBytes   105 Mbits/sec    3    109 KBytes      
- - - - - - - - - - - - - - - - - - - - - - - - -
Test Complete. Summary Results:
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec   125 MBytes   105 Mbits/sec   29             sender
[  5]   0.00-10.30  sec   125 MBytes   102 Mbits/sec                  receiver
CPU Utilization: local/sender 14.3% (2.5%u/11.9%s), remote/receiver 1.5% (0.5%u/1.0%s)
snd_tcp_congestion newreno
rcv_tcp_congestion newreno


~~~



Client PFS02 -> Server PFS01

iperf 3.9
FreeBSD pfs02.localdomain 12.2-STABLE FreeBSD 12.2-STABLE 1b709158e581(RELENG_2_5_0) pfSense amd64
Control connection MSS 1460
Time: Tue, 03 Aug 2021 19:46:24 UTC
Connecting to host 10.16.32.1, port 5201
      Cookie: l7fwtraga7wixu3ktdr4jzfmlpaeqgr5avvv
      TCP MSS: 1460 (default)
[  5] local 10.16.32.197 port 17932 connected to 10.16.32.1 port 5201
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 10 second test, tos 0
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  86.1 MBytes   722 Mbits/sec    0    111 KBytes      
[  5]   1.00-2.00   sec  85.5 MBytes   717 Mbits/sec    0    111 KBytes      
[  5]   2.00-3.00   sec  84.9 MBytes   712 Mbits/sec    0    111 KBytes      
[  5]   3.00-4.00   sec  84.5 MBytes   709 Mbits/sec    0    111 KBytes      
[  5]   4.00-5.00   sec  84.4 MBytes   708 Mbits/sec    0    111 KBytes      
[  5]   5.00-6.00   sec  84.9 MBytes   712 Mbits/sec    0    111 KBytes      
[  5]   6.00-7.00   sec  84.8 MBytes   712 Mbits/sec    0    111 KBytes      
[  5]   7.00-8.00   sec  83.7 MBytes   702 Mbits/sec    0    113 KBytes      
[  5]   8.00-9.00   sec  82.7 MBytes   694 Mbits/sec    0    113 KBytes      
[  5]   9.00-10.00  sec  84.7 MBytes   710 Mbits/sec    0    113 KBytes      
- - - - - - - - - - - - - - - - - - - - - - - - -
Test Complete. Summary Results:
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec   846 MBytes   710 Mbits/sec    0             sender
[  5]   0.00-10.00  sec   846 MBytes   710 Mbits/sec                  receiver
CPU Utilization: local/sender 9.2% (2.3%u/7.0%s), remote/receiver 2.8% (0.4%u/2.4%s)
snd_tcp_congestion newreno
rcv_tcp_congestion newreno

iperf Done.

~~~

I think the ESXi network config is correct.





I do have a concern about the gateway showing as "pending" on the Secondary pfSense.

[screenshots: Secondary gateway status]



Primary seems to be OK.

[screenshots: Primary gateway status]
 

nthu9280

Well-Known Member
@RTM - Thanks for the pointer. That did not resolve the issue. I disabled HW Checksum Offload and also changed the virtual NICs to e1000 on the VM. Traffic is still primarily one-way. The weird thing is that it appears to let small packets into the Secondary; ping works fine in both directions.

[screenshot attached]
 

RTM

Well-Known Member
nthu9280 said:
@RTM - Thanks for the pointer. That did not resolve the issue. I disabled HW Checksum Offload and also changed the virtual NICs to e1000 on the VM. Traffic is still primarily one-way. The weird thing is that it appears to let small packets into the Secondary; ping works fine in both directions.

In my experience, the typical result of having offloads enabled is that larger packets are not passed while smaller ones are, so it sure sounds like that is the reason. It might be worth looking into whether there are more offload options to disable.
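
If you want to see which offloads are actually still active, you can check from the pfSense shell (vmx0 is just a placeholder for your interface name), something like:

Code:
# Show the capability flags; TXCSUM, RXCSUM, TSO4/TSO6 and LRO in the
# options= line mean those offloads are still enabled.
ifconfig vmx0

# Turn them off by hand for a quick, non-persistent test (the checkboxes
# under System > Advanced > Networking make it permanent).
ifconfig vmx0 -txcsum -rxcsum -tso -lro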

Since you are saying that small packets are passing, perhaps it's an issue with packet fragmentation; perhaps the "do not fragment" flag is set for some reason. You could try lowering the MTU to something like 1300. Here is a thread on the Netgate forum that discusses how to do this (I think - I just skimmed it):
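
One quick way to test the fragmentation theory is to ping across the tunnel with the don't-fragment bit set and increase the payload until it stops getting through (addresses taken from your earlier posts; the flags differ between Linux and FreeBSD):

Code:
# From a Linux VM on the Primary LAN: 1272 bytes of payload + 28 bytes of
# IP/ICMP headers = a 1300-byte packet that is not allowed to fragment.
ping -M do -s 1272 10.10.10.51

# Roughly the same test from the pfSense shell (FreeBSD); -D sets don't-fragment.
ping -D -s 1272 10.10.10.51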

Also, there are a few guides on their website, like this one; perhaps there is something in them that can help?
 

nthu9280

Well-Known Member
RTM said:
Also, there are a few guides on their website, like this one; perhaps there is something in them that can help?
Thanks, I'll review the relevant ones and report back. Just for grins, I added another NIC to the napp-it VM in the WAN port group and did the initial snapshot transfer over it, bypassing the pfSense / OpenVPN S2S tunnel. It worked fine. Now I need to get the same working via pfSense.
 

RTM

Well-Known Member
Oh, and you may want to install the Open VM Tools; though I doubt it will solve this particular problem, it is best practice.
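
On pfSense that's the Open-VM-Tools package in the GUI package manager; from the shell it should be something like this (package name assumed):

Code:
pkg install pfSense-pkg-Open-VM-Tools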
 

nthu9280

Well-Known Member
Open VM Tools were already installed in the pfSense VM. The pfSense version is the stable 2.5.x on both sides. I reduced mssfix to 1300. I've tried to troubleshoot on and off without resolution, and checked the config step by step against the Netgate forum posts / recipes mentioned above. The tunnel is up and ping works both ways, but iperf3 still exhibits the same behavior as in my OP. Just for grins, I added a vNIC in the WAN port group on the napp-it VM and uploads work as expected (bypassing the OpenVPN S2S); disable that interface and uploads fail going through the OpenVPN S2S. Not sure what else to check.
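
For anyone comparing configs, the mssfix change boils down to roughly this OpenVPN directive on both endpoints:

Code:
# Clamp the TCP MSS announced across the tunnel so the encapsulated
# packets stay under the path MTU.
mssfix 1300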