Hello. I'm having issues with a remote user being able to access the server (.203) when connecting over VPN. The server is permanently connected to a third-party VPN provider, which is used for all external access. That interface holds the default route (0.0.0.0 via tun1).
Internal clients A and B can access the server without issue. A and B do not use any VPN; their default route is to the router and out to the ISP.
When a remote user connects to the router via VPN, they acquire an IP in the same subnet as everything else. The idea was that with a same-subnet IP, routing would be simple. But the remote user cannot ping/ssh/smb to the server. The remote user can ping/ssh to A and B, just not to the server.
I suspect this has something to do with routing priority on the server, but I'm not sure how to handle that. The server already has a route for the LAN, but for whatever reason it isn't used for the remote client's traffic. I don't know how to confirm this, but my suspicion is that packets from the remote user do reach the server, while the response packets go out via tun1 instead of eth0, where they belong.
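In case it's useful, this is how I was planning to test that suspicion (eth0, tun1, and .203 are from my setup; 192.168.1.250 is just a placeholder for whatever address the VPN hands the remote client):

```shell
# On the server: confirm the remote client's packets actually arrive on eth0
tcpdump -ni eth0 host 192.168.1.250

# In a second terminal: see whether the replies leak out the VPN interface
tcpdump -ni tun1 host 192.168.1.250

# Ask the kernel which route it would pick for a reply to that client
ip route get 192.168.1.250
```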
What sort of network config do I need so that the remote user can reach the server while connected to the LAN via VPN?
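For reference, the only idea I've come up with so far is source-based policy routing, so that anything sourced from the server's LAN address ignores the tun1 default route. Roughly like this, where 192.168.1.203 is the server, 192.168.1.1 is the router, and table 100 is an arbitrary table number; I'm not at all sure this is the right approach:

```shell
# Traffic sourced from the server's LAN address consults table 100
ip rule add from 192.168.1.203 table 100

# Table 100: LAN is on eth0, and its default route points at the router,
# not at the VPN tunnel
ip route add 192.168.1.0/24 dev eth0 table 100
ip route add default via 192.168.1.1 table 100
```

My understanding is that locally originated traffic would still pick the tun1 default from the main table (the source address isn't set yet when the rule is evaluated), so the server's own external access would stay on the VPN, while replies to the remote client would go back out eth0. Is that right, or is there a cleaner way?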