All,
I have three Proxmox/Debian multi-homed servers. They each have one 1GbE, two 10GbE, and two 40Gb InfiniBand adapters.
I have added four entries to /etc/iproute2/rt_tables:
2 rt10G
3 rt10G02
4 rt40G01
5 rt40G02
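For what it's worth, here is the kind of sanity check I would expect to confirm the tables and rules actually took effect (table names and addresses are the ones from this post; run as root):

```shell
# Confirm the custom table names are registered
grep -E 'rt10G|rt40G' /etc/iproute2/rt_tables

# List the policy rules and the contents of one custom table
ip rule show
ip route show table rt10G

# Ask the kernel which route it would pick for a packet
# sourced from the 10G address toward a peer on that subnet
ip route get 172.20.10.17 from 172.20.10.15
```

If `ip route get ... from ...` does not show the intended device and gateway, the policy rules are not matching the way I think they are.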
I have set the routes in /etc/network/interfaces for each adapter (only one shown here, but each is set up the same way, with the address, device, and table changed per adapter):
auto vmbr1
iface vmbr1 inet static
address 172.20.10.15
netmask 255.255.255.0
bridge_ports eno1
bridge_stp off
bridge_fd 0
mtu 9000
post-up ip route add 172.20.10.0/24 dev vmbr1 src 172.20.10.15 table rt10G
post-up ip route add default via 172.20.10.1 dev vmbr1 table rt10G
post-up ip rule add from 172.20.10.15/24 table rt10G
post-up ip rule add to 172.20.10.15/24 table rt10G
# 10GbE Primary Network
This is an example of what I am trying to do (don't get hung up on the exact IP addresses, as it is an example only):
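One way I know to confirm that traffic is really leaving the interface I expect (rather than trusting the firewall graphs) is to watch the device directly while generating traffic with an explicit source address, so the policy rules are exercised (interface name and addresses are from the example stanza above):

```shell
# On the server, watch the 10G bridge for ICMP
tcpdump -ni vmbr1 icmp

# From the other host, ping with the source address pinned
# so the from-address policy rule has to match
ping -c 3 -I 172.20.10.15 172.20.10.17
```

If the replies come back on a different interface than the requests went out, that asymmetry would explain TCP-based protocols (NFS, SSH) hanging while ping still works.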
The problem I am running into is that while normal network traffic "seems" to be working correctly, I cannot cross-mount directories between the servers.
On each server I have a storage array mounted at "/pve-ischeme/{servername}/data"
I export the same directory in the /etc/exports file on each server to each subnet, like below:
/pve-ischeme/r720xd-02/data 172.20.1.0/24(rw,fsid=0,sync,no_root_squash,crossmnt,no_subtree_check)
/pve-ischeme/r720xd-02/data 172.20.10.0/24(rw,fsid=1,sync,no_root_squash,crossmnt,no_subtree_check)
/pve-ischeme/r720xd-02/data 172.20.11.0/24(rw,fsid=2,sync,no_root_squash,crossmnt,no_subtree_check)
/pve-ischeme/r720xd-02/data 172.20.40.0/24(rw,fsid=3,sync,no_root_squash,crossmnt,no_subtree_check)
/pve-ischeme/r720xd-02/data 172.20.41.0/24(rw,fsid=4,sync,no_root_squash,crossmnt,no_subtree_check)
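After editing /etc/exports, the export list has to be reloaded, and it can be verified from both ends; a quick sketch using the server address from the fstab below (run as root on the NFS server, then on a client):

```shell
# On the server: re-read /etc/exports without restarting NFS
exportfs -ra

# Show what is currently exported, with the effective options per network
exportfs -v

# From a client: list the exports as the server advertises them
showmount -e 172.20.10.17
```

One thing worth noting with NFSv4 specifically: fsid=0 marks the pseudo-filesystem root, so exporting the same path with fsid=0 for one subnet and fsid=1..4 for the others means clients on different subnets may see a different export tree for the same directory.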
In my /etc/fstab on a different server, I try to mount the same directory from one server to five mount points on the other servers, using the static route for each, like below:
172.20.10.17:/pve-ischeme/r720xd-02/data /pve-ischeme/r720xd-02/data nfs4 rw,soft,intr,rsize=8192,wsize=8192,timeo=600,retrans=5
172.20.10.17:/pve-ischeme/r720xd-02/data /pve-ischeme/r720xd-02/10G01 nfs4 rw,soft,intr,rsize=8192,wsize=8192,timeo=600,retrans=5
172.20.11.17:/pve-ischeme/r720xd-02/data /pve-ischeme/r720xd-02/10G02 nfs4 rw,soft,intr,rsize=8192,wsize=8192,timeo=600,retrans=5
172.20.41.17:/pve-ischeme/r720xd-02/data /pve-ischeme/r720xd-02/40G01 nfs4 rw,soft,intr,rsize=8192,wsize=8192,timeo=600,retrans=5
172.20.42.17:/pve-ischeme/r720xd-02/data /pve-ischeme/r720xd-02/40G02 nfs4 rw,soft,intr,rsize=8192,wsize=8192,timeo=600,retrans=5
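When mount -a succeeds but a mount point later hangs, mounting one entry by hand with verbose output usually shows where it stalls; a sketch using one server address and path from the fstab above (the /mnt/test mount point is just a placeholder):

```shell
# Try a single mount by hand with verbose output
mount -v -t nfs4 -o rw,soft,timeo=600,retrans=5 \
    172.20.11.17:/pve-ischeme/r720xd-02/data /mnt/test

# While an ls into the mount hangs, check whether the TCP connection
# was even established, and which local source address the client picked
ss -tn dst 172.20.11.17
```

Two asides on the options shown in the fstab: `intr` has been a no-op since kernel 2.6.25, and rsize/wsize of 8192 is far below what NFSv4 negotiates on its own, so both could simply be dropped.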
The mount -a command doesn't throw any errors, but when I navigate to the mounted folders, I can only access the mount that uses the default route. When I try to ls into any of the other directories, it just hangs. I can ping across the static routes, and the VMs/containers on each server seem to communicate properly. pfSense is showing traffic across the three Ethernet routes.

InfiniBand cannot be bridged, so I was planning to do OS-level direct mounts and leverage the directories at the OS level. NFS seems unable to find anything that is not on the default 1GbE route, and SSH cannot connect when I use the -b option to force it onto a specific static route.
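On the ssh -b symptom: -b only sets the local source address; the kernel still has to have a policy rule that routes packets from that source out the matching interface, and the far end has to route its replies back the same way. Running SSH verbosely while watching the wire narrows down which leg fails. A sketch, assuming 172.20.40.15/172.20.40.17 are the local/remote 40G addresses and ib0 is the IPoIB device name (both are my assumptions, not from the post):

```shell
# Force the source address and see how far the handshake gets
ssh -vvv -b 172.20.40.15 root@172.20.40.17 true

# In another terminal: does the SYN leave the intended interface,
# and does a SYN/ACK ever come back on it?
tcpdump -ni ib0 'tcp port 22'
```

If the SYN goes out but no SYN/ACK returns on that interface, the problem is the return path on the remote server, not the client-side binding.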
My entire premise here is to have the storage in each server mounted on the other servers across multiple static routes, so that I can share all the local storage across the cluster and segregate traffic across the five subnets to utilize their inherent speeds. Having mounts like this would presumably let me attach the directories into my containers and VMs, reducing bottlenecks while keeping traffic isolated.
As an example, I want all traffic heading out to the WAN on my 1GbE subnet, all native inter-VM/container traffic on my two 10GbE subnets, and all backend data traffic between servers, VMs, and containers on the two 40Gb InfiniBand subnets.
From everything I have been reading and testing, this should be doable; I just don't know why it isn't working. I would appreciate any help in understanding the following:
1) Can I segregate traffic over static routes and leverage the different speeds that come with the various adapters?
1a) Have I done that correctly?
2) Can I share a directory from one server over multiple static routes?
2a) Have I done that correctly?
3) Can I mount a single directory from a remote server to multiple locations on a different server using different static routes?
4) Why does my routing from my VM's and containers seem to work but I cannot navigate the mounted folders across any route other than the default route?
5) Why when using SSH I cannot bind traffic across any route but the default route?
I know these are advanced topics, and while I have learned a lot on the internet, I still have more to learn on this subject. Again, ANY ASSISTANCE would be appreciated.