Hi,
I am setting up a test scenario for Chelsio vs. MLX NIC (RDMA) tests.
I have an ESXi 7 box with an MLX and a Chelsio adapter, assigned to a dvSwitch with 4 port groups (VLANs 10/11 on Uplinks 1/2, VLANs 20/21 on Uplinks 3/4).
The NFS server is a TNC 12 box, also with a Chelsio and an MLX adapter, providing a couple of NFS shares on 4 IPs (VLANs 10/11 for the MLX, VLANs 20/21 for the Chelsio).
I then mounted the NFS shares as follows:
esxcli storage nfs41 add -H 10.0.10.49,10.0.11.49 -s /mnt/sas/fn12_mlx_async_64k -v fn12_mlx_async_64k
esxcli storage nfs41 add -H 10.0.10.49,10.0.11.49 -s /mnt/sas/fn12_mlx_sync_64k -v fn12_mlx_sync_64k
esxcli storage nfs41 add -H 10.0.20.49,10.0.21.49 -s /mnt/sas/fn12_chelsio_async_64k -v fn12_chelsio_async_64k
esxcli storage nfs41 add -H 10.0.20.49,10.0.21.49 -s /mnt/sas/fn12_chelsio_sync_64k -v fn12_chelsio_sync_64k
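After mounting, the resulting datastores and their resolved server IPs can be double-checked on the host; something like this should list them (assuming ESXi 7's nfs41 namespace, same as the add commands above):

```shell
# List all NFSv4.1 datastores with their server host lists,
# share paths, and accessible/mounted state
esxcli storage nfs41 list
```

The Host(s) column should show both server IPs per volume, which confirms the multipath spec was accepted, though not which uplink the traffic actually takes.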
In theory, that should bind each NFS-based datastore to the vmkernel ports in that IP range, which are bound to a port group with the matching VLAN, which is bound to an uplink backed by either the Chelsio or the MLX card.
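One way to sanity-check that chain on the ESXi side is to confirm which vmk sits in each subnet and then force an ICMP probe out of a specific vmk (vmkping's -I flag pins the source interface; the vmk names below are assumptions for my setup):

```shell
# Show each vmkernel interface with its IPv4 address and netmask
esxcli network ip interface ipv4 get

# Probe the MLX-side server IP through the vmk that should carry it
# (replace vmk1 with whichever vmk sits in VLAN 10)
vmkping -I vmk1 10.0.10.49

# And the Chelsio-side server IP through its vmk in VLAN 20
vmkping -I vmk3 10.0.20.49
```

If a vmkping only succeeds through the expected vmk, at least the VLAN/subnet plumbing is right and the problem is further down, at NFS session or teaming level.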
I then moved a test VM onto one of those datastores and ran fio in it.
Now, no matter which datastore I run a fio test on, I see that all 4 NIC ports on the TNC box receive traffic, and I don't understand why.
My only idea so far is that it might have to do with the load-balancing option on the dvSwitch, but that *should* honor unused NICs (theoretically)...
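To see which uplink the NFS traffic actually leaves on, regardless of what the teaming policy claims, esxtop's network view or a packet capture pinned to one uplink should tell; a sketch (the vmnic name is an assumption for my setup):

```shell
# Live per-vmnic throughput: run esxtop and press 'n' for the network view
esxtop

# Or capture a few outbound packets on a single uplink while fio runs;
# if this uplink should be unused for the MLX datastore but still shows
# NFS traffic, the teaming policy is not honoring the intended binding
pktcap-uw --uplink vmnic2 --dir 1 -c 20
```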
I tried various options and have currently settled on this one, but none of them helped at all.
I am confused, to say the least.
I'll try denying access to the MLX shares from the Chelsio IPs in TNC (and vice versa), but IMHO it should work without that workaround...