Getting traffic via both 1 Gb and 10 Gb NICs with a FreeNAS and ESXi setup


digity

Member
Jun 3, 2017
I set up an ESXi (6.7 U3) datastore on a FreeNAS NFS share. The connection is 10GbE via a 10GbE switch. When running CrystalDiskMark to benchmark VMs' disk performance, the reads go through each server's gigabit NIC while the writes go through each server's 10GbE NIC. The 10GbE NICs are on a separate subnet: I mounted the NFS share using the FreeNAS server's 10GbE IP address, and on the ESXi side I created a separate port group, virtual switch, and vmkernel NIC for the 10GbE connection. Yet reads still go through each server's gigabit NIC.
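For reference, here's how the vmkernel side can be sanity-checked (vmk1 below stands in for whichever vmkernel backs the 10GbE port group; adjust to match):

Code:
# list vmkernel interfaces and their IPv4 addresses
esxcli network ip interface ipv4 get
# ping the FreeNAS 10GbE IP while forcing the traffic out of the 10GbE vmkernel
vmkping -I vmk1 10.0.200.111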

Any clue why this is happening and how to resolve it?
 

pcmoore

Active Member
Apr 14, 2018
New England, USA
What do the routing tables look like? On the FreeNAS system you should be able to view the routing table with the following command:
# netstat -nr
The same command will likely work on the ESXi host, but in case it doesn't have the legacy network tools installed, the equivalent on a modern Linux system is the `ip` command:
# ip route list
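If neither tool is present on the ESXi host, its native tooling should show the same information (a sketch from memory; these are standard esxcli commands, but output formatting varies by release):

Code:
# dump the IPv4 routing table with ESXi's own CLI
esxcli network ip route ipv4 list
# older equivalent that also ships with ESXi
esxcfg-route -l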
 

digity

Member
Jun 3, 2017
pcmoore said:
What do the routing tables look like?
10.0.100.72 is the gigabit NIC on the ESXi host
10.0.100.111 is the gigabit NIC on the FreeNAS server
10.0.200.72 is the 10 GbE NIC on the ESXi host
10.0.200.111 is the 10 GbE NIC on the FreeNAS server

netstat -nr on FreeNAS:

Code:
Routing tables

Internet:
Destination        Gateway            Flags     Netif Expire
default            10.0.100.1         UGS         em0
10.0.100.0/24      link#3             U           em0
10.0.100.111       link#3             UHS         lo0
10.0.200.0/24      link#1             U         cxgb0
10.0.200.111       link#1             UHS         lo0
10.0.201.0/24      link#2             U         cxgb1
10.0.201.111       link#2             UHS         lo0
10.0.202.0/24      link#6             U        mlxen0
10.0.202.111       link#6             UHS         lo0
127.0.0.1          link#5             UH          lo0
`ip` and `netstat` aren't available on ESXi 6.7 U3, so I used "esxcli network ip connection list":

Code:
Proto  Recv Q  Send Q  Local Address    Foreign Address     State        World ID  CC Algo  World Name
-----  ------  ------  ---------------  ------------------  -----------  --------  -------  ---------------------
tcp         0       0  127.0.0.1:8307   127.0.0.1:35759     ESTABLISHED   2099221  newreno  hostd-IO
tcp         0     943  127.0.0.1:35759  127.0.0.1:8307      ESTABLISHED   2100082  newreno  rhttpproxy-work
tcp         0       0  127.0.0.1:80     127.0.0.1:18343     ESTABLISHED   2099210  newreno  rhttpproxy-IO
tcp         0       0  127.0.0.1:18343  127.0.0.1:80        ESTABLISHED   2117017  newreno  python
tcp         0       0  10.0.100.72:22   10.0.100.164:50318  ESTABLISHED   2098737  newreno  busybox
tcp         0       0  10.0.100.72:902  10.0.100.164:50269  ESTABLISHED   2098737  newreno  busybox
tcp         0       0  127.0.0.1:8307   127.0.0.1:24644     ESTABLISHED   2099221  newreno  hostd-IO
tcp         0       0  127.0.0.1:24644  127.0.0.1:8307      ESTABLISHED   2099717  newreno  vpxa-worker
tcp         0       0  127.0.0.1:8307   127.0.0.1:39698     ESTABLISHED   2099221  newreno  hostd-IO
tcp         0       0  127.0.0.1:39698  127.0.0.1:8307      ESTABLISHED   2099704  newreno  vpxa-worker
tcp         0       0  127.0.0.1:8089   127.0.0.1:10503     ESTABLISHED   2099701  newreno  vpxa-IO
tcp         0       0  127.0.0.1:10503  127.0.0.1:8089      ESTABLISHED   2100077  newreno  rhttpproxy-work
tcp         0       0  10.0.100.72:443  10.0.100.70:35146   ESTABLISHED   2099209  newreno  rhttpproxy-IO
tcp         0       0  127.0.0.1:8089   127.0.0.1:57258     ESTABLISHED   2099702  newreno  vpxa-IO
tcp         0       0  127.0.0.1:57258  127.0.0.1:8089      ESTABLISHED   2099214  newreno  rhttpproxy-work
tcp         0       0  10.0.100.72:443  10.0.100.70:34658   ESTABLISHED   2099210  newreno  rhttpproxy-IO
tcp         0       0  127.0.0.1:8089   127.0.0.1:57454     ESTABLISHED   2099702  newreno  vpxa-IO
tcp         0       0  127.0.0.1:57454  127.0.0.1:8089      ESTABLISHED   2100081  newreno  rhttpproxy-work
tcp         0       0  10.0.100.72:443  10.0.100.70:38062   ESTABLISHED   2099210  newreno  rhttpproxy-IO
tcp         0       0  127.0.0.1:8307   127.0.0.1:60445     ESTABLISHED   2099222  newreno  hostd-IO
tcp         0       0  127.0.0.1:60445  127.0.0.1:8307      ESTABLISHED   2100078  newreno  rhttpproxy-work
tcp         0       0  10.0.100.72:443  10.0.100.80:60571   ESTABLISHED   2099209  newreno  rhttpproxy-IO
tcp         0       0  10.0.100.72:902  10.0.100.130:63615  ESTABLISHED   2098737  newreno  busybox
tcp         0       0  127.0.0.1:8307   127.0.0.1:44148     ESTABLISHED   2099221  newreno  hostd-IO
tcp         0       0  127.0.0.1:44148  127.0.0.1:8307      ESTABLISHED   2100082  newreno  rhttpproxy-work
tcp         0       0  10.0.100.72:443  10.0.100.80:52835   ESTABLISHED   2099210  newreno  rhttpproxy-IO
tcp         0       0  0.0.0.0:8000     0.0.0.0:0           LISTEN        2098754  newreno  vmotionServer
tcp         0       0  0.0.0.0:8300     0.0.0.0:0           LISTEN        2098726  newreno  FTCptListener
tcp         0       0  10.0.100.72:958  10.0.200.111:2049   ESTABLISHED   2111562  newreno  vmm0:testo-win10-jhjh
tcp         0       0  10.0.100.72:22   10.0.100.130:62029  ESTABLISHED   2098737  newreno  busybox
tcp         0       0  127.0.0.1:12000  0.0.0.0:0           LISTEN        2099718  newreno  vpxa-worker
tcp         0       0  127.0.0.1:8307   127.0.0.1:11216     ESTABLISHED   2099221  newreno  hostd-IO
tcp         0       0  127.0.0.1:11216  127.0.0.1:8307      ESTABLISHED   2099899  newreno  vpxa-worker
tcp         0       0  127.0.0.1:8307   127.0.0.1:15647     CLOSED        2099221  newreno  hostd-IO
tcp         0       0  127.0.0.1:12001  0.0.0.0:0           LISTEN        2099183  newreno  hostd
tcp         0       0  127.0.0.1:8307   0.0.0.0:0           LISTEN        2099183  newreno  hostd
tcp         0       0  127.0.0.1:8309   0.0.0.0:0           LISTEN        2099183  newreno  hostd
tcp         0       0  127.0.0.1:8089   0.0.0.0:0           LISTEN        2099686  newreno  vpxa
tcp         0       0  10.0.200.72:427  0.0.0.0:0           LISTEN        2099510  newreno
tcp         0       0  10.0.100.72:427  0.0.0.0:0           LISTEN        2099510  newreno
tcp         0       0  127.0.0.1:427    0.0.0.0:0           LISTEN        2099510  newreno
tcp         0       0  127.0.0.1:549    0.0.0.0:0           LISTEN        2099203  newreno  rhttpproxy
tcp         0       0  0.0.0.0:443      0.0.0.0:0           LISTEN        2099203  newreno  rhttpproxy
tcp         0       0  0.0.0.0:80       0.0.0.0:0           LISTEN        2099203  newreno  rhttpproxy
tcp         0       0  127.0.0.1:8303   0.0.0.0:0           LISTEN        2099131  newreno  hostdCgiServer
tcp         0       0  0.0.0.0:9080     0.0.0.0:0           LISTEN        2098962  newreno  ioFilterVPServer
tcp         0       0  0.0.0.0:22       0.0.0.0:0           LISTEN        2098737  newreno  busybox
tcp         0       0  0.0.0.0:902      0.0.0.0:0           LISTEN        2098737  newreno  busybox
udp         0       0  0.0.0.0:427      0.0.0.0:0                         2099510
udp         0       0  0.0.0.0:8301     0.0.0.0:0                         2097743
udp         0       0  0.0.0.0:8302     0.0.0.0:0                         2097743
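If I'm reading the dump right, the relevant line is the 10.0.100.72:958 -> 10.0.200.111:2049 session: the NFS connection to the 10GbE target is being sourced from the gigabit vmkernel's address, which would explain the asymmetric traffic. To isolate just the NFS sessions from that dump (assuming the standard NFS port 2049):

Code:
# show only the NFS (port 2049) connections
esxcli network ip connection list | grep 2049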
 

Spartacus

Well-Known Member
May 27, 2019
Austin, TX
Sounds like you might need to set the network path selection policy in VMware to Fixed instead of Round Robin or Most Recently Used.
 

digity

Member
Jun 3, 2017
Spartacus said:
Sounds like you might need to set the network path selection policy in VMware to Fixed.

I can't find anywhere to change path policies, and I don't think I'm using multiple paths anyway (it's NFS).
 

Spartacus

Well-Known Member
May 27, 2019
Austin, TX
Ah, NFS can do multipath too, but you have to specify two target points, which I assume you didn't do.
You set up the NFS share as 10.0.200.111:/share, correct? Or did you use a DNS name?
I'm assuming 10.0.100.0/24 can reach 10.0.200.0/24? Does it need to? You could segregate your 10G storage traffic on the 10.0.200.0/24 VLAN so it has no choice but to read/write over the 10G connection (if nothing else, to test); see the sketch below.
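One blunt way to test that from the ESXi CLI is to confirm which server address the datastore is actually mounted against and, if needed, remount it by the 10GbE address only (the datastore and export names below are placeholders):

Code:
# confirm the server address the datastore is mounted against
esxcli storage nfs list
# remount against the 10GbE address only (names are examples)
esxcli storage nfs remove -v freenas-ds
esxcli storage nfs add -H 10.0.200.111 -s /mnt/tank/vmstore -v freenas-ds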
 

Spartacus

Well-Known Member
May 27, 2019
Austin, TX
In reply to "Doesn't the NFS multipath only work for NFS 4.1?":
4.1 is the only version that supports it natively; the version wasn't specified, and 6.7 defaults to 4.1, so ¯\_(ツ)_/¯.
With NFS 3 you can technically still use the network stack to configure multiple connections to the storage targets.
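For completeness, an NFS 4.1 multipath mount on 6.7 looks roughly like this when the server exposes two addresses (addresses and names here are illustrative):

Code:
# NFS 4.1 datastore using two server IPs for session trunking
esxcli storage nfs41 add -H 10.0.200.111,10.0.201.111 -s /mnt/tank/vmstore -v freenas-ds41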
 