Hi,
So I'm trying to set up a new OmniOS 151036 VM on a new ESXi host.
I'm currently using the Solaris vmxnet3 driver from the VMware Tools 10.3.10 ISO, but I was using open-vm-tools previously. I'm not seeing much difference between the two; I only switched to see what would happen, since I'm seeing these slow incoming iperf3 speeds.
I have processor affinity set to the last 4 cores of the VM host, and no other VMs use these cores (they all have their own affinity set to different cores).
Iperf3 score OUT:
Code:
[root@hedgehoggrifter:/kernel/drv] $ iperf3 -c 192.168.1.38
Connecting to host 192.168.1.38, port 5201
[ 4] local 192.168.1.52 port 56504 connected to 192.168.1.38 port 5201
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-1.00 sec 1.02 GBytes 8.79 Gbits/sec
[ 4] 1.00-2.00 sec 1.10 GBytes 9.41 Gbits/sec
[ 4] 2.00-3.00 sec 1.09 GBytes 9.36 Gbits/sec
[ 4] 3.00-4.00 sec 1.09 GBytes 9.38 Gbits/sec
[ 4] 4.00-5.00 sec 1.09 GBytes 9.40 Gbits/sec
[ 4] 5.00-6.00 sec 1.09 GBytes 9.34 Gbits/sec
[ 4] 6.00-7.00 sec 1.09 GBytes 9.41 Gbits/sec
[ 4] 7.00-8.00 sec 1.09 GBytes 9.34 Gbits/sec
[ 4] 8.00-9.00 sec 1.06 GBytes 9.08 Gbits/sec
[ 4] 9.00-10.00 sec 1.09 GBytes 9.38 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-10.00 sec 10.8 GBytes 9.29 Gbits/sec sender
[ 4] 0.00-10.00 sec 10.8 GBytes 9.29 Gbits/sec receiver
Iperf3 IN:
Code:
[root@hedgehoggrifter:/kernel/drv] $ iperf3 -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 192.168.1.38, port 49416
[ 5] local 192.168.1.52 port 5201 connected to 192.168.1.38 port 49418
[ ID] Interval Transfer Bandwidth
[ 5] 0.00-1.00 sec 530 MBytes 4.45 Gbits/sec
[ 5] 1.00-2.00 sec 413 MBytes 3.46 Gbits/sec
[ 5] 2.00-3.00 sec 398 MBytes 3.33 Gbits/sec
[ 5] 3.00-4.00 sec 401 MBytes 3.36 Gbits/sec
[ 5] 4.00-5.00 sec 399 MBytes 3.35 Gbits/sec
[ 5] 5.00-6.00 sec 395 MBytes 3.31 Gbits/sec
[ 5] 6.00-7.00 sec 396 MBytes 3.32 Gbits/sec
[ 5] 7.00-8.00 sec 394 MBytes 3.31 Gbits/sec
[ 5] 8.00-9.00 sec 394 MBytes 3.31 Gbits/sec
[ 5] 9.00-10.00 sec 392 MBytes 3.29 Gbits/sec
[ 5] 10.00-10.01 sec 89.1 KBytes 119 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth
[ 5] 0.00-10.01 sec 0.00 Bytes 0.00 bits/sec sender
[ 5] 0.00-10.01 sec 4.02 GBytes 3.45 Gbits/sec receiver
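To help narrow down where the inbound/outbound asymmetry lives, it may be worth testing both directions from the same side, and with parallel streams (both are stock iperf3 options; 192.168.1.38 is the peer from the runs above):

```shell
# Reverse mode: the server still sits on the other box, but data flows
# server -> client, so one client can exercise both directions.
iperf3 -c 192.168.1.38 -R

# Parallel streams: if 4 streams together reach line rate while a
# single stream does not, a per-connection limit (e.g. the TCP window)
# is more likely than a driver/ring problem.
iperf3 -c 192.168.1.38 -P 4
```

If -P 4 gets the inbound total back near 9 Gbit/s, that points at per-connection buffering rather than the vmxnet3 rings.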
As I noticed in a post I made back in 2018, /kernel/drv/vmxnet3s.conf differs between open-vm-tools and vmware-tools, so I tried increasing my Rx and Tx ring sizes and my RxBufPoolLimit.
Here's my vmxnet3s.conf:
Code:
[avery@hedgehoggrifter:/kernel/drv] $ cat vmxnet3s.conf
# Driver.conf(4) file for VMware Vmxnet Generation 3 adapters.
# TxRingSize --
#
# Tx ring size for each vmxnet3s# adapter. Must be a multiple of 32.
#
# Minimum value: 32
# Maximum value: 4096
#
TxRingSize=4096,4096,4096,4096,4096,4096,4096,4096,4096,4096;
# RxRingSize --
#
# Rx ring size for each vmxnet3s# adapter. Must be a multiple of 32.
#
# Minimum value: 32
# Maximum value: 4096
#
RxRingSize=4096,4096,4096,4096,4096,4096,4096,4096,4096,4096;
# RxBufPoolLimit --
#
# Limit the number of Rx buffers cached for each vmxnet3s# adapter.
# Increasing the limit might improve performance but increases the
# memory footprint.
#
# Minimum value: 0
# Maximum value: RxRingSize * 10
#
RxBufPoolLimit=40960,40960,40960,40960,40960,40960,40960,40960,40960,40960;
# EnableLSO --
#
# Enable or disable LSO for each vmxnet3s# adapter.
#
# Minimum value: 0
# Maximum value: 1
#
EnableLSO=1,1,1,1,1,1,1,1,1,1;
# MTU --
#
# Set MTU for each vmxnet3s# adapter.
#
# Minimum value: 60
# Maximum value: 9000
#
#MTU=1500,1500,1500,1500,1500,1500,1500,1500,1500,1500;
That appears to have made the iperf3 incoming scores a little worse:
Code:
[avery@hedgehoggrifter:~] $ iperf3 -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 192.168.1.38, port 49648
[ 5] local 192.168.1.52 port 5201 connected to 192.168.1.38 port 49650
[ ID] Interval Transfer Bandwidth
[ 5] 0.00-1.00 sec 535 MBytes 4.49 Gbits/sec
[ 5] 1.00-2.00 sec 358 MBytes 3.00 Gbits/sec
[ 5] 2.00-3.00 sec 383 MBytes 3.21 Gbits/sec
[ 5] 3.00-4.00 sec 378 MBytes 3.17 Gbits/sec
[ 5] 4.00-5.00 sec 376 MBytes 3.16 Gbits/sec
[ 5] 5.00-6.00 sec 384 MBytes 3.22 Gbits/sec
[ 5] 6.00-7.00 sec 391 MBytes 3.28 Gbits/sec
[ 5] 7.00-8.00 sec 376 MBytes 3.15 Gbits/sec
[ 5] 8.00-9.00 sec 378 MBytes 3.17 Gbits/sec
[ 5] 9.00-10.00 sec 383 MBytes 3.21 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth
[ 5] 0.00-10.00 sec 0.00 Bytes 0.00 bits/sec sender
[ 5] 0.00-10.00 sec 3.85 GBytes 3.31 Gbits/sec receiver
I think I may have gone a bit overboard on the RxBufPoolLimit. I'm going to try reducing RxBufPoolLimit to 16384, the same value as open-vm-tools' vmxnet3s.conf, and see if that improves the situation...
Also, I'm noticing this:
Code:
[avery@hedgehoggrifter:/kernel/drv] $ ipadm show-prop | egrep -i 'send|recv'
icmp recv_buf rw 8192 -- 8192 4096-262144
icmp send_buf rw 8192 -- 8192 4096-262144
tcp recv_buf rw 128000 -- 128000 2048-1048576
tcp send_buf rw 49152 -- 49152 4096-1048576
udp recv_buf rw 57344 -- 57344 128-2097152
udp send_buf rw 57344 -- 57344 1024-2097152
sctp recv_buf rw 102400 -- 102400 8192-1048576
sctp send_buf rw 102400 -- 102400 8192-1048576
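That default tcp recv_buf of 128000 bytes is interesting, because the receive window it advertises puts a hard ceiling on single-stream TCP throughput of roughly recv_buf / RTT. A back-of-envelope check (the 0.3 ms RTT is purely an assumption; measure the real one with ping):

```shell
# Ceiling = recv_buf * 8 bits / RTT; assumed 0.3 ms RTT between VMs.
awk 'BEGIN {
  recv_buf = 128000        # bytes, the default shown above
  rtt      = 0.0003        # seconds (assumed LAN RTT)
  gbits    = recv_buf * 8 / rtt / 1e9
  printf "ceiling with recv_buf=%d at %.1f ms RTT: %.2f Gbit/s\n",
         recv_buf, rtt * 1000, gbits
}'
```

At that assumed RTT the ceiling lands right around the ~3.3 Gbit/s measured above, which makes the receive buffer a prime suspect.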
Looks like there's a lot of wiggle-room there... is the set-prop command persistent?
Also, I tried restarting the svc:/network/interface:default service and /etc/init.d/vmware-tools, but neither appeared to apply the new settings (the vmxnet3 settings are echoed to the console on boot, but not after the service restarts), so I've been rebooting the machine between changes to vmxnet3s.conf. If that's wrong (i.e., restarting services DOES apply the new settings), please let me know.
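On the persistence question: as far as I know, ipadm set-prop changes persist across reboots unless you pass -t for a temporary change. A sketch (the buffer value here is just an example):

```shell
# Persistent: written to the ipadm configuration, survives reboot.
ipadm set-prop -p recv_buf=1048576 tcp

# Temporary (-t): applies now, reverts at next boot.
ipadm set-prop -t -p recv_buf=1048576 tcp

# Compare current vs persistent vs default values.
ipadm show-prop -p recv_buf tcp
```

For /kernel/drv/vmxnet3s.conf itself, update_drv vmxnet3s asks the system to re-read the driver.conf, but with the NIC plumbed and in use it typically won't take effect until the driver is actually reloaded, so rebooting between changes is probably the right call.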
If anyone has any suggestions, I'd appreciate it. Thanks!
EDIT: I just noticed something interesting: the tcp send_buf and recv_buf values are quite a bit different on my established OmniOS fileserver, which I set up back in 2018 from the OmniOS 151024ce napp-it.ova:
Code:
avery@napp-it01:~$ ipadm show-prop | egrep -i 'send|recv'
icmp recv_buf rw 8192 -- 8192 4096-262144
icmp send_buf rw 8192 -- 8192 4096-262144
tcp recv_buf rw 2097152 2097152 128000 2048-16777216
tcp send_buf rw 2097152 2097152 49152 4096-16777216
udp recv_buf rw 57344 -- 57344 128-2097152
udp send_buf rw 57344 -- 57344 1024-2097152
sctp recv_buf rw 102400 -- 102400 8192-1048576
sctp send_buf rw 102400 -- 102400 8192-1048576
I'm going to try setting these same values on the new OmniOS VM and see what happens...
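For reference, the ranges above suggest the old server also had tcp max_buf raised (its recv_buf range tops out at 16 MB versus 1 MB on the new VM), so the ceiling probably has to be bumped before 2 MB buffers will be accepted. A hedged sketch of replicating the old box's settings (untested on my side):

```shell
# Raise the ceiling first; recv_buf/send_buf can't be set above max_buf.
ipadm set-prop -p max_buf=16777216 tcp

# Then match the old fileserver's values.
ipadm set-prop -p recv_buf=2097152 tcp
ipadm set-prop -p send_buf=2097152 tcp

# Verify current, persistent, and default values.
ipadm show-prop tcp | egrep 'max_buf|recv_buf|send_buf'
```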