Strange iperf3 speeds - incoming about 1/2 as fast?


AveryFreeman

consummate homelabber
Hi,

So I'm trying to set up a new OmniOS 151036 VM on a new ESXi host.

I'm using the vmxnet3s driver from the VMware Tools 10.3.10 Solaris ISO right now, but was using open-vm-tools previously. I'm not seeing much difference between the two; I only switched to see what would happen, since I'm seeing these slow incoming iperf3 speeds.
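For what it's worth, here's how I've been double-checking which driver is actually bound (assuming the first adapter shows up as vmxnet3s0):

Code:
# confirm the vmxnet3s module is loaded and see which datalinks exist
modinfo | grep -i vmxnet
dladm show-link
# vmxnet3s0 is an assumption -- substitute your actual link name
dladm show-linkprop -p mtu vmxnet3s0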

I have processor affinity set to the last four cores of the host, with no other VMs on those cores (each VM has its own affinity set to different cores).

iperf3 OUT:

Code:
[root@hedgehoggrifter:/kernel/drv] $ iperf3 -c 192.168.1.38
Connecting to host 192.168.1.38, port 5201
[  4] local 192.168.1.52 port 56504 connected to 192.168.1.38 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec  1.02 GBytes  8.79 Gbits/sec               
[  4]   1.00-2.00   sec  1.10 GBytes  9.41 Gbits/sec               
[  4]   2.00-3.00   sec  1.09 GBytes  9.36 Gbits/sec               
[  4]   3.00-4.00   sec  1.09 GBytes  9.38 Gbits/sec               
[  4]   4.00-5.00   sec  1.09 GBytes  9.40 Gbits/sec               
[  4]   5.00-6.00   sec  1.09 GBytes  9.34 Gbits/sec               
[  4]   6.00-7.00   sec  1.09 GBytes  9.41 Gbits/sec               
[  4]   7.00-8.00   sec  1.09 GBytes  9.34 Gbits/sec               
[  4]   8.00-9.00   sec  1.06 GBytes  9.08 Gbits/sec               
[  4]   9.00-10.00  sec  1.09 GBytes  9.38 Gbits/sec               
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec  10.8 GBytes  9.29 Gbits/sec                  sender
[  4]   0.00-10.00  sec  10.8 GBytes  9.29 Gbits/sec                  receiver
iperf3 IN:

Code:
[root@hedgehoggrifter:/kernel/drv] $ iperf3 -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 192.168.1.38, port 49416
[  5] local 192.168.1.52 port 5201 connected to 192.168.1.38 port 49418
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-1.00   sec   530 MBytes  4.45 Gbits/sec               
[  5]   1.00-2.00   sec   413 MBytes  3.46 Gbits/sec               
[  5]   2.00-3.00   sec   398 MBytes  3.33 Gbits/sec               
[  5]   3.00-4.00   sec   401 MBytes  3.36 Gbits/sec               
[  5]   4.00-5.00   sec   399 MBytes  3.35 Gbits/sec               
[  5]   5.00-6.00   sec   395 MBytes  3.31 Gbits/sec               
[  5]   6.00-7.00   sec   396 MBytes  3.32 Gbits/sec               
[  5]   7.00-8.00   sec   394 MBytes  3.31 Gbits/sec               
[  5]   8.00-9.00   sec   394 MBytes  3.31 Gbits/sec               
[  5]   9.00-10.00  sec   392 MBytes  3.29 Gbits/sec               
[  5]  10.00-10.01  sec  89.1 KBytes   119 Mbits/sec               
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-10.01  sec  0.00 Bytes  0.00 bits/sec                  sender
[  5]   0.00-10.01  sec  4.02 GBytes  3.45 Gbits/sec                  receiver
As I noticed in a post I made back in 2018, /kernel/drv/vmxnet3s.conf differs between open-vm-tools and VMware Tools, so I tried increasing my Rx and Tx ring sizes and my RxBufPoolLimit:

Here's my vmxnet3s.conf:

Code:
[avery@hedgehoggrifter:/kernel/drv] $ cat vmxnet3s.conf
# Driver.conf(4) file for VMware Vmxnet Generation 3 adapters.

# TxRingSize --
#
#    Tx ring size for each vmxnet3s# adapter. Must be a multiple of 32.
#
#    Minimum value: 32
#    Maximum value: 4096
#
TxRingSize=4096,4096,4096,4096,4096,4096,4096,4096,4096,4096;

# RxRingSize --
#
#    Rx ring size for each vmxnet3s# adapter. Must be a multiple of 32.
#
#    Minimum value: 32
#    Maximum value: 4096
#
RxRingSize=4096,4096,4096,4096,4096,4096,4096,4096,4096,4096;

# RxBufPoolLimit --
#
#    Limit the number of Rx buffers cached for each vmxnet3s# adapter.
#    Increasing the limit might improve performance but increases the
#    memory footprint.
#
#    Minimum value: 0
#    Maximum value: RxRingSize * 10
#
RxBufPoolLimit=40960,40960,40960,40960,40960,40960,40960,40960,40960,40960;

# EnableLSO --
#
#    Enable or disable LSO for each vmxnet3s# adapter.
#
#    Minimum value: 0
#    Maximum value: 1
#
EnableLSO=1,1,1,1,1,1,1,1,1,1;

# MTU --
#
#    Set MTU for each vmxnet3s# adapter.
#
#    Minimum value: 60
#    Maximum value: 9000
#
#MTU=1500,1500,1500,1500,1500,1500,1500,1500,1500,1500;
That appears to have made iperf3 incoming scores a little worse:

Code:
[avery@hedgehoggrifter:~] $ iperf3 -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 192.168.1.38, port 49648
[  5] local 192.168.1.52 port 5201 connected to 192.168.1.38 port 49650
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-1.00   sec   535 MBytes  4.49 Gbits/sec               
[  5]   1.00-2.00   sec   358 MBytes  3.00 Gbits/sec               
[  5]   2.00-3.00   sec   383 MBytes  3.21 Gbits/sec               
[  5]   3.00-4.00   sec   378 MBytes  3.17 Gbits/sec               
[  5]   4.00-5.00   sec   376 MBytes  3.16 Gbits/sec               
[  5]   5.00-6.00   sec   384 MBytes  3.22 Gbits/sec               
[  5]   6.00-7.00   sec   391 MBytes  3.28 Gbits/sec               
[  5]   7.00-8.00   sec   376 MBytes  3.15 Gbits/sec               
[  5]   8.00-9.00   sec   378 MBytes  3.17 Gbits/sec               
[  5]   9.00-10.00  sec   383 MBytes  3.21 Gbits/sec               
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-10.00  sec  0.00 Bytes   0.00 bits/sec                 sender
[  5]   0.00-10.00  sec  3.85 GBytes  3.31 Gbits/sec              receiver
I think I may have gone a bit overboard on the RxBufPoolLimit. I'm going to try reducing it to 16384, the same value open-vm-tools ships in its vmxnet3s.conf, and see if that improves the situation...
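Something like this should do it; plain POSIX sed writing to a copy, since I'm not sure OmniOS's sed supports -i:

Code:
# back up the conf, rewrite the RxBufPoolLimit line, then reboot so
# vmxnet3s re-reads /kernel/drv/vmxnet3s.conf
cp /kernel/drv/vmxnet3s.conf /kernel/drv/vmxnet3s.conf.bak
sed 's/^RxBufPoolLimit=.*/RxBufPoolLimit=16384,16384,16384,16384,16384,16384,16384,16384,16384,16384;/' \
    /kernel/drv/vmxnet3s.conf.bak > /kernel/drv/vmxnet3s.conf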

Also, I'm noticing this:

Code:
[avery@hedgehoggrifter:/kernel/drv] $ ipadm show-prop | egrep -i 'send|recv'

icmp  recv_buf              rw   8192         --           8192         4096-262144
icmp  send_buf              rw   8192         --           8192         4096-262144
tcp   recv_buf              rw   128000       --           128000       2048-1048576
tcp   send_buf              rw   49152        --           49152        4096-1048576
udp   recv_buf              rw   57344        --           57344        128-2097152
udp   send_buf              rw   57344        --           57344        1024-2097152
sctp  recv_buf              rw   102400       --           102400       8192-1048576
sctp  send_buf              rw   102400       --           102400       8192-1048576
Looks like there's a lot of wiggle-room there... is ipadm set-prop persistent across reboots?
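From skimming ipadm(1M), it looks like set-prop is persistent by default and -t makes a change temporary, but someone correct me if I've got that wrong:

Code:
# persistent (the default) -- should survive a reboot and show up in
# the PERSISTENT column of ipadm show-prop
ipadm set-prop -p recv_buf=1048576 tcp

# temporary -- changes the current value only, reverts at reboot
ipadm set-prop -t -p recv_buf=1048576 tcp

# reset back to the built-in default
ipadm reset-prop -p recv_buf tcp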

Also, I tried restarting svc:/network/interface:default and /etc/init.d/vmware-tools, but neither appeared to apply the new settings (the vmxnet3 settings are echoed to the console at boot, but not after a service restart), so I've been rebooting the machine between changes to vmxnet3s.conf. If that's wrong and restarting a service does apply the new settings, please let me know.
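One thing I haven't tried yet is update_drv, which is supposed to make the kernel re-read a driver's .conf; I'm not sure the running vmxnet3s instance picks up the new values without detaching and re-attaching, so treat this as untested:

Code:
# ask the kernel to re-read /kernel/drv/vmxnet3s.conf (untested here --
# the instance may still need to detach/re-attach before values apply)
update_drv vmxnet3s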

If anyone has any suggestions, I'd appreciate it. Thanks!

EDIT: I just noticed something interesting: tcp recv_buf and send_buf are quite a bit different on my established OmniOS fileserver, which I set up back in 2018 from the OmniOS 151024ce napp-it.ova:

Code:
avery@napp-it01:~$ ipadm show-prop | egrep -i 'send|recv'
icmp  recv_buf              rw   8192         --           8192         4096-262144
icmp  send_buf              rw   8192         --           8192         4096-262144
tcp   recv_buf              rw   2097152      2097152      128000       2048-16777216
tcp   send_buf              rw   2097152      2097152      49152        4096-16777216
udp   recv_buf              rw   57344        --           57344        128-2097152
udp   send_buf              rw   57344        --           57344        1024-2097152
sctp  recv_buf              rw   102400       --           102400       8192-1048576
sctp  send_buf              rw   102400       --           102400       8192-1048576
I'm going to try setting these same values on the new OmniOS VM and see what happens...
 

AveryFreeman

consummate homelabber
OK, I think I figured some of this out...

I set the tcp max_buf, send_buf, and recv_buf to the values from the napp-it OVA:

Code:
[root@napp-it01:~] $ ipadm show-prop | egrep 'max|send|recv' | grep tcp

tcp   max_buf               rw   16777216     16777216     1048576      8192-1073741824
tcp   recv_buf              rw   2097152      2097152      128000       2048-16777216
tcp   send_buf              rw   2097152      2097152      49152        4096-16777216
Setting max_buf first was necessary in order to raise the allowed range for send_buf and recv_buf. Pretty straightforward...
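For reference, the by-hand version, with max_buf first since it caps the other two:

Code:
# raise the ceiling first, then the per-connection buffer defaults
ipadm set-prop -p max_buf=16777216 tcp
ipadm set-prop -p send_buf=2097152 tcp
ipadm set-prop -p recv_buf=2097152 tcp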

Rather than doing it by hand, I wrote a tiny script:

Code:
#!/bin/bash
# Set the TCP buffer tunables persistently via ipadm.
# max_buf must come first: it is the ceiling for send_buf and recv_buf.
prop=(max_buf send_buf recv_buf)
val=(16777216 2097152 2097152)

for i in "${!prop[@]}"; do
    ipadm set-prop -p "${prop[$i]}=${val[$i]}" tcp
done
I doubt this was any easier, but I got to practice bash...

It looks like it worked...!

Code:
[root@hedgehoggrifter:~] $ iperf3 -c ubnt

Connecting to host ubnt, port 5201
[  5] local 192.168.1.52 port 50344 connected to 192.168.1.38 port 5201
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-1.00   sec  1.03 GBytes  8.83 Gbits/sec              
[  5]   1.00-2.00   sec  1.09 GBytes  9.41 Gbits/sec              
[  5]   2.00-3.00   sec  1.09 GBytes  9.39 Gbits/sec              
[  5]   3.00-4.00   sec  1.10 GBytes  9.41 Gbits/sec              
[  5]   4.00-5.00   sec  1.09 GBytes  9.40 Gbits/sec              
[  5]   5.00-6.00   sec  1.10 GBytes  9.41 Gbits/sec              
[  5]   6.00-7.00   sec  1.09 GBytes  9.40 Gbits/sec              
[  5]   7.00-8.00   sec  1.09 GBytes  9.40 Gbits/sec              
[  5]   8.00-9.00   sec  1.10 GBytes  9.41 Gbits/sec              
[  5]   9.00-10.00  sec  1.09 GBytes  9.41 Gbits/sec              
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-10.00  sec  10.9 GBytes  9.35 Gbits/sec                  sender
[  5]   0.00-10.00  sec  10.9 GBytes  9.35 Gbits/sec                  receiver

[root@hedgehoggrifter:~] $ iperf3 -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 192.168.1.38, port 53024
[  5] local 192.168.1.52 port 5201 connected to 192.168.1.38 port 53026
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-1.00   sec  1009 MBytes  8.46 Gbits/sec              
[  5]   1.00-2.00   sec  1.10 GBytes  9.41 Gbits/sec              
[  5]   2.00-3.00   sec  1.09 GBytes  9.40 Gbits/sec              
[  5]   3.00-4.00   sec  1.10 GBytes  9.41 Gbits/sec              
[  5]   4.00-5.00   sec  1.10 GBytes  9.42 Gbits/sec              
[  5]   5.00-6.00   sec  1.10 GBytes  9.41 Gbits/sec              
[  5]   6.00-7.00   sec  1.10 GBytes  9.41 Gbits/sec              
[  5]   7.00-8.00   sec  1.10 GBytes  9.41 Gbits/sec              
[  5]   8.00-9.00   sec  1.09 GBytes  9.39 Gbits/sec              
[  5]   9.00-10.00  sec  1.09 GBytes  9.41 Gbits/sec              
[  5]  10.00-10.00  sec  1.02 MBytes  9.97 Gbits/sec              
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-10.00  sec  0.00 Bytes  0.00 bits/sec                  sender
[  5]   0.00-10.00  sec  10.8 GBytes 9.31 Gbits/sec                 receiver
 

AveryFreeman

consummate homelabber
Forgot one last thing: I wanted to show my current vmxnet3s.conf settings:

Code:
[root@hedgehoggrifter:~] $ cat /kernel/drv/vmxnet3s.conf | egrep -i 'ring|pool|lso'
# TxRingSize --
#    Tx ring size for each vmxnet3s# adapter. Must be a multiple of 32.
TxRingSize=4096,4096,4096,4096,4096,4096,4096,4096,4096,4096;
# RxRingSize --
#    Rx ring size for each vmxnet3s# adapter. Must be a multiple of 32.
RxRingSize=4096,4096,4096,4096,4096,4096,4096,4096,4096,4096;
# RxBufPoolLimit --
#    Maximum value: RxRingSize * 10
RxBufPoolLimit=24576,24576,24576,24576,24576,24576,24576,24576,24576,24576;
# EnableLSO --
#    Enable or disable LSO for each vmxnet3s# adapter.
EnableLSO=1,1,1,1,1,1,1,1,1,1;
I spent six hours tuning the FreeBSD network stack to get speeds like this the other day, and I have to say, OmniOS was a LOT easier...

Also, found a great reference: TCP Tunable Parameters - Oracle Solaris Tunable Parameters Reference Manual
 

gea

Well-Known Member
You can also use the napp-it menu System > Appliance Tuning, which lets you compare different sets of settings.
 

AveryFreeman

consummate homelabber
Nice feature. I haven't installed napp-it on this VM yet; I'm trying not to use any software as a crutch, and to learn as much about the underlying OS as possible. I'll probably install it in the future, though.