Flapping NIC


animefans

New Member
I think that's what it's called?

This is in my dmesg
Jan 24 22:32:44 nappit mac: [ID 435574 kern.info] NOTICE: ixgbe0 link up, 10000 Mbps, full duplex
Jan 24 22:32:45 nappit mac: [ID 486395 kern.info] NOTICE: ixgbe0 link down
Jan 24 22:32:45 nappit mac: [ID 435574 kern.info] NOTICE: ixgbe0 link up, 10000 Mbps, full duplex
Jan 24 22:32:46 nappit mac: [ID 486395 kern.info] NOTICE: ixgbe0 link down
Jan 24 22:32:46 nappit mac: [ID 435574 kern.info] NOTICE: ixgbe0 link up, 10000 Mbps, full duplex
Jan 24 22:32:47 nappit mac: [ID 486395 kern.info] NOTICE: ixgbe0 link down
Jan 24 22:32:47 nappit mac: [ID 435574 kern.info] NOTICE: ixgbe0 link up, 10000 Mbps, full duplex



MB is SM X10SL7-F
NIC is SM AOC-STGN-I2S Rev 2 (the shorter version). AFAIK, this should be very similar to, if not the same as, the Intel X520-DA2.
SFP+ module is SFP-10G-SR. According to the MikroTik CRS305-1G-4S+ switch, the part number is FTLX8571D3BCL-C2 (and from my google-fu, I think this is where the issue is).

From what I read, Linux has a driver switch that allows non-Intel SFP+ modules to work with this NIC.
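For reference, I believe the Linux knob is the ixgbe module's allow_unsupported_sfp parameter, set with something like this (I haven't tried it myself, this is just from reading around):
Code:
# Linux only, for reference - e.g. in /etc/modprobe.d/ixgbe.conf
options ixgbe allow_unsupported_sfp=1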

I found a similar flag in /kernel/drv/ixgbe.conf:
# allow_unsupported_sfp
# Allow use of unsupported (non-Intel) SFP modules in adapters with
# pluggable optics
# Allowed values: 0 - 1
# Default value: 0


Here's my prtconf output
kkfong@nappit:/var/log$ prtconf -dD |grep Giga
pci15d9,611 (pciex8086,10fb) [Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection], instance #0 (driver name: ixgbe)
pci15d9,611 (pciex8086,10fb) [Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection], instance #1 (driver name: ixgbe)
pci15d9,1533 (pciex8086,1533) [Intel Corporation I210 Gigabit Network Connection], instance #0 (driver name: igb)
pci15d9,1533 (pciex8086,1533) [Intel Corporation I210 Gigabit Network Connection], instance #1 (driver name: igb)


Here's my /etc/path_to_inst for ixgbe
kkfong@nappit:/var/log$ grep ixgbe /etc/path_to_inst
"/pci@0,0/pci8086,c01@1/pci15d9,611@0" 0 "ixgbe"
"/pci@0,0/pci8086,c01@1/pci15d9,611@0,1" 1 "ixgbe"


Here's my ixgbe.conf modification
kkfong@nappit:/var/log$ tail /kernel/drv/ixgbe.conf
#
# name = "pciex8086,10c6" parent = "/pci@0,0/pci10de,\<pci10de\>5d@e" unit-address = "0"
# flow_control = 1;
# name = "pciex8086,10c6" parent = "/pci@0,0/\<pci\>pci10de,5d@e" unit-address = "1"
# flow_control = 3;

name = "pciex8086,10fb" parent = "/pci@0,0/pci8086,\<pci8086\>c01@1" unit-address = "0"
allow_unsupported_sfp = 1;
name = "pciex8086,10fb" parent = "/pci@0,0/\<pci\>pci8086,c01@1" unit-address = "1"
allow_unsupported_sfp = 1;
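After editing the .conf the driver has to re-read it; as far as I can tell a reboot is the sure way, though update_drv might pick it up without one (I haven't verified that):
Code:
# ask the ixgbe driver to re-read its driver.conf (reboot if this doesn't take)
sudo update_drv -f ixgbe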


Here are my various dladm outputs
kkfong@nappit:/var/log$ dladm show-phys
LINK      MEDIA      STATE    SPEED  DUPLEX   DEVICE
ixgbe0    Ethernet   up       10000  full     ixgbe0
igb0      Ethernet   up       1000   full     igb0
igb1      Ethernet   unknown  0      half     igb1
ixgbe1    Ethernet   unknown  0      unknown  ixgbe1
kkfong@nappit:/var/log$ dladm show-ether
LINK      PTYPE    STATE    AUTO  SPEED-DUPLEX  PAUSE
ixgbe0    current  up       yes   10G-f         bi
igb0      current  up       yes   1G-f          bi
igb1      current  unknown  yes   0G-h          bi
ixgbe1    current  down     yes   0G            bi
kkfong@nappit:/var/log$ dladm show-link
LINK      CLASS  MTU   STATE    BRIDGE  OVER
ixgbe0    phys   9000  down     --      --
igb0      phys   1500  up       --      --
igb1      phys   1500  unknown  --      --
ixgbe1    phys   1500  down     --      --
kkfong@nappit:/var/log$ dladm show-link
LINK      CLASS  MTU   STATE    BRIDGE  OVER
ixgbe0    phys   9000  up       --      --
igb0      phys   1500  up       --      --
igb1      phys   1500  unknown  --      --
ixgbe1    phys   1500  down     --      --
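For anyone following along, the flapping is easy to watch in real time by tailing the system log (same kind of messages as the dmesg output at the top):
Code:
# watch the link bounce as it happens
tail -f /var/adm/messages | grep ixgbe0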



Am I doing this right?
Is my 10GbE link going up and down because of an incompatible SFP+ module?
 

AveryFreeman

consummate homelabber
I have the AOC-STGN-I1S in 3 different machines, but I use ESXi and have never seen issues like that with the VMXNET3 driver. An Intel NIC is probably the most likely to be stable and compatible on any OS, so I'm very surprised to see this behavior with that NIC.

Why wouldn't an Intel-branded NIC be cross-compatible with another vendor's NIC with the same chipset? Did you have to mod it to get it to work in the first place?

This might sound dumb, but make sure your cable works well and is plugged all the way in. If you have another cable you can swap in, give it a shot. Start by eliminating the simple possibilities.

The easiest thing configuration-wise would probably be to upgrade the whole OS, since the ixgbe driver has to match the kernel version. What OS version are you on? The newest release is r151036.
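If you're already on an r151036 install, I believe a plain pkg update gets you to the newest point release and drops it into a fresh boot environment, so it's easy to roll back:
Code:
sudo pkg update
beadm list   # the update should land in a new boot environment; reboot into it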

You could try re-installing the driver in case your current copy somehow got borked...

Code:
[avery@grubbygardner:/kernel/drv] $ pkg search ixgbe
INDEX       ACTION VALUE                                                              PACKAGE
basename    dir    opt/onbld/closed/root_i386-nd/licenses/usr/src/uts/common/io/ixgbe pkg:/developer/illumos-closed@5.11-151036.0
basename    dir    opt/onbld/closed/root_i386/licenses/usr/src/uts/common/io/ixgbe    pkg:/developer/illumos-closed@5.11-151036.0
driver_name driver ixgbe                                                              pkg:/driver/network/ixgbe@0.5.11-151036.0
basename    file   kernel/drv/amd64/ixgbe                                             pkg:/driver/network/ixgbe@0.5.11-151036.0
pkg.fmri    set    omnios/driver/network/ixgbe                                        pkg:/driver/network/ixgbe@0.5.11-151036.0
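If you'd rather verify/repair the installed copy than do a full reinstall, I think pkg fix will do it (I haven't needed it myself):
Code:
sudo pkg fix driver/network/ixgbe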
I'm not familiar with ixgbe.conf personally, but I did improve my throughput by modifying a combination of vmxnet3.conf and ipadm set-prop. Try an ipadm show-prop and see if there's anything in there that looks like it could have an influence on ixgbe's strange behavior.

Maybe try a different congestion_control method, toggling LRO/TSO on/off, etc.
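The congestion control algorithm, for example, is just a TCP property (cubic may or may not be available on your release):
Code:
# see what's in use now
ipadm show-prop -p congestion_control tcp
# try a different algorithm, e.g. cubic
sudo ipadm set-prop -p congestion_control=cubic tcp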

Installing pciutils might be helpful as it supplies lspci:

Code:
[root@grubbygardner:/kernel/drv] $ lspci -nnv | grep -A 10 NET

0b:00.0 Ethernet controller [0200]: VMware VMXNET3 Ethernet Controller [15ad:07b0] (rev 01)
Subsystem: VMware VMXNET3 Ethernet Controller [15ad:07b0]
Flags: bus master, fast devsel, latency 0, IRQ 10
Memory at fd4fc000 (32-bit, non-prefetchable)
Memory at fd4fd000 (32-bit, non-prefetchable)
Memory at fd4fe000 (32-bit, non-prefetchable)
I/O ports at 5000
Capabilities: [40] Power Management version 3
Capabilities: [48] Express Endpoint, MSI 00
Capabilities: [84] MSI: Enable- Count=1/1 Maskable- 64bit+
Capabilities: [9c] MSI-X: Enable+ Count=65 Masked-
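If it's not on your system yet, it should just be one command away; the package is system/pciutils from the omnios publisher:
Code:
sudo pkg install system/pciutils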
 

animefans

New Member
Thanks for your feedback!

Just for the heck of it, instead of using an SFP+ port on the MikroTik CRS305-1G-4S+ switch, I plugged it into my other switch, an ICX7150-C12P, and there's no more flapping: the NIC stays up.

And I can ping and run iperf3 between the two devices, which are on the same subnet.

It seems like the MikroTik is picky about the SFP-10G-SR / FTLX8571D3BCL-C2 module.

Installing pciutils might be helpful as it supplies lspci:
I did try, but it didn't work...

kkfong@nappit:~$ sudo lspci -nnv
lspci: Cannot find any working access method.
kkfong@nappit:~$ cat /etc/*release
NAME="OmniOS"
PRETTY_NAME="OmniOS Community Edition v11 r151036"
CPE_NAME="cpe:/o:eek:mniosce:eek:mnios:11:151036:0"
ID=omnios
VERSION=r151036
VERSION_ID=r151036
BUILD_ID=151036.0.2020.10.29
HOME_URL="https://omniosce.org/"
SUPPORT_URL="https://omniosce.org/"
BUG_REPORT_URL="https://github.com/omniosorg/omnios-build/issues/new"
OmniOS v11 r151036
Copyright (c) 2012-2017 OmniTI Computer Consulting, Inc.
Copyright (c) 2017-2020 OmniOS Community Edition (OmniOSce) Association.
All rights reserved. Use is subject to licence terms.
kkfong@nappit:~$
 

AveryFreeman

consummate homelabber
Interesting switch. Is that a CommScope non-service-vendor line? I didn't think they sold anything to the general public.

Glad it's working out OK for you! Are you going to get a transceiver that makes your CRS305 happy? There's this wiki you've probably seen: MikroTik SFP module compatibility table - MikroTik Wiki

Do you happen to know if the CRS305 supports RDMA? Someone said in an Amazon review that's what they're using it for, but I can't find any better-sourced info about it and RDMA.

Oh, and what about /kernel/drv/ixgbe.conf - are allow_unsupported_sfp or other mods really necessary? I'd like to know how much of a difference that makes.

Got any iperf3 speeds you want to post yet? :)

Weird, I wonder why your lspci command wouldn't work... (?) Maybe start another thread (?)

Here's my release:

Code:
[avery@hedgehoggrifter:~] $ cat /etc/*release
NAME="OmniOS"
PRETTY_NAME="OmniOS Community Edition v11 r151036m"
CPE_NAME="cpe:/o:omniosce:omnios:11:151036:13"
ID=omnios
VERSION=r151036m
VERSION_ID=r151036m
BUILD_ID=151036.13.2021.01.19
HOME_URL="https://omniosce.org/"
SUPPORT_URL="https://omniosce.org/"
BUG_REPORT_URL="https://github.com/omniosorg/omnios-build/issues/new"
OmniOS v11 r151036m
Copyright (c) 2012-2017 OmniTI Computer Consulting, Inc.
Copyright (c) 2017-2021 OmniOS Community Edition (OmniOSce) Association.
All rights reserved. Use is subject to licence terms.
Here's my pciutils pkg version:

Code:
[root@hedgehoggrifter:~] $ pkg info pciutils
             Name: system/pciutils
          Summary: PCI device utilities
      Description: Programs (lspci, setpci) for inspecting and manipulating
                   configuration of PCI devices
            State: Installed
        Publisher: omnios
          Version: 3.7.0
           Branch: 151036.1
   Packaging Date: December 17, 2020 at 09:09:10 PM
Last Install Time: January 27, 2021 at 09:31:52 PM
             Size: 223.40 kB
             FMRI: pkg://omnios/system/pciutils@3.7.0-151036.1:20201217T210910Z
       Source URL: https://mirrors.omniosce.org/pciutils/pciutils-3.7.0.tar.xz
Here's its output...:

Code:
[root@hedgehoggrifter:~] $ lspci -v
00:00.0 Host bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX Host bridge (rev 01)
Subsystem: VMware Virtual Machine Chipset
Flags: bus master, medium devsel, latency 0

00:01.0 PCI bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX AGP bridge (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, 66MHz, medium devsel, latency 0
Bus: primary=00, secondary=01, subordinate=01, sec-latency=64
I/O behind bridge: [disabled]
Memory behind bridge: c0000000-c00fffff [size=1M]
Prefetchable memory behind bridge: f0000000-f00fffff [size=1M]

00:07.0 ISA bridge: Intel Corporation 82371AB/EB/MB PIIX4 ISA (rev 08)
Subsystem: VMware Virtual Machine Chipset
Flags: bus master, medium devsel, latency 0

00:07.1 IDE interface: Intel Corporation 82371AB/EB/MB PIIX4 IDE (rev 01) (prog-if 8a [ISA Compatibility mode controller, supports both channels switched to PCI native mode, supports bus mastering])
Subsystem: VMware Virtual Machine Chipset
Flags: bus master, medium devsel, latency 64, IRQ 255
I/O ports at 1060

00:07.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 08)
Subsystem: VMware Virtual Machine Chipset
Flags: medium devsel

00:07.7 System peripheral: VMware Virtual Machine Communication Interface (rev 10)
Subsystem: VMware Virtual Machine Communication Interface
Flags: medium devsel, IRQ 9
I/O ports at 1080
Memory at febfe000 (64-bit, non-prefetchable)
Capabilities: [40] MSI: Enable- Count=1/1 Maskable- 64bit+
Capabilities: [58] MSI-X: Enable- Count=2 Masked-

00:0f.0 VGA compatible controller: VMware SVGA II Adapter (prog-if 00 [VGA controller])
Subsystem: VMware SVGA II Adapter
Flags: bus master, medium devsel, latency 64, IRQ 9
I/O ports at 1070
Memory at e8000000 (32-bit, prefetchable)
Memory at fe000000 (32-bit, non-prefetchable)
Capabilities: [40] Vendor Specific Information: Len=00 <?>
Capabilities: [44] PCI Advanced Features

00:10.0 SCSI storage controller: Broadcom / LSI 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI (rev 01)
Subsystem: VMware LSI Logic Parallel SCSI Controller
Flags: bus master, medium devsel, latency 64, IRQ 11
I/O ports at 1400
Memory at feba0000 (64-bit, non-prefetchable)
Memory at febc0000 (64-bit, non-prefetchable)
Capabilities: [f8] PCI Advanced Features

00:11.0 PCI bridge: VMware PCI bridge (rev 02) (prog-if 01 [Subtractive decode])
Flags: bus master, medium devsel, latency 64, IRQ 255
Bus: primary=00, secondary=02, subordinate=02, sec-latency=68
I/O behind bridge: 00002000-00003fff [size=8K]
Memory behind bridge: fd600000-fdffffff [size=10M]
Prefetchable memory behind bridge: 00000000e7b00000-00000000e7ffffff [size=5M]
Capabilities: [40] Subsystem: VMware PCI bridge

00:15.0 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0, IRQ 255
Bus: primary=00, secondary=03, subordinate=03, sec-latency=0
I/O behind bridge: 00004000-00004fff [size=4K]
Memory behind bridge: fd500000-fd5fffff [size=1M]
Prefetchable memory behind bridge: [disabled]
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+

00:15.1 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0, IRQ 255
Bus: primary=00, secondary=04, subordinate=04, sec-latency=0
I/O behind bridge: 00008000-00008fff [size=4K]
Memory behind bridge: fd100000-fd1fffff [size=1M]
Prefetchable memory behind bridge: 00000000e7900000-00000000e79fffff [size=1M]
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+

00:15.2 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0, IRQ 255
Bus: primary=00, secondary=05, subordinate=05, sec-latency=0
I/O behind bridge: 0000c000-0000cfff [size=4K]
Memory behind bridge: fcd00000-fcdfffff [size=1M]
Prefetchable memory behind bridge: 00000000e7500000-00000000e75fffff [size=1M]
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+

00:15.3 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0, IRQ 255
Bus: primary=00, secondary=06, subordinate=06, sec-latency=0
I/O behind bridge: [disabled]
Memory behind bridge: fc900000-fc9fffff [size=1M]
Prefetchable memory behind bridge: 00000000e7100000-00000000e71fffff [size=1M]
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+

00:15.4 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0, IRQ 255
Bus: primary=00, secondary=07, subordinate=07, sec-latency=0
I/O behind bridge: [disabled]
Memory behind bridge: fc500000-fc5fffff [size=1M]
Prefetchable memory behind bridge: 00000000e6d00000-00000000e6dfffff [size=1M]
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+

00:15.5 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0, IRQ 255
Bus: primary=00, secondary=08, subordinate=08, sec-latency=0
I/O behind bridge: [disabled]
Memory behind bridge: fc100000-fc1fffff [size=1M]
Prefetchable memory behind bridge: 00000000e6900000-00000000e69fffff [size=1M]
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+

00:15.6 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0, IRQ 255
Bus: primary=00, secondary=09, subordinate=09, sec-latency=0
I/O behind bridge: [disabled]
Memory behind bridge: fbd00000-fbdfffff [size=1M]
Prefetchable memory behind bridge: 00000000e6500000-00000000e65fffff [size=1M]
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+

00:15.7 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0, IRQ 255
Bus: primary=00, secondary=0a, subordinate=0a, sec-latency=0
I/O behind bridge: [disabled]
Memory behind bridge: fb900000-fb9fffff [size=1M]
Prefetchable memory behind bridge: 00000000e6100000-00000000e61fffff [size=1M]
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+

00:16.0 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0, IRQ 255
Bus: primary=00, secondary=0b, subordinate=0b, sec-latency=0
I/O behind bridge: 00005000-00005fff [size=4K]
Memory behind bridge: fd400000-fd4fffff [size=1M]
Prefetchable memory behind bridge: [disabled]
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+

00:16.1 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0, IRQ 255
Bus: primary=00, secondary=0c, subordinate=0c, sec-latency=0
I/O behind bridge: 00009000-00009fff [size=4K]
Memory behind bridge: fd000000-fd0fffff [size=1M]
Prefetchable memory behind bridge: 00000000e7800000-00000000e78fffff [size=1M]
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+

00:16.2 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0, IRQ 255
Bus: primary=00, secondary=0d, subordinate=0d, sec-latency=0
I/O behind bridge: 0000d000-0000dfff [size=4K]
Memory behind bridge: fcc00000-fccfffff [size=1M]
Prefetchable memory behind bridge: 00000000e7400000-00000000e74fffff [size=1M]
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+

00:16.3 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0, IRQ 255
Bus: primary=00, secondary=0e, subordinate=0e, sec-latency=0
I/O behind bridge: [disabled]
Memory behind bridge: fc800000-fc8fffff [size=1M]
Prefetchable memory behind bridge: 00000000e7000000-00000000e70fffff [size=1M]
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+

00:16.4 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0, IRQ 255
Bus: primary=00, secondary=0f, subordinate=0f, sec-latency=0
I/O behind bridge: [disabled]
Memory behind bridge: fc400000-fc4fffff [size=1M]
Prefetchable memory behind bridge: 00000000e6c00000-00000000e6cfffff [size=1M]
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+

00:16.5 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0, IRQ 255
Bus: primary=00, secondary=10, subordinate=10, sec-latency=0
I/O behind bridge: [disabled]
Memory behind bridge: fc000000-fc0fffff [size=1M]
Prefetchable memory behind bridge: 00000000e6800000-00000000e68fffff [size=1M]
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+

00:16.6 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0, IRQ 255
Bus: primary=00, secondary=11, subordinate=11, sec-latency=0
I/O behind bridge: [disabled]
Memory behind bridge: fbc00000-fbcfffff [size=1M]
Prefetchable memory behind bridge: 00000000e6400000-00000000e64fffff [size=1M]
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+

00:16.7 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0, IRQ 255
Bus: primary=00, secondary=12, subordinate=12, sec-latency=0
I/O behind bridge: [disabled]
Memory behind bridge: fb800000-fb8fffff [size=1M]
Prefetchable memory behind bridge: 00000000e6000000-00000000e60fffff [size=1M]
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+

00:17.0 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0, IRQ 255
Bus: primary=00, secondary=13, subordinate=13, sec-latency=0
I/O behind bridge: 00006000-00006fff [size=4K]
Memory behind bridge: fd300000-fd3fffff [size=1M]
Prefetchable memory behind bridge: [disabled]
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+

00:17.1 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0, IRQ 255
Bus: primary=00, secondary=14, subordinate=14, sec-latency=0
I/O behind bridge: 0000a000-0000afff [size=4K]
Memory behind bridge: fcf00000-fcffffff [size=1M]
Prefetchable memory behind bridge: 00000000e7700000-00000000e77fffff [size=1M]
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+

00:17.2 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0, IRQ 255
Bus: primary=00, secondary=15, subordinate=15, sec-latency=0
I/O behind bridge: 0000e000-0000efff [size=4K]
Memory behind bridge: fcb00000-fcbfffff [size=1M]
Prefetchable memory behind bridge: 00000000e7300000-00000000e73fffff [size=1M]
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+

00:17.3 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0, IRQ 255
Bus: primary=00, secondary=16, subordinate=16, sec-latency=0
I/O behind bridge: [disabled]
Memory behind bridge: fc700000-fc7fffff [size=1M]
Prefetchable memory behind bridge: 00000000e6f00000-00000000e6ffffff [size=1M]
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+

00:17.4 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0, IRQ 255
Bus: primary=00, secondary=17, subordinate=17, sec-latency=0
I/O behind bridge: [disabled]
Memory behind bridge: fc300000-fc3fffff [size=1M]
Prefetchable memory behind bridge: 00000000e6b00000-00000000e6bfffff [size=1M]
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+

00:17.5 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0, IRQ 255
Bus: primary=00, secondary=18, subordinate=18, sec-latency=0
I/O behind bridge: [disabled]
Memory behind bridge: fbf00000-fbffffff [size=1M]
Prefetchable memory behind bridge: 00000000e6700000-00000000e67fffff [size=1M]
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+

00:17.6 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0, IRQ 255
Bus: primary=00, secondary=19, subordinate=19, sec-latency=0
I/O behind bridge: [disabled]
Memory behind bridge: fbb00000-fbbfffff [size=1M]
Prefetchable memory behind bridge: 00000000e6300000-00000000e63fffff [size=1M]
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+

00:17.7 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0, IRQ 255
Bus: primary=00, secondary=1a, subordinate=1a, sec-latency=0
I/O behind bridge: [disabled]
Memory behind bridge: fb700000-fb7fffff [size=1M]
Prefetchable memory behind bridge: 00000000e5f00000-00000000e5ffffff [size=1M]
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+

00:18.0 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0, IRQ 255
Bus: primary=00, secondary=1b, subordinate=1b, sec-latency=0
I/O behind bridge: 00007000-00007fff [size=4K]
Memory behind bridge: fd200000-fd2fffff [size=1M]
Prefetchable memory behind bridge: 00000000e7a00000-00000000e7afffff [size=1M]
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+

00:18.1 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0, IRQ 255
Bus: primary=00, secondary=1c, subordinate=1c, sec-latency=0
I/O behind bridge: 0000b000-0000bfff [size=4K]
Memory behind bridge: fce00000-fcefffff [size=1M]
Prefetchable memory behind bridge: 00000000e7600000-00000000e76fffff [size=1M]
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+

00:18.2 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0, IRQ 255
Bus: primary=00, secondary=1d, subordinate=1d, sec-latency=0
I/O behind bridge: [disabled]
Memory behind bridge: fca00000-fcafffff [size=1M]
Prefetchable memory behind bridge: 00000000e7200000-00000000e72fffff [size=1M]
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+

00:18.3 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0, IRQ 255
Bus: primary=00, secondary=1e, subordinate=1e, sec-latency=0
I/O behind bridge: [disabled]
Memory behind bridge: fc600000-fc6fffff [size=1M]
Prefetchable memory behind bridge: 00000000e6e00000-00000000e6efffff [size=1M]
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+

00:18.4 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0, IRQ 255
Bus: primary=00, secondary=1f, subordinate=1f, sec-latency=0
I/O behind bridge: [disabled]
Memory behind bridge: fc200000-fc2fffff [size=1M]
Prefetchable memory behind bridge: 00000000e6a00000-00000000e6afffff [size=1M]
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+

00:18.5 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0, IRQ 255
Bus: primary=00, secondary=20, subordinate=20, sec-latency=0
I/O behind bridge: [disabled]
Memory behind bridge: fbe00000-fbefffff [size=1M]
Prefetchable memory behind bridge: 00000000e6600000-00000000e66fffff [size=1M]
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+

00:18.6 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0, IRQ 255
Bus: primary=00, secondary=21, subordinate=21, sec-latency=0
I/O behind bridge: [disabled]
Memory behind bridge: fba00000-fbafffff [size=1M]
Prefetchable memory behind bridge: 00000000e6200000-00000000e62fffff [size=1M]
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+

00:18.7 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0, IRQ 255
Bus: primary=00, secondary=22, subordinate=22, sec-latency=0
I/O behind bridge: [disabled]
Memory behind bridge: fb600000-fb6fffff [size=1M]
Prefetchable memory behind bridge: 00000000e5e00000-00000000e5efffff [size=1M]
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+

03:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03)
Subsystem: Broadcom / LSI Device 3060
Flags: bus master, fast devsel, latency 64, IRQ 7
I/O ports at 4000 [disabled]
Memory at fd5fc000 (64-bit, non-prefetchable)
Memory at fd580000 (64-bit, non-prefetchable)
Capabilities: [50] Power Management version 3
Capabilities: [68] Express Endpoint, MSI 00
Capabilities: [d0] Vital Product Data
Capabilities: [a8] MSI: Enable+ Count=1/1 Maskable- 64bit+
Capabilities: [c0] MSI-X: Enable- Count=15 Masked-

0b:00.0 Ethernet controller: VMware VMXNET3 Ethernet Controller (rev 01)
Subsystem: VMware VMXNET3 Ethernet Controller
Flags: bus master, fast devsel, latency 0, IRQ 10
Memory at fd4fc000 (32-bit, non-prefetchable)
Memory at fd4fd000 (32-bit, non-prefetchable)
Memory at fd4fe000 (32-bit, non-prefetchable)
I/O ports at 5000
Capabilities: [40] Power Management version 3
Capabilities: [48] Express Endpoint, MSI 00
Capabilities: [84] MSI: Enable- Count=1/1 Maskable- 64bit+
Capabilities: [9c] MSI-X: Enable+ Count=65 Masked-

13:00.0 Ethernet controller: VMware VMXNET3 Ethernet Controller (rev 01)
Subsystem: VMware VMXNET3 Ethernet Controller
Flags: bus master, fast devsel, latency 0, IRQ 9
Memory at fd3fc000 (32-bit, non-prefetchable)
Memory at fd3fd000 (32-bit, non-prefetchable)
Memory at fd3fe000 (32-bit, non-prefetchable)
I/O ports at 6000
Capabilities: [40] Power Management version 3
Capabilities: [48] Express Endpoint, MSI 00
Capabilities: [84] MSI: Enable- Count=1/1 Maskable- 64bit+
Capabilities: [9c] MSI-X: Enable+ Count=65 Masked-
etc.

Oh well, at least you figured out what the issue was!
 

animefans

New Member
I learned about that Brocade/Ruckus/CommScope switch from none other than this forum :)

I got mine from eBay; the seller claimed it came "free" as part of their new home setup.

I will certainly look for a compatible SFP+ module that will work with the MikroTik switch (likely from fs.com).

This is the first time I have heard of RDMA, but you've piqued my interest and I'm gonna look it up :)
I mainly got the switch for the 4 SFP+ ports (and it's silent, so wife-approved).

Right now I do have the unsupported-SFP flag set. It wasn't working when plugged into the MikroTik, and I forgot to take it out when plugging into the Ruckus switch.

Bash:
kkfong@nappit:~$ tail /kernel/drv/ixgbe.conf
#
# name = "pciex8086,10c6" parent = "/pci@0,0/pci10de,\<pci10de\>5d@e" unit-address = "0"
# flow_control = 1;
# name = "pciex8086,10c6" parent = "/pci@0,0/\<pci\>pci10de,5d@e" unit-address = "1"
# flow_control = 3;

name = "pciex8086,10fb" parent = "/pci@0,0/pci8086,\<pci8086\>c01@1" unit-address = "0"
allow_unsupported_sfp = 1;
name = "pciex8086,10fb" parent = "/pci@0,0/\<pci\>pci8086,c01@1" unit-address = "1"
allow_unsupported_sfp = 1;
From proxmox to napp-it
Bash:
root@proxmox:~# iperf3 -c 10.10.30.80
Connecting to host 10.10.30.80, port 5201
[  5] local 10.10.30.140 port 43122 connected to 10.10.30.80 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   701 MBytes  5.88 Gbits/sec    0    271 KBytes
[  5]   1.00-2.00   sec   674 MBytes  5.65 Gbits/sec    1    184 KBytes
[  5]   2.00-3.00   sec   692 MBytes  5.80 Gbits/sec    0    184 KBytes
[  5]   3.00-4.00   sec   692 MBytes  5.80 Gbits/sec    0    184 KBytes
[  5]   4.00-5.00   sec   689 MBytes  5.78 Gbits/sec    0    184 KBytes
[  5]   5.00-6.00   sec   686 MBytes  5.76 Gbits/sec    0    184 KBytes
[  5]   6.00-7.00   sec   660 MBytes  5.54 Gbits/sec    1    140 KBytes
[  5]   7.00-8.00   sec   668 MBytes  5.60 Gbits/sec    0    140 KBytes
[  5]   8.00-9.00   sec   666 MBytes  5.59 Gbits/sec    0    140 KBytes
[  5]   9.00-10.00  sec   651 MBytes  5.46 Gbits/sec    0    140 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  6.62 GBytes  5.69 Gbits/sec    2             sender
[  5]   0.00-10.00  sec  6.62 GBytes  5.69 Gbits/sec                  receiver

iperf Done.
root@proxmox:~#
From VM in proxmox to napp-it
Code:
$ sudo iperf3 -c 10.10.30.80
Password:
Connecting to host 10.10.30.80, port 5201
[  5] local 10.10.30.50 port 64542 connected to 10.10.30.80 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   461 MBytes  3.86 Gbits/sec    0    131 KBytes
[  5]   1.00-2.00   sec   463 MBytes  3.89 Gbits/sec    0    131 KBytes
[  5]   2.00-3.00   sec   462 MBytes  3.87 Gbits/sec    0    131 KBytes
[  5]   3.00-4.00   sec   462 MBytes  3.87 Gbits/sec    0    131 KBytes
[  5]   4.00-5.00   sec   460 MBytes  3.86 Gbits/sec    0    131 KBytes
[  5]   5.00-6.00   sec   464 MBytes  3.89 Gbits/sec    0    131 KBytes
[  5]   6.00-7.00   sec   460 MBytes  3.86 Gbits/sec    0    131 KBytes
[  5]   7.00-8.00   sec   455 MBytes  3.82 Gbits/sec    0    131 KBytes
[  5]   8.00-9.00   sec   461 MBytes  3.87 Gbits/sec    0    131 KBytes
[  5]   9.00-10.00  sec   448 MBytes  3.76 Gbits/sec    0    131 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  4.49 GBytes  3.86 Gbits/sec    0             sender
[  5]   0.00-10.00  sec  4.49 GBytes  3.86 Gbits/sec                  receiver

iperf Done.
Reverse iperf3 from napp-it to proxmox
Code:
root@proxmox:~# iperf3 -c 10.10.30.80 -R
Connecting to host 10.10.30.80, port 5201
Reverse mode, remote host 10.10.30.80 is sending
[  5] local 10.10.30.140 port 43126 connected to 10.10.30.80 port 5201
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec  1.15 GBytes  9.87 Gbits/sec
[  5]   1.00-2.00   sec  1.15 GBytes  9.89 Gbits/sec
[  5]   2.00-3.00   sec  1.15 GBytes  9.84 Gbits/sec
[  5]   3.00-4.00   sec  1.15 GBytes  9.89 Gbits/sec
[  5]   4.00-5.00   sec  1.15 GBytes  9.89 Gbits/sec
[  5]   5.00-6.00   sec  1.14 GBytes  9.83 Gbits/sec
[  5]   6.00-7.00   sec  1.15 GBytes  9.89 Gbits/sec
[  5]   7.00-8.00   sec  1.13 GBytes  9.74 Gbits/sec
[  5]   8.00-9.00   sec  1.14 GBytes  9.79 Gbits/sec
[  5]   9.00-10.00  sec  1.15 GBytes  9.89 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.00  sec  11.5 GBytes  9.85 Gbits/sec                  sender
[  5]   0.00-10.00  sec  11.5 GBytes  9.85 Gbits/sec                  receiver

iperf Done.
root@proxmox:~#
Proxmox to napp-it 3 streams
Code:
root@proxmox:~# iperf3 -c 10.10.30.80 -P 3
Connecting to host 10.10.30.80, port 5201
[  5] local 10.10.30.140 port 43156 connected to 10.10.30.80 port 5201
[  7] local 10.10.30.140 port 43158 connected to 10.10.30.80 port 5201
[  9] local 10.10.30.140 port 43160 connected to 10.10.30.80 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   402 MBytes  3.37 Gbits/sec    0    271 KBytes
[  7]   0.00-1.00   sec   368 MBytes  3.09 Gbits/sec    0    271 KBytes
[  9]   0.00-1.00   sec   365 MBytes  3.06 Gbits/sec    0    271 KBytes
[SUM]   0.00-1.00   sec  1.11 GBytes  9.52 Gbits/sec    0
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   1.00-2.00   sec   407 MBytes  3.41 Gbits/sec    0    271 KBytes
[  7]   1.00-2.00   sec   363 MBytes  3.04 Gbits/sec    0    271 KBytes
[  9]   1.00-2.00   sec   363 MBytes  3.05 Gbits/sec    0    271 KBytes
[SUM]   1.00-2.00   sec  1.11 GBytes  9.51 Gbits/sec    0
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   2.00-3.00   sec   399 MBytes  3.35 Gbits/sec    0    271 KBytes
[  7]   2.00-3.00   sec   367 MBytes  3.08 Gbits/sec    0    271 KBytes
[  9]   2.00-3.00   sec   366 MBytes  3.07 Gbits/sec    0    271 KBytes
[SUM]   2.00-3.00   sec  1.11 GBytes  9.50 Gbits/sec    0
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   3.00-4.00   sec   399 MBytes  3.35 Gbits/sec    0    271 KBytes
[  7]   3.00-4.00   sec   369 MBytes  3.10 Gbits/sec    0    271 KBytes
[  9]   3.00-4.00   sec   367 MBytes  3.08 Gbits/sec    0    271 KBytes
[SUM]   3.00-4.00   sec  1.11 GBytes  9.52 Gbits/sec    0
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   4.00-5.00   sec   402 MBytes  3.37 Gbits/sec    0    271 KBytes
[  7]   4.00-5.00   sec   365 MBytes  3.07 Gbits/sec    0    271 KBytes
[  9]   4.00-5.00   sec   368 MBytes  3.09 Gbits/sec    0    271 KBytes
[SUM]   4.00-5.00   sec  1.11 GBytes  9.52 Gbits/sec    0
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   5.00-6.00   sec   404 MBytes  3.39 Gbits/sec    0    271 KBytes
[  7]   5.00-6.00   sec   366 MBytes  3.07 Gbits/sec    0    271 KBytes
[  9]   5.00-6.00   sec   364 MBytes  3.05 Gbits/sec    0    271 KBytes
[SUM]   5.00-6.00   sec  1.11 GBytes  9.51 Gbits/sec    0
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   6.00-7.00   sec   402 MBytes  3.37 Gbits/sec    0    271 KBytes
[  7]   6.00-7.00   sec   367 MBytes  3.08 Gbits/sec    0    271 KBytes
[  9]   6.00-7.00   sec   364 MBytes  3.05 Gbits/sec    0    271 KBytes
[SUM]   6.00-7.00   sec  1.11 GBytes  9.51 Gbits/sec    0
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   7.00-8.00   sec   401 MBytes  3.37 Gbits/sec    0    271 KBytes
[  7]   7.00-8.00   sec   367 MBytes  3.08 Gbits/sec    0    271 KBytes
[  9]   7.00-8.00   sec   364 MBytes  3.06 Gbits/sec    0    271 KBytes
[SUM]   7.00-8.00   sec  1.11 GBytes  9.50 Gbits/sec    0
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   8.00-9.00   sec   401 MBytes  3.36 Gbits/sec    0    271 KBytes
[  7]   8.00-9.00   sec   368 MBytes  3.09 Gbits/sec    0    271 KBytes
[  9]   8.00-9.00   sec   368 MBytes  3.09 Gbits/sec    0    271 KBytes
[SUM]   8.00-9.00   sec  1.11 GBytes  9.53 Gbits/sec    0
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   9.00-10.00  sec   409 MBytes  3.43 Gbits/sec    0    271 KBytes
[  7]   9.00-10.00  sec   350 MBytes  2.93 Gbits/sec    0    271 KBytes
[  9]   9.00-10.00  sec   376 MBytes  3.15 Gbits/sec    0    271 KBytes
[SUM]   9.00-10.00  sec  1.11 GBytes  9.52 Gbits/sec    0
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  3.93 GBytes  3.38 Gbits/sec    0             sender
[  5]   0.00-10.00  sec  3.93 GBytes  3.38 Gbits/sec                  receiver
[  7]   0.00-10.00  sec  3.56 GBytes  3.06 Gbits/sec    0             sender
[  7]   0.00-10.00  sec  3.56 GBytes  3.06 Gbits/sec                  receiver
[  9]   0.00-10.00  sec  3.58 GBytes  3.07 Gbits/sec    0             sender
[  9]   0.00-10.00  sec  3.58 GBytes  3.07 Gbits/sec                  receiver
[SUM]   0.00-10.00  sec  11.1 GBytes  9.51 Gbits/sec    0             sender
[SUM]   0.00-10.00  sec  11.1 GBytes  9.51 Gbits/sec                  receiver

iperf Done.
Thanks for posting your pciutils version!
Code:
kkfong@nappit:~$ sudo pkg info pciutils
             Name: system/pciutils
          Summary: PCI device utilities
      Description: Programs (lspci, setpci) for inspecting and manipulating
                   configuration of PCI devices
            State: Installed
        Publisher: omnios
          Version: 3.7.0
           Branch: 151036.0
   Packaging Date: Thu Oct 29 17:02:26 2020
Last Install Time: Sun Jan 24 09:37:12 2021
             Size: 258.88 kB
             FMRI: pkg://omnios/system/pciutils@3.7.0-151036.0:20201029T170226Z
       Source URL: https://mirrors.omniosce.org/pciutils/pciutils-3.7.0.tar.xz
Time to hunt down your version :)
 

animefans

New Member
Just found out this is a thing in OmniOS :)

Code:
pkg update
After updating to the latest (r151036m), lspci is working:
Code:
kkfong@nappit:~$ sudo lspci -nnv |grep Giga
01:00.0 Ethernet controller [0200]: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection [8086:10fb] (rev 01)
01:00.1 Ethernet controller [0200]: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection [8086:10fb] (rev 01)
05:00.0 Ethernet controller [0200]: Intel Corporation I210 Gigabit Network Connection [8086:1533] (rev 03)
06:00.0 Ethernet controller [0200]: Intel Corporation I210 Gigabit Network Connection [8086:1533] (rev 03)
 

AveryFreeman

consummate homelabber
I learned about that Brocade/Ruckus/CommScope switch from none other than this forum :)
I'll have to check them out. Meanwhile, though - what's with your iperf3 scores? A single stream should hit over 9 Gbps.

This is an E5-1630L v3 server (slow, 1.8GHz lol) running FreeBSD talking to an E3-1230 v2 server running OmniOS 151036. Both are VMs on ESXi 7.0U1. I have the same Supermicro 82599 NICs you do, but single-port. My "10Gb switch" is actually a couple of uplink ports on a Dell PowerConnect 7048P.

Code:
[avery@hedgehoggrifter:~] $ iperf3 -c fabby
Connecting to host fabby, port 5201
[ 5] local 192.168.1.52 port 54910 connected to 192.168.1.71 port 5201
[ ID] Interval Transfer Bandwidth
[ 5] 0.00-1.00 sec 1.01 GBytes 8.68 Gbits/sec
[ 5] 1.00-2.00 sec 1.09 GBytes 9.40 Gbits/sec
[ 5] 2.00-3.00 sec 1.09 GBytes 9.40 Gbits/sec
[ 5] 3.00-4.00 sec 1.10 GBytes 9.41 Gbits/sec
[ 5] 4.00-5.00 sec 1.09 GBytes 9.40 Gbits/sec
[ 5] 5.00-6.00 sec 1.09 GBytes 9.40 Gbits/sec
[ 5] 6.00-7.00 sec 1.09 GBytes 9.40 Gbits/sec
[ 5] 7.00-8.00 sec 1.09 GBytes 9.36 Gbits/sec
[ 5] 8.00-9.00 sec 1.09 GBytes 9.40 Gbits/sec
[ 5] 9.00-10.00 sec 1.10 GBytes 9.41 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth
[ 5] 0.00-10.00 sec 10.9 GBytes 9.33 Gbits/sec sender
[ 5] 0.00-10.00 sec 10.9 GBytes 9.33 Gbits/sec receiver
Thankfully OmniOS is a lot easier to optimize than FreeBSD was. What does this show on your OmniOS system?

Code:
[avery@hedgehoggrifter:~] $ ipadm show-prop | egrep -i 'max|recv|send' | grep tcp  

tcp   max_buf               rw   16777216     16777216     1048576      8192-1073741824
tcp   recv_buf              rw   2097152      2097152      128000       2048-16777216
tcp   send_buf              rw   2097152      2097152      49152        4096-16777216
If you want a quick way to increase these values, I wrote a little script you can run (don't laugh, I'm still learning! ;) ):

Code:
#!/usr/bin/bash
# bump the TCP buffer limits via ipadm (bash, since this uses arrays); sets:
#   max_buf  = 16777216
#   send_buf = 2097152
#   recv_buf = 2097152
prop=(max_buf send_buf recv_buf)
val=(16777216 2097152 2097152)

for i in 0 1 2; do
    ipadm set-prop -p "${prop[$i]}=${val[$i]}" tcp
done
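To run it, save it wherever you like (tcp_bufs.sh is just an example name):
Code:
chmod +x tcp_bufs.sh
sudo ./tcp_bufs.sh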
Hopefully your OmniOS system is the weak link and you won't have to toil over Debian to figure out what's going on w/ your Proxmox box.

Thankfully I didn't have to do any tuning on this Ubuntu VM (it's on the same host as the FreeBSD VM from the last test), so maybe you won't have to do any tuning either:

Code:
avery@drbd01:~$ iperf3 -c hedgehoggrifter
Connecting to host hedgehoggrifter, port 5201
[ 5] local 192.168.1.28 port 50380 connected to 192.168.1.52 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 1004 MBytes 8.42 Gbits/sec 549 950 KBytes
[ 5] 1.00-2.00 sec 1.10 GBytes 9.42 Gbits/sec 0 1.46 MBytes
[ 5] 2.00-3.00 sec 1.09 GBytes 9.35 Gbits/sec 429 1.19 MBytes
[ 5] 3.00-4.00 sec 1.10 GBytes 9.42 Gbits/sec 0 1.63 MBytes
[ 5] 4.00-5.00 sec 1.09 GBytes 9.37 Gbits/sec 1115 1.25 MBytes
[ 5] 5.00-6.00 sec 1.10 GBytes 9.42 Gbits/sec 0 1.66 MBytes
[ 5] 6.00-7.00 sec 1.08 GBytes 9.31 Gbits/sec 478 1.30 MBytes
[ 5] 7.00-8.00 sec 1.09 GBytes 9.32 Gbits/sec 262 1.18 MBytes
[ 5] 8.00-9.00 sec 1.09 GBytes 9.36 Gbits/sec 1092 987 KBytes
[ 5] 9.00-10.00 sec 1.09 GBytes 9.37 Gbits/sec 64 1.20 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 10.8 GBytes 9.28 Gbits/sec 3989 sender
[ 5] 0.00-10.00 sec 10.8 GBytes 9.27 Gbits/sec receiver
I got mine from eBay; the seller claimed it came "free" as part of their new home setup.
Ohhh, that makes sense, because CommScope usually only makes stuff for Comcast (around here - I'm near Seattle).

I will certainly look for a compatible SFP+ module that will work with the MikroTik switch (likely from fs.com).

This is the first time I have heard of RDMA, but you've piqued my interest and I'm gonna look it up :)
fs.com is great! You can find cheap name-brand ones on eBay, too; I've bought Dell and Intel LC transceivers on eBay for less than $10 each.

Proxmox to napp-it 3 streams
Code:
root@proxmox:~# iperf3 -c 10.10.30.80 -P 3
Thanks for your iperf3 test recommendations! I seem to remember seeing a good how-to article on it somewhere, but I usually just run the really basic test... I'm still pretty new to network optimization; in the past I've only run iperf3 to make sure everything is working.
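For my own reference as much as anything, the flags you used break down like this (all standard iperf3 options):
Code:
# -P N: run N parallel streams, -R: reverse direction (server sends), -t: seconds to run
iperf3 -c <server> -P 4 -R -t 30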

OmniOS VM to Ubuntu 20.04 VM:

Code:
[avery@hedgehoggrifter:~] $ iperf3 -c drbd01 -P 3
      
Connecting to host drbd01, port 5201
[ 5] local 192.168.1.52 port 46899 connected to 192.168.1.28 port 5201
[ 7] local 192.168.1.52 port 50993 connected to 192.168.1.28 port 5201
[ 9] local 192.168.1.52 port 44952 connected to 192.168.1.28 port 5201
[ ID] Interval Transfer Bandwidth
[ 5] 0.00-1.00 sec 208 MBytes 1.75 Gbits/sec
[ 7] 0.00-1.00 sec 561 MBytes 4.70 Gbits/sec
[ 9] 0.00-1.00 sec 275 MBytes 2.31 Gbits/sec
[SUM] 0.00-1.00 sec 1.02 GBytes 8.76 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ 5] 1.00-2.00 sec 226 MBytes 1.89 Gbits/sec
[ 7] 1.00-2.00 sec 597 MBytes 5.00 Gbits/sec
[ 9] 1.00-2.00 sec 300 MBytes 2.51 Gbits/sec
[SUM] 1.00-2.00 sec 1.10 GBytes 9.41 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ 5] 2.00-3.00 sec 230 MBytes 1.93 Gbits/sec
[ 7] 2.00-3.00 sec 589 MBytes 4.94 Gbits/sec
[ 9] 2.00-3.00 sec 301 MBytes 2.53 Gbits/sec
[SUM] 2.00-3.00 sec 1.09 GBytes 9.40 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ 5] 3.00-4.00 sec 229 MBytes 1.92 Gbits/sec
[ 7] 3.00-4.00 sec 470 MBytes 3.94 Gbits/sec
[ 9] 3.00-4.00 sec 420 MBytes 3.52 Gbits/sec
[SUM] 3.00-4.00 sec 1.09 GBytes 9.38 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ 5] 4.00-5.00 sec 232 MBytes 1.94 Gbits/sec
[ 7] 4.00-5.00 sec 420 MBytes 3.52 Gbits/sec
[ 9] 4.00-5.00 sec 469 MBytes 3.94 Gbits/sec
[SUM] 4.00-5.00 sec 1.09 GBytes 9.40 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ 5] 5.00-6.00 sec 241 MBytes 2.02 Gbits/sec
[ 7] 5.00-6.00 sec 417 MBytes 3.50 Gbits/sec
[ 9] 5.00-6.00 sec 465 MBytes 3.90 Gbits/sec
[SUM] 5.00-6.00 sec 1.10 GBytes 9.42 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ 5] 6.00-7.00 sec 250 MBytes 2.10 Gbits/sec
[ 7] 6.00-7.00 sec 414 MBytes 3.47 Gbits/sec
[ 9] 6.00-7.00 sec 457 MBytes 3.84 Gbits/sec
[SUM] 6.00-7.00 sec 1.10 GBytes 9.40 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ 5] 7.00-8.00 sec 260 MBytes 2.18 Gbits/sec
[ 7] 7.00-8.00 sec 409 MBytes 3.43 Gbits/sec
[ 9] 7.00-8.00 sec 454 MBytes 3.81 Gbits/sec
[SUM] 7.00-8.00 sec 1.10 GBytes 9.42 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ 5] 8.00-9.00 sec 263 MBytes 2.21 Gbits/sec
[ 7] 8.00-9.00 sec 408 MBytes 3.42 Gbits/sec
[ 9] 8.00-9.00 sec 448 MBytes 3.76 Gbits/sec
[SUM] 8.00-9.00 sec 1.09 GBytes 9.39 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ 5] 9.00-10.00 sec 272 MBytes 2.28 Gbits/sec
[ 7] 9.00-10.00 sec 407 MBytes 3.42 Gbits/sec
[ 9] 9.00-10.00 sec 444 MBytes 3.73 Gbits/sec
[SUM] 9.00-10.00 sec 1.10 GBytes 9.42 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth
[ 5] 0.00-10.00 sec 2.35 GBytes 2.02 Gbits/sec sender
[ 5] 0.00-10.00 sec 2.35 GBytes 2.02 Gbits/sec receiver
[ 7] 0.00-10.00 sec 4.58 GBytes 3.93 Gbits/sec sender
[ 7] 0.00-10.00 sec 4.58 GBytes 3.93 Gbits/sec receiver
[ 9] 0.00-10.00 sec 3.94 GBytes 3.38 Gbits/sec sender
[ 9] 0.00-10.00 sec 3.94 GBytes 3.38 Gbits/sec receiver
[SUM] 0.00-10.00 sec 10.9 GBytes 9.34 Gbits/sec sender
[SUM] 0.00-10.00 sec 10.9 GBytes 9.34 Gbits/sec receiver
OmniOS VM to FreeBSD 12.2-RELEASE VM:

Code:
[avery@hedgehoggrifter:~] $ iperf3 -c fabby -P 3

Connecting to host fabby, port 5201
[ 5] local 192.168.1.52 port 56823 connected to 192.168.1.71 port 5201
[ 7] local 192.168.1.52 port 58230 connected to 192.168.1.71 port 5201
[ 9] local 192.168.1.52 port 63211 connected to 192.168.1.71 port 5201
[ ID] Interval Transfer Bandwidth
[ 5] 0.00-1.00 sec 294 MBytes 2.46 Gbits/sec
[ 7] 0.00-1.00 sec 293 MBytes 2.46 Gbits/sec
[ 9] 0.00-1.00 sec 436 MBytes 3.66 Gbits/sec
[SUM] 0.00-1.00 sec 1024 MBytes 8.58 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ 5] 1.00-2.00 sec 338 MBytes 2.84 Gbits/sec
[ 7] 1.00-2.00 sec 330 MBytes 2.77 Gbits/sec
[ 9] 1.00-2.00 sec 454 MBytes 3.82 Gbits/sec
[SUM] 1.00-2.00 sec 1.10 GBytes 9.43 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ 5] 2.00-3.00 sec 344 MBytes 2.88 Gbits/sec
[ 7] 2.00-3.00 sec 335 MBytes 2.81 Gbits/sec
[ 9] 2.00-3.00 sec 442 MBytes 3.71 Gbits/sec
[SUM] 2.00-3.00 sec 1.09 GBytes 9.40 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ 5] 3.00-4.00 sec 346 MBytes 2.90 Gbits/sec
[ 7] 3.00-4.00 sec 342 MBytes 2.87 Gbits/sec
[ 9] 3.00-4.00 sec 434 MBytes 3.64 Gbits/sec
[SUM] 3.00-4.00 sec 1.09 GBytes 9.40 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ 5] 4.00-5.00 sec 350 MBytes 2.94 Gbits/sec
[ 7] 4.00-5.00 sec 344 MBytes 2.89 Gbits/sec
[ 9] 4.00-5.00 sec 426 MBytes 3.57 Gbits/sec
[SUM] 4.00-5.00 sec 1.09 GBytes 9.39 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ 5] 5.00-6.00 sec 353 MBytes 2.97 Gbits/sec
[ 7] 5.00-6.00 sec 346 MBytes 2.91 Gbits/sec
[ 9] 5.00-6.00 sec 422 MBytes 3.54 Gbits/sec
[SUM] 5.00-6.00 sec 1.10 GBytes 9.41 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ 5] 6.00-7.00 sec 354 MBytes 2.97 Gbits/sec
[ 7] 6.00-7.00 sec 349 MBytes 2.93 Gbits/sec
[ 9] 6.00-7.00 sec 419 MBytes 3.51 Gbits/sec
[SUM] 6.00-7.00 sec 1.10 GBytes 9.42 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ 5] 7.00-8.00 sec 354 MBytes 2.97 Gbits/sec
[ 7] 7.00-8.00 sec 352 MBytes 2.95 Gbits/sec
[ 9] 7.00-8.00 sec 418 MBytes 3.50 Gbits/sec
[SUM] 7.00-8.00 sec 1.10 GBytes 9.42 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ 5] 8.00-9.00 sec 353 MBytes 2.97 Gbits/sec
[ 7] 8.00-9.00 sec 350 MBytes 2.94 Gbits/sec
[ 9] 8.00-9.00 sec 415 MBytes 3.49 Gbits/sec
[SUM] 8.00-9.00 sec 1.09 GBytes 9.39 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ 5] 9.00-10.00 sec 356 MBytes 2.99 Gbits/sec
[ 7] 9.00-10.00 sec 354 MBytes 2.97 Gbits/sec
[ 9] 9.00-10.00 sec 414 MBytes 3.47 Gbits/sec
[SUM] 9.00-10.00 sec 1.10 GBytes 9.43 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth
[ 5] 0.00-10.00 sec 3.36 GBytes 2.89 Gbits/sec sender
[ 5] 0.00-10.00 sec 3.36 GBytes 2.89 Gbits/sec receiver
[ 7] 0.00-10.00 sec 3.32 GBytes 2.85 Gbits/sec sender
[ 7] 0.00-10.00 sec 3.32 GBytes 2.85 Gbits/sec receiver
[ 9] 0.00-10.00 sec 4.18 GBytes 3.59 Gbits/sec sender
[ 9] 0.00-10.00 sec 4.18 GBytes 3.59 Gbits/sec receiver
[SUM] 0.00-10.00 sec 10.9 GBytes 9.33 Gbits/sec sender
[SUM] 0.00-10.00 sec 10.9 GBytes 9.32 Gbits/sec receiver
TBH I'm not really sure how to interpret that yet, I'll have to do some reading...

Thanks for posting your pciutils version!
Code:
kkfong@nappit:~$ sudo pkg info pciutils
          Version: 3.7.0
Time to hunt down your version :)
Isn't that the same version?
 

animefans

New Member
snipped for brevity
I already tried your max/send/receive settings, and somehow they're not working for me.
I need to find more time to fiddle with it.
My box is down for now, so I can't provide any data...

As far as the pciutils version goes, yours is from the latest branch (note Branch 151036.1):
Code:
[root@hedgehoggrifter:~] $ pkg info pciutils
             Name: system/pciutils
          Summary: PCI device utilities
      Description: Programs (lspci, setpci) for inspecting and manipulating
                   configuration of PCI devices
            State: Installed
        Publisher: omnios
          Version: 3.7.0
           Branch: 151036.1
   Packaging Date: December 17, 2020 at 09:09:10 PM
Last Install Time: January 27, 2021 at 09:31:52 PM
             Size: 223.40 kB
             FMRI: pkg://omnios/system/pciutils@3.7.0-151036.1:20201217T210910Z
       Source URL: https://mirrors.omniosce.org/pciutils/pciutils-3.7.0.tar.xz
while mine was from a clean ISO install (branch 151036.0):
Code:
kkfong@nappit:~$ sudo pkg info pciutils
             Name: system/pciutils
          Summary: PCI device utilities
      Description: Programs (lspci, setpci) for inspecting and manipulating
                   configuration of PCI devices
            State: Installed
        Publisher: omnios
          Version: 3.7.0
           Branch: 151036.0
   Packaging Date: Thu Oct 29 17:02:26 2020
Last Install Time: Sun Jan 24 09:37:12 2021
             Size: 258.88 kB
             FMRI: pkg://omnios/system/pciutils@3.7.0-151036.0:20201029T170226Z
       Source URL: https://mirrors.omniosce.org/pciutils/pciutils-3.7.0.tar.xz
After updating to the latest, lspci works.
 

AveryFreeman

consummate homelabber
I already tried your max/send/receive settings, and somehow they're not working for me.
I need to find more time to fiddle with it.
My box is down for now, so I can't provide any data...
...
After updating to the latest, lspci works.
Well I'm glad lspci worked out for you!

As far as those ipadm values go, I was adjusting my vmxnet3 controller (since my cards are connected to the host), so the values could certainly vary.
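When your box is back up, ipadm show-prop should at least tell you whether the new values stuck (compare the CURRENT and PERSISTENT columns):
Code:
ipadm show-prop -p max_buf,recv_buf,send_buf tcp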

Here's someone working with bare-metal Solaris: NFS mount options

Solaris

Tune network

Code:
ipadm set-prop -p recv_buf=400000 tcp
ipadm set-prop -p send_buf=400000 tcp
ipadm set-prop -p max_buf=2097152 tcp
ipadm set-prop -p _cwnd_max=2097152 tcp
ipadm set-prop -p _conn_req_max_q=512 tcp
Add the following options to /etc/system:

Code:
* - NFSSRV MODULE
* Controls the number of TCP connections that the NFS client
* uses when communicating with each NFS server.
* https://docs.oracle.com/cd/E19683-01/806-7009/chapter3-26/index.html
set rpcmod:clnt_max_conns = 8
set nfs:nfs3_max_threads=256
set nfs:nfs4_max_threads=256

If you use the ixgbe driver, also tune it in /etc/driver/drv/ixgbe.conf:

Code:
#
# -------------------- Flow Control --------------------
# flow_control
# Ethernet flow control
# Allowed values: 0 - Disable
# 1 - Receive only
# 2 - Transmit only
# 3 - Receive and transmit
# default value: 0
#
flow_control = 0;

#
# -------------------- Transmit/Receive Queues --------------------
#
# tx_ring_size
# The number of the transmit descriptors per transmit queue
# Allowed values: 64 - 4096
# Default value: 1024
tx_ring_size = 4096;

#
# rx_ring_size
# The number of the receive descriptors per receive queue
# Allowed values: 64 - 4096
# Default value: 1024
rx_ring_size = 4096;

# https://docs.oracle.com/cd/E36784_01/html/E36845/gipaf.html#SOLTUNEPARAMREFgikws
# Description
# This parameter controls the number of transmit queues that are used by the ixgbe network driver.
tx_queue_number = 16;
rx_queue_number = 16;

# Description
# This parameter controls the maximum number of receive queue buffer descriptors per interrupt that are used by the ixgbe network driver.
# You can increase the number of receive queue buffer descriptors by increasing the value of this parameter
rx_limit_per_intr = 1024;
tx_copy_threshold = 1024;
rx_copy_threshold = 512;

#
# mr_enable
# Enable multiple tx queues and rx queues
# Allowed values: 0 - 1
# Default value: 1
# https://docs.oracle.com/cd/E19120-01/open.solaris/819-2724/gipao/index.html
mr_enable = 0;

#
# rx_group_number
# The number of the receive groups
# Allowed values: 1 - 16 (for Intel 82598 10Gb ethernet controller)
# Allowed values: 1 - 64 (for Intel 82599/X540 10Gb ethernet controller)
# Default value: 1
rx_group_number = 8;
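One thing I noticed: that write-up edits /etc/driver/drv/ixgbe.conf rather than /kernel/drv/ixgbe.conf like you did. My understanding is that /etc/driver/drv is where local overrides are supposed to live on illumos so they survive OS updates, so something like this might be the cleaner approach (unverified on my end):
Code:
# keep local driver tweaks in /etc/driver/drv so a pkg update doesn't clobber them
sudo cp /kernel/drv/ixgbe.conf /etc/driver/drv/ixgbe.conf
# then edit /etc/driver/drv/ixgbe.conf and reboot so the driver picks it up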
Perhaps this could be helpful, too: [OmniOS-discuss] [developer] Re: The ixgbe driver, Lindsay Lohan, and the Greek economy

Looking forward to some new iperf3 test results :)