I am trying to use SR-IOV Passthrough for a Mellanox ConnectX-4 on an ESXi 7.0.3 host.
When starting the VM, I see the following error:
Dec 7 21:43:02 nappit2 mlxcx: [ID 989156 kern.warning] WARNING: mlxcx1: command MLXCX_OP_CREATE_EQ 0x301 failed with status code MLXCX_CMD_R_BAD_PARAM (0x3)
No network adapters are created, and 'dladm show-link' shows nothing.
I believe the hypervisor side is set up correctly: I can see 8 virtual functions for the NIC, and SR-IOV passthrough works fine on other VMs such as Ubuntu.
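In case it helps to narrow this down, here is the kind of diagnostics I'd gather on the OmniOS guest (a sketch; the driver name mlxcx comes from the log line above, everything else is standard illumos tooling):

```shell
# Check whether the mlxcx driver module loaded at all.
pfexec modinfo | grep mlxcx

# Physical datalinks as the OS sees them (empty output matches the symptom).
dladm show-phys

# Pull every mlxcx message from the system log to see the full attach
# sequence leading up to the MLXCX_CMD_R_BAD_PARAM error.
grep mlxcx /var/adm/messages
```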
Hardware
VMware ESXi, 7.0.3, 18644231
SuperStorage 6048R-E1CR36N
Supermicro X10DRi-T4+
Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz
256GB DDR4 ECC 2400MHz
Mellanox Technologies ConnectX-4 Lx EN NIC; 50GbE; single-port QSFP28; (MCX4131A-GCA), SR-IOV Enabled
Steps I was using:
- Edit VM
- Add Network Adapter
- Select VM40 Network (this has the vSwitch with the Mellanox 40GbE NIC assigned)
- Change the adapter type to SR-IOV Passthrough
- Leave Physical Function set to Automatic
- OK, boot the VM.
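For reference, the hypervisor side can be sanity-checked from an ESXi SSH session with something like the following (a sketch; "vmnic4" is a placeholder for whichever uplink actually backs the VM40 Network vSwitch):

```shell
# Confirm the PF is recognized as SR-IOV capable and VFs are enabled.
esxcli network sriovnic list

# List the virtual functions on the physical function backing the vSwitch.
# NOTE: substitute your actual uplink name for vmnic4.
esxcli network sriovnic vf list -n vmnic4

# Double-check the Mellanox PF and VFs at the PCI level.
esxcli hardware pci list | grep -i mellanox
```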
OmniOS version:
root@nappit2:~# cat /etc/os-release
NAME="OmniOS"
PRETTY_NAME="OmniOS Community Edition v11 r151034e"
CPE_NAME="cpe:/omnioscemnios:11:151034:5"
ID=omnios
VERSION=r151034e
VERSION_ID=r151034e
BUILD_ID=151034.5.2020.06.01
HOME_URL="https://omniosce.org/"
SUPPORT_URL="https://omniosce.org/"
BUG_REPORT_URL="https://github.com/omniosorg/omnios-build/issues/new"
BTW, some background: I did try just using VMXNET 3 instead. When running an iperf3 test from an Ubuntu VM client (which uses SR-IOV passthrough on the same adapter) to the napp-it iperf3 server (both configured with 8 vCPUs each), the Ubuntu host sits below 10% CPU while the OmniOS host hits 100%, and the transfer tops out at 10.6Gbps. The NIC is connected to a Brocade ICX6650 switch. I suspect the pegged CPU on the OmniOS host is entirely due to using the virtual NIC, which is why I'm trying to switch it to SR-IOV passthrough, but I'm running up against the errors above in the OS.
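For completeness, the iperf3 test above was along these lines (a sketch; the server IP is a placeholder, and -P 8 matches the 8 vCPUs per VM):

```shell
# On the napp-it/OmniOS side, run the server:
iperf3 -s

# On the Ubuntu client, 8 parallel streams for a 30-second run
# (replace 192.168.1.50 with the OmniOS host's address):
iperf3 -c 192.168.1.50 -P 8 -t 30
```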
Any help?