I have a couple of questions for those of you running Proxmox on top of these devices. Do you pass the NIC interfaces through to software BSD router VMs, or create a Linux bridge in Proxmox and "serve" that to the VM as virtio? I am noticing significant CPU load while downloading large files at full bandwidth (currently "only" 200 Mbps, over HTTPS/FTP).
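For context, this is roughly what the bridge + virtio alternative I'm asking about would look like, as far as I understand it (the NIC name enp2s0 and the MAC are just placeholders for illustration):

Code:
# /etc/network/interfaces on the Proxmox host: plain Linux bridge on one NIC
auto vmbr1
iface vmbr1 inet manual
        bridge-ports enp2s0
        bridge-stp off
        bridge-fd 0

# /etc/pve/qemu-server/<vmid>.conf: the bridge is served to the VM as virtio
net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr1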
My setup:
Topton P7505 with 4x Intel i225-V B3 NICs
Proxmox VE 7.2-11 with kernel 5.19.7-2
VT-d enabled in the BIOS, intel_iommu enabled in GRUB (snippet below)
running dmesg | grep -e DMAR -e IOMMU shows the IOMMU is enabled
OPNsense VM with two physical NICs passed through for LAN and WAN.
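For completeness, this is the relevant part of my IOMMU setup, in case I've missed something there:

Code:
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"

# apply the change and verify after a reboot
update-grub
dmesg | grep -e DMAR -e IOMMU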
CPU load in Proxmox idles at ~3%; with a "normal" HTTPS download active it spikes to 20-24%. The OPNsense VM itself has 2 vCPUs assigned and runs at 41-44% CPU. Usual firewall rules, and no Intrusion Detection / Suricata running.
I've tried running iperf3 with the server on the OPNsense instance (started from its console) and the client on another Proxmox host. I get "only" 715 Mbit/s of throughput, while the OPNsense VM sits at 90% CPU for the whole duration of the test. In the OPNsense interface settings I've left hardware CRC, TSO and LRO enabled.
Code:
root@PvE:~# iperf3 -c 192.168.50.1
Connecting to host 192.168.50.1, port 5201
[  5] local 192.168.50.251 port 38100 connected to 192.168.50.1 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   110 MBytes   919 Mbits/sec    0    904 KBytes
[  5]   1.00-2.00   sec  80.0 MBytes   671 Mbits/sec   56   1.19 MBytes
[  5]   2.00-3.00   sec  86.2 MBytes   724 Mbits/sec    0   1.30 MBytes
[  5]   3.00-4.00   sec  83.8 MBytes   703 Mbits/sec    0   1.38 MBytes
[  5]   4.00-5.00   sec  81.2 MBytes   682 Mbits/sec    0   1.44 MBytes
[  5]   5.00-6.00   sec  81.2 MBytes   682 Mbits/sec    2   1.07 MBytes
[  5]   6.00-7.00   sec  87.5 MBytes   734 Mbits/sec    0   1.14 MBytes
[  5]   7.00-8.00   sec  77.5 MBytes   650 Mbits/sec    0   1.18 MBytes
[  5]   8.00-9.00   sec  78.8 MBytes   661 Mbits/sec    0   1.21 MBytes
[  5]   9.00-10.00  sec  86.2 MBytes   724 Mbits/sec    0   1.23 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec   852 MBytes   715 Mbits/sec   58             sender
[  5]   0.00-10.00  sec   849 MBytes   712 Mbits/sec                  receiver
iperf Done.
Out of curiosity I've hosted the iperf3 server on the main node instead of the OPNsense VM, and I get the full 935 Mbit/s expected from the NIC running at gigabit speed.
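As a next diagnostic I plan to rerun the test with parallel streams and in the reverse direction (standard iperf3 flags), to see whether a single TCP stream pinning one vCPU is the bottleneck:

Code:
# 4 parallel streams instead of one
iperf3 -c 192.168.50.1 -P 4

# reverse mode: the OPNsense side sends, the client receives
iperf3 -c 192.168.50.1 -R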
Any idea how I could improve the performance of the system? I worry that once I get a 1 Gbit line I'll be capped by the system itself.
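One thing I'm planning to test is disabling the hardware offloads mentioned above. As far as I understand, on the OPNsense/FreeBSD side they map to the ifconfig flags below; igc0 is just my guess at the interface name for a passed-through i225-V, and OPNsense normally manages these via Interfaces > Settings, so treat this as a quick test rather than the persistent way to set it:

Code:
# temporarily disable checksum offload, TSO and LRO on one interface
ifconfig igc0 -rxcsum -txcsum -tso -lro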