Here are the specifics of my order, with the dated invoice.
I first ran OPNsense bare metal, and then virtualized it.
Performance was identical, plus I gained the ability to clone the OPNsense VM and manage it more easily with PVE.
Keeping PVE up to date is a bit of a dance, since I need the OPNsense VM up for internet access...
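For anyone in the same spot, a sketch of one safe ordering for that dance (VM id 100 is just a placeholder for the OPNsense guest, not my actual id):

```shell
# Snapshot the OPNsense VM first so routing can be rolled back
# if the host upgrade goes sideways.
qm snapshot 100 pre-pve-upgrade

# Upgrade the PVE host itself. Do this from the host console, not
# over SSH through the firewall VM, in case networking drops mid-upgrade.
apt update && apt dist-upgrade

# If the OPNsense VM misbehaves afterwards, roll it back:
# qm rollback 100 pre-pve-upgrade
```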
I have been running this exact configuration:
Proxmox VE 8 with OPNsense virtualized.
I pass through both SFP+ ports and get the full ~9.7 Gbit/s expected from a 10 GbE link (measured via iperf3).
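For reference, the measurement is just plain iperf3 between the two ends of the link; a minimal sketch (the IP address and flags are placeholders, not my exact invocation):

```shell
# On the machine at the far end of the SFP+ link:
iperf3 -s

# From the OPNsense VM (192.168.1.10 stands in for the server's address):
iperf3 -c 192.168.1.10 -t 30        # single TCP stream
iperf3 -c 192.168.1.10 -t 30 -P 4   # 4 parallel streams, closer to line rate

# A healthy 10 GbE link typically reports ~9.3-9.7 Gbits/sec after
# TCP/IP overhead.
```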
The only issue I had was that my multi-gig connection into the house, via fiber, uses a very old Cat5e cable that was not...
I have been studying the source code changes from 2.19.2 to 2.20.1.
I know nothing about writing kernel drivers, and even less about the specifics of these devices, but it is pretty clear that a bunch of error checking and guard conditions have been added to the code base, with this design...
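If anyone wants to eyeball the same changes, a sketch of how I compare the two releases (the tarball and file names here are illustrative; Realtek's actual archive names differ):

```shell
# Unpack both driver releases side by side
tar xf r8152-2.19.2.tar.bz2
tar xf r8152-2.20.1.tar.bz2

# Unified diff of the main driver source; the added NULL checks and
# early-return guard conditions show up as '+' lines
diff -u r8152-2.19.2/r8152.c r8152-2.20.1/r8152.c | less
```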
The problem, again, is that with the Realtek code, if you host an NFS share (either directly on this adapter, or possibly anywhere on the system), the Proxmox kernel (either 6.8 or 6.11) will slow down, hang, and then crash.
I spent weeks trying to isolate this, and it is indeed specific to the combination of NFS...
Just as a very important caveat:
I got the above Linux drivers built and working on a Proxmox VE 8 cluster, and everything worked well, except for a huge issue:
If you host an NFS share on a system with one of these USB network adapters, the Linux kernel will get wedged, eventually hang, and...