I've done a lot of experimenting with fast networking over the years, and I have come to the conclusion that it rarely works the way you'd like.
With NVMe drives, PCIe, RAM, and CPUs being very fast these days, you'd expect the network interface to be the bottleneck, but that is rarely the case.
The thing is, I couldn't tell you what actually is the bottleneck either.
Connections faster than 10Gbit (25Gbit, 40Gbit, 100Gbit, etc.) usually work pretty well if you saturate them with large numbers of requests from many clients, like a server application would. But single-connection results, like transferring a file from a client on an otherwise idle connection, seem to top out somewhere between 1.2 and 1.8 GB/s no matter what you do, and even getting that level of performance can be tricky.
These speeds are obviously far below the available PCIe, RAM, and local NVMe drive bandwidth, and looking at CPU load during file transfers, nothing is pinning a single core or anything like that. Yet something is holding it back.
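One way to sanity-check this, and take the disks and filesystems out of the picture entirely, is a memory-to-memory throughput test between two machines on the link. Below is a rough Python sketch of what such a test looks like; the port, chunk size, and 10 GiB transfer size are arbitrary placeholders, and a purpose-built tool like iperf3 does the same job more carefully.

```python
import socket, sys, time

PORT = 5201                     # arbitrary test port (placeholder)
CHUNK = 4 * 1024 * 1024         # 4 MiB per send/recv call
TOTAL = 10 * 1024 ** 3          # move 10 GiB of zeros, memory to memory

def server():
    # Accept one connection and time how fast bytes arrive.
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            received, start = 0, time.perf_counter()
            while True:
                buf = conn.recv(CHUNK)
                if not buf:
                    break
                received += len(buf)
            elapsed = time.perf_counter() - start
            print(f"received {received / 1e9:.1f} GB "
                  f"at {received / elapsed / 1e9:.2f} GB/s")

def client(host):
    # Stream a fixed amount of zero-filled data as fast as possible.
    payload = bytes(CHUNK)
    with socket.create_connection((host, PORT)) as conn:
        sent = 0
        while sent < TOTAL:
            conn.sendall(payload)
            sent += len(payload)

if __name__ == "__main__":
    # usage: python tcptest.py server   |   python tcptest.py client <host>
    server() if sys.argv[1] == "server" else client(sys.argv[2])
```

Running several of these streams in parallel (which is what iperf3's -P option does) is how you separate "the link can't go faster" from "one stream can't go faster", and in my experience it is almost always the latter.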
And the same was true with 10Gbit not that many years ago: impossible to max out over a single connection, for no apparent reason.
I get the impression the limit lies in software: some strange behavior resulting from thread locks, wait states, or something like that. As newer, faster network interfaces become mainstream in clients (as 10Gbit now has), these software inefficiencies quietly get cleaned up by developers, and we can use the full bandwidth, or close to it, over a single connection. But if you do anything even slightly more exotic than that, you can't. At least that is my impression.
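To give one concrete example of the kind of software limit I mean (a guess on my part, not a diagnosis): a single TCP stream can only keep its window's worth of bytes in flight at once, so its throughput is capped at roughly the window size divided by the round-trip time, no matter how fast the link is. A quick back-of-the-envelope check, with an assumed LAN round-trip time of 0.5 ms:

```python
# Back-of-the-envelope: when does the TCP window, rather than the link,
# become the limit? The 0.5 ms RTT is an assumption, not a measurement.
link_gbit = 40                      # nominal link speed
rtt_s = 0.0005                      # assumed round-trip time in seconds

# Bandwidth-delay product: bytes that must be in flight to fill the pipe.
bdp_bytes = (link_gbit * 1e9 / 8) * rtt_s
print(f"Need ~{bdp_bytes / 1e6:.1f} MB in flight to fill a {link_gbit}Gbit link")

# Conversely: what a given effective window caps a single stream at.
window_bytes = 1 * 1024 ** 2        # e.g. a 1 MiB effective window
print(f"A {window_bytes / 1e6:.1f} MB window caps one stream at "
      f"~{window_bytes / rtt_s / 1e9:.1f} GB/s")
```

With those made-up numbers, a 1 MiB effective window caps a stream at about 2 GB/s, which lands suspiciously close to where my transfers top out. But it could just as well be locks or wait states inside the transfer software, which is why I'm not claiming more than a guess.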
I've been using 10Gbit networking at home since 2014, when this was highly exotic and required buying decommissioned fiber adapters from server pulls on eBay. In the beginning the experience was pretty so-so, but over time it got better, to the point where for the last few years I have had no problem maxing them out and transferring a file from my local NVMe drive to my NAS at up to 1.2 GB/s.
Encouraged by this, I thought it might be time for an upgrade. When I saw some used Intel 40Gbit QSFP+ adapters pop up on eBay at a price too good to pass up, I went for it.
At 40Gbit I see a small improvement (1.6-1.8 GB/s, rarely up to 2 GB/s, depending on what I am transferring), but that's about as high as it gets, and most of the time I don't even get that. For reference, 40Gbit is 5 GB/s of raw line rate, so even the best case is well under half of what the link should carry.
This seems to be a pretty universal experience for anyone who tries exotic, server-grade fast networking on clients; at least for me it hasn't mattered whether I'm on Linux or Windows. I guess we just have to wait for faster networking to become mainstream before we see real single-link performance improvements. Until then, exotic fast networking products only make sense in servers that handle large numbers of concurrent connections. Which, after all, is what they were designed for, so it kind of makes sense.