Very happy to see more detailed networking coverage!
One thing I would suggest is looking at the tests that vendors find important. Some of that is mostly for dialing in DPDK, but other parts, like RFC 2544 throughput/latency testing, are staples that vendors want since they're a standard setup that makes hardware easy to compare. In particular, some NICs take a noticeable performance hit under heavy load with SR-IOV or vDPA, and having public data about that would be valuable.
Also, just be warned that iPerf will start to run into issues at higher speeds. With TCP window scaling the receive window tops out just under 1 GiB, so a single flow is capped at roughly one window per RTT; we probably only have 5-7 more years of being able to push single-flow TCP at line rate before that limit renders it impossible. UDP mode will help, but you'll need tests set up for it, and you'll want to run them along the way so you know when you've hit a network stack limitation rather than a hardware one.
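To make the window-scaling ceiling concrete, here's a back-of-the-envelope sketch: with the maximum RFC 7323 scale shift of 14, the window maxes out at 65535 * 2^14 bytes, and single-flow throughput can't exceed one full window per round trip. The RTT values below are just illustrative.

```python
# Max TCP receive window with window scaling (RFC 7323): 65535 * 2**14,
# just under 1 GiB.
MAX_WINDOW_BYTES = 65_535 * 2**14

def max_single_flow_gbps(rtt_seconds: float) -> float:
    """Throughput ceiling for one TCP flow: one full window per round trip."""
    return MAX_WINDOW_BYTES * 8 / rtt_seconds / 1e9

# Even a modest 20 ms RTT caps a single flow around 430 Gbps,
# no matter how fast the link is.
for rtt_ms in (1, 5, 20):
    print(f"RTT {rtt_ms:>2} ms -> {max_single_flow_gbps(rtt_ms / 1000):,.0f} Gbps max")
```

So on short-RTT lab links the limit is still years away, but at WAN RTTs a single flow already can't fill a 800G port.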
It would also be neat if you could capture what offloads Linux says the NIC offers as far as Ethernet is concerned. That data isn't really available anywhere and stuff that "just works" with the kernel network stack and any application is very valuable.
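On Linux the per-feature offload state is what `ethtool -k <iface>` prints, so capturing it is mostly a parsing exercise. A minimal sketch, assuming that text format (the sample output below is illustrative, not from real hardware; on a live system you'd feed in the stdout of `ethtool -k eth0`):

```python
# Parse `ethtool -k` style output into a machine-readable dict.
# SAMPLE is a hand-written illustrative snippet, not real hardware data.
SAMPLE = """\
Features for eth0:
rx-checksumming: on
tx-checksumming: on
tcp-segmentation-offload: on
generic-receive-offload: on
rx-gro-hw: off [fixed]
tls-hw-tx-offload: off [fixed]
"""

def parse_offloads(text: str) -> dict:
    """Return {feature: {"enabled": bool, "fixed": bool}}."""
    features = {}
    for line in text.splitlines():
        if ":" not in line or line.startswith("Features"):
            continue
        name, _, state = line.partition(":")
        state = state.strip()
        features[name.strip()] = {
            "enabled": state.startswith("on"),
            "fixed": "[fixed]" in state,  # "[fixed]" means the driver can't toggle it
        }
    return features

offloads = parse_offloads(SAMPLE)
print(offloads["tls-hw-tx-offload"])
```

Dumping that per NIC/driver/kernel combo into a public table would cover exactly the "what just works with the kernel stack" question.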
A few other tests I want to see:
* High-rate small packets vs. hardware offloads. Particularly relevant are stateful offloads like flow rate limiting, protocol (TCP, QUIC) offloading/acceleration, stateful firewall offloads, TLS/MACsec/IPsec, etc.
* Line-rate multicast of small IPv6 packets. I have seen many a switch, NIC, and appliance crash and burn under traffic that some databases will actually generate on an IPv6 network under heavy load.
* Windows vs. Linux. I don't think some people understand just how much slower the Windows network stack is at some tasks. For those of you who remember Barefoot Networks' proto-DPU: running Linux on its tiny ARM cores and doing all of the network processing there was faster than doing it on fairly high-end consumer hardware. The situation has not improved as much as it should have, which is also why Windows leans so heavily on RDMA.
* P2P DMA, which is not really tested anywhere public outside of microbenchmarks.
* Distributed databases, especially distributed SQL such as CockroachDB or YugabyteDB. Distributed joins are both very high bandwidth and latency sensitive so the features provided by a NIC matter a lot.
* Whether cryptographic offloads can actually do line rate; same for compression offloads if the NIC has them, and for storage offloads.
* The latency of getting a packet in and out of the system, using DPDK testpmd's io forwarding mode for very low overhead, since NICs have wildly varying latency characteristics.
* For anything with tc-flower or OVS offloads, shuffling data between containers/vms on the same host is an important but poorly-covered workload. Some NICs will do far in excess of their total port bandwidth for this (esp. Mellanox/Nvidia with SR-IOV), while others can't even do their total port bandwidth.
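For the testpmd latency point above, a rough sketch of the setup: io forwarding just moves packets between two ports with minimal per-packet work, so the round-trip latency seen by an external traffic generator is dominated by the NIC and PCIe path. The core list and PCI addresses below are placeholders for whatever your system has.

```shell
# Sketch only: bind two ports to DPDK, run io forwarding on dedicated cores,
# and measure latency from an external generator looped through them.
# -l: lcores to use; -n: memory channels; -a: allowlisted PCI devices.
dpdk-testpmd -l 0-2 -n 4 -a 0000:01:00.0 -a 0000:01:00.1 -- \
    --forward-mode=io --nb-cores=2 --rxq=1 --txq=1
```

The latency numbers themselves have to come from the generator side (hardware timestamps if you have them), since io mode deliberately does no accounting per packet.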