Intel's x8xx `ice` driver on Linux is also kinda problematic at the moment: it's much less stable than Mellanox's, and Intel has laid off at least some of their driver people. Other than the 40G thing, though, it's nice enough hardware, and it's happy being used as 4x10 or 4x25, unlike Mellanox (pre ConnectX-8).
Broadcom is far less mature, at least on the timing side -- none of the Broadcom NICs that I've seen handle PTP well, or really even do a good job of timestamping NTP packets. I'm not sure how good they are at RDMA, either. I get the impression that they're really just NICs for moving packets quickly, without much in the way of offloading, time management, etc.
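If you want to check this for yourself, `ethtool -T <iface>` reports a NIC's timestamping capabilities and whether it exposes a PTP hardware clock. Here's a rough sketch of checking for hardware timestamping from Python -- the interface name `eth0` is just an example, and the parsing assumes the output format of common `ethtool` versions:

```python
import re

def supports_hw_timestamping(ethtool_t_output: str) -> bool:
    """Return True if `ethtool -T <iface>` output shows the NIC can
    hardware-timestamp both TX and RX and exposes a PTP hardware clock."""
    # A usable PHC shows up as "PTP Hardware Clock: <index>"; NICs
    # without one report "PTP Hardware Clock: none".
    has_phc = re.search(r"PTP Hardware Clock:\s*\d+", ethtool_t_output) is not None
    return ("hardware-transmit" in ethtool_t_output
            and "hardware-receive" in ethtool_t_output
            and has_phc)

# To query a live interface (name is an example, not a given):
# import subprocess
# out = subprocess.run(["ethtool", "-T", "eth0"],
#                      capture_output=True, text=True).stdout
# print(supports_hw_timestamping(out))
```

A NIC that passes this check can feed hardware timestamps to `ptp4l`/`chrony`; one that only lists `software-transmit`/`software-receive` is stuck with kernel timestamps.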
For (Q)SFP28 and below, it's hard to beat used Mellanox cards. For (Q)SFP56 and higher, they get weird; even though the ConnectX-6 generation is *technically* (Q)SFP56 and the ConnectX-7 is (Q)SFP112, there's a *huge* variation in the speeds that they actually support. For instance, a lot of CX7s have QSFP112 cages but don't support 400GbE. The MCX713106AS-CEAT is a dual-port QSFP112 card, but only supports 100 GbE (?!). I don't think they make any QSFP-DD NICs, but they do have OSFP models, so you could use a QSFP(56)-DD to OSFP(56) DAC cable.
However, it's tricky finding *specific* models of any of these on eBay. There aren't a ton of CX7s anyway, and finding one of the few 400G-capable models will probably require a saved search and then careful proofreading of the NIC labels.
Personally, I'm sticking with a mix of 10G, 40G, and 100G (QSFP28) for now, and probably won't go any faster until SFP112 becomes dominant. That'll get 100G in a small form factor; 10 GB/sec (or 20 GB/sec for dual-attached systems) is probably fast enough for my foreseeable needs anyway.
Looking at eBay at least, I don't think anything faster than QSFP28 is really *standard* in any way -- everything is bought for a specific deployment, with NICs, DACs/optics, and switches all bought together. So no one *really* cares about QSFP112 vs QSFP(56)-DD vs OSFP(56), because they're buying a whole datacenter at once, and it just needs to work together. They just buy the cheapest thing that passes their qualification tests and is available in quantity. So there's a lot more variation than we're used to seeing, because there's no real drive to standardize on a single interface type at a time.

That's one of the two things that's going to make used network gear *so* much fun when this all hits eBay in a few years. The other fun part is the power draw of this generation -- it's going *up*, not down. We've gone from 100W to 200W to 500-800W 1U switches, and I don't know that more than a handful of people are *ever* going to want any of the current round of ToR 400G+ switches at home.