It's simply a matter of one man's gold vs another man's junk.
It all depends on what people are going to be using 100G for. In most organised labs, whether home, research, or SMB, I can only imagine it serving one purpose: a data pipe from a storage box to an app box. For those requirements, running a fabric without RDMA and additional offloading tech at 100G is seriously taxing on the CPUs; hell, even 40G pushes CPUs.
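To put a rough number on "taxing": there's an old, very approximate networking rule of thumb of about 1 Hz of CPU per bit/s of non-offloaded TCP throughput. The figures below (line rate, assumed core clock) are illustrative, not measurements:

```python
# Back-of-envelope CPU cost of pushing 100G TCP without offloads,
# using the old ~1 GHz per 1 Gbit/s rule of thumb (very rough).
line_rate_gbps = 100        # 100GbE link
core_ghz = 3.0              # assumed clock of one modern core
cpu_ghz_needed = line_rate_gbps * 1.0   # ~1 GHz per Gbit/s heuristic
cores = cpu_ghz_needed / core_ghz
print(f"~{cores:.0f} cores just to move {line_rate_gbps}G of TCP")
# -> ~33 cores just to move 100G of TCP
```

The heuristic predates modern kernels and is pessimistic in places, but it shows the order of magnitude: without RDMA/checksum/segmentation offloads, line-rate 100G can eat most of a socket.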
The point is, as you go up in bandwidth you need to offload as much of the work as possible, unless you're actually dedicating CPUs to it, which the average small lab doesn't really want (cost, power, etc.). And with a lot of these 'rare' cards, even though there is basic support in Linux, not all the offloading features actually work. Each one needs investigation, and even then it's not worth the hassle going forward: the tech was thrown off the cliff a long while back, and continued support for the driver is always in question. To drive the point home: take eight x4-lane PCIe NVMe drives and slap a 100G crap NIC in front of them, good luck!

This is why Nvidia bought Mellanox, and it's what that whole article Patrick did on the subject was about: the future is abstracting the flows, not consolidating them. Hopefully one day CPUs will only do what they were meant to do, executing the algorithms for the apps, and leave the rest of the workload to devices outside the CPU. Obviously companies like Intel never really liked this idea back in the day, because it means losing more control of the platform ($$$), but even they are now facing the music. The market is wiser today, and the tech is going to leapfrog the limits of the CPU with or without the CPU manufacturers' support; the former is inevitable, especially with programmes like Gen-Z going on out there.
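The eight-NVMe example checks out on paper. A minimal sketch, assuming PCIe 3.0 drives (8 GT/s per lane with 128b/130b encoding, so roughly 7.88 Gbit/s usable per lane):

```python
# Aggregate NVMe bandwidth vs. a 100GbE link (nominal PCIe 3.0 figures).
LANE_GBPS = 7.88                 # PCIe 3.0 lane: 8 GT/s, 128b/130b encoding
drive_gbps = LANE_GBPS * 4       # one x4 NVMe drive ~= 31.5 Gbit/s
total_gbps = drive_gbps * 8      # eight drives ~= 252 Gbit/s
link_gbps = 100.0                # the 100GbE NIC

print(f"aggregate NVMe: {total_gbps:.0f} Gbit/s vs link: {link_gbps:.0f} Gbit/s")
print(f"storage can oversubscribe the link ~{total_gbps / link_gbps:.1f}x")
# -> aggregate NVMe: 252 Gbit/s vs link: 100 Gbit/s
# -> storage can oversubscribe the link ~2.5x
```

In other words, the storage side can feed the NIC flat out all day; if the card can't offload the transport work, the CPU becomes the choke point long before the drives do.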
Anyway, going back to used hardware: I see a gazillion posts where people jump around with tech, "bargain this", "got that", blah blah blah. Half the crap doesn't even function as advertised without customised patches on later kernels, it cooks the inside of your server case, forcing fans to run louder and draw more power, and frankly the support is seriously lacking because the demand isn't there. Just because something 'works' doesn't necessarily mean it works the way it was intended to, with all its bells and whistles, unless you start knocking on the door of RHEL installations stuck on an older but still supported release for that particular driver version.
To avoid going off on a tangent, bottom line: if used prices of crap tech stay high, the good tech stays higher, and that has always been the case across the board. A great number more 100G NICs are about to hit the market, the doors are opening, and 100G switches are already knocking around the 700-900 mark. The reality check is that MOST people are barely cracking 25G loads, let alone 40G; right now, 100G awards pub-bragging rights and not much else. Many of us have been on 100G for over a year now. For me, it was only recently that I even made good use of it, and that was for a business function; as far as personal use goes, hell no.