OK, I can now exclude a hardware issue: after receiving a replacement adapter, both units show the same problem as described above.
However, while investigating further I discovered a real show-stopper for this adapter, at least with the current Windows 10 driver.
When I received the replacement adapter I initially connected it peer-to-peer to an Asus XG-C100C network adapter in my desktop PC and left the driver parameters at their default values, i.e. no jumbo frames and no buffer tweaking. In receiving mode the adapter showed about 3.2 Gb/s, which didn't surprise me given my earlier tests and the fact that the settings were not optimized. What did surprise me, on the other hand, was that I got about 2.1 Gb/s when the adapter was in sending mode. This throughput was even stable and did not vary as much as in my earlier tests.
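For anyone who wants to reproduce these numbers, a peer-to-peer throughput test like this can be scripted, for example with iperf3 (not necessarily the tool behind every run above); the peer address is a placeholder and the XG-C100C side is assumed to be running `iperf3 -s`:

```python
import json
import subprocess

PEER = "192.168.0.2"  # placeholder address of the XG-C100C side running "iperf3 -s"

def measure(reverse: bool) -> float:
    """Run a 10 s iperf3 TCP test and return the throughput in Gb/s."""
    cmd = ["iperf3", "-c", PEER, "-t", "10", "-J"]
    if reverse:
        cmd.append("-R")  # -R: the server sends, so the adapter under test receives
    proc = subprocess.run(cmd, capture_output=True, text=True, check=True)
    stats = json.loads(proc.stdout)
    return stats["end"]["sum_received"]["bits_per_second"] / 1e9

print(f"sending:   {measure(reverse=False):.2f} Gb/s")
print(f"receiving: {measure(reverse=True):.2f} Gb/s")
```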
After enabling jumbo frames and optimizing the ring buffer settings I was able to bump this up to 3.45 Gb/s in receiving mode and about 3 Gb/s in sending mode. So what was wrong with my earlier tests?
The explanation turned up shortly after I enabled "Priority & VLAN" on both adapters. With VLAN enabled and a VLAN ID other than "0", the sending performance showed exactly the same picture as in my initial tests, i.e. very unstable, with low values in the Mb/s range, maximum values of about 1.9 Gb/s and an average of about 1.2 Gb/s. The receiving performance was not impacted.
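The VLAN settings, like the jumbo frame and ring buffer settings above, live in the driver's advanced properties and can also be set from a script instead of the adapter's property dialog. A minimal sketch using PowerShell's Set-NetAdapterAdvancedProperty; the adapter name and the display names/values are driver-specific assumptions, so list the real ones with Get-NetAdapterAdvancedProperty first:

```python
import subprocess

ADAPTER = "Ethernet 2"  # placeholder; find the real name with: Get-NetAdapter

def set_advanced_property(display_name: str, display_value: str) -> None:
    """Set one driver advanced property via PowerShell (needs an elevated shell)."""
    subprocess.run(
        [
            "powershell", "-NoProfile", "-Command",
            f'Set-NetAdapterAdvancedProperty -Name "{ADAPTER}" '
            f'-DisplayName "{display_name}" -DisplayValue "{display_value}"',
        ],
        check=True,
    )

# Display names/values below are assumptions for this driver; dump the
# actual ones with: Get-NetAdapterAdvancedProperty -Name "Ethernet 2"
set_advanced_property("Jumbo Packet", "9014 Bytes")                  # jumbo frames
set_advanced_property("Priority & VLAN", "Priority & VLAN Enabled")
set_advanced_property("VLAN ID", "100")                              # the problematic setting
```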
To make sure I hadn't screwed something up with my tweaked parameter settings I repeated the test with the plain vanilla driver settings, and it delivered a similar result: as long as no VLAN ID was set the performance was OK, and configuring VLAN with an ID other than "0" brought the sending performance down.
But that's not the end of the story. What's even worse is that the VLAN ID you set on the Sabrent adapter doesn't actually matter!
You can still ping between the two computers even when you set them to different VLAN IDs.
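This is easy to verify: with the two adapters configured for, say, VLAN IDs 100 and 200, a ping across the link should time out, yet on the Sabrent it still goes through. A minimal check (the peer address is again a placeholder):

```python
import subprocess

PEER = "192.168.0.2"  # placeholder; the peer is configured with a *different* VLAN ID

# With proper VLAN tagging the ping below should time out; on the Sabrent
# adapter it still succeeds, i.e. the VLAN ID is not actually enforced.
result = subprocess.run(["ping", "-n", "4", PEER])  # "-n 4" is the Windows count flag
print("still reachable" if result.returncode == 0 else "isolated, as it should be")
```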

So the Sabrent adapter doesn't honor the VLAN ID setting!
Well, that's not exactly true: it honors it, just with lower performance.
I did open a ticket with Sabrent, and it will be interesting to see how that progresses.
Again, I would be very interested to understand what tests were performed at STH on this adapter.
Since this looks to me very much like a driver issue, I would also expect the StarTech adapter, which also uses the Marvell driver, to show the same behaviour.