4x10GbE is not the same as 1x40GbE unless you run multiple processes (i.e. the total bandwidth is identical, but a single stream is capped at 10G)
(Sidenote: well, technically it is the same, since 40G is just 4x10G lanes aggregated at the hardware level, but in the end you can get higher single-process bandwidth on 40G than on 4x10G joined at the switch)...
What's the reason not to replace the NICs?
Of course if they are built in it's not an option, but if not... cheap Mellanox (MLX) 40G cards can be had all the time.
Otherwise: a cheapish 10GBase-T switch with a 40G uplink, or (depending on how many boxes we're talking about) a couple of 10G fibre uplinks...
Hm now you got me thinking...
I am not entirely sure any more if it was an -A model I had before my current one; maybe it was a B1 after all and I just remember it incorrectly. It's been a while.
So take my statement with a grain of salt.
My first thought was "nah, that can't be an issue", but when I quickly ran the numbers it looked different.
If you use at most 5% of the drive, you basically get 20 times the DWPD (assuming wear-leveling optimization takes place), so 26 DWPD in this example.
That's 26 * 960 GB = 24960 GB per day; divided by 50 GB per write, that's ~499 writes per day, while one write every 5 seconds gives 17280 per day...
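The arithmetic above can be sketched quickly (a rough sketch; the 1.3 DWPD base rating and the 50 GB-per-write burst size are my assumptions to make the 26 DWPD figure work out):

```python
# Rough endurance check for a hypothetical 960 GB drive rated 1.3 DWPD,
# with only 5% of capacity used so wear leveling spreads writes 20x wider.
capacity_gb = 960
rated_dwpd = 1.3                                  # assumed base rating
effective_dwpd = rated_dwpd * 20                  # 5% used -> ~20x effective DWPD = 26
daily_budget_gb = effective_dwpd * capacity_gb    # 26 * 960 = 24960 GB/day

write_size_gb = 50                                # assumed size of each write burst
allowed_writes_per_day = daily_budget_gb / write_size_gb   # endurance budget
actual_writes_per_day = 24 * 3600 / 5             # one burst every 5 seconds

print(round(allowed_writes_per_day))              # 499
print(int(actual_writes_per_day))                 # 17280
```

So at one 50 GB write every 5 seconds you'd burn through the endurance budget roughly 35x faster than rated, which is why it does look like an issue after all.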
With sync activated, your spinner pool will drop significantly in performance.
It's fine to test with the same parameters you used for your 800 MB/s value, just to see the difference.
For non-Optane drives it's not recommended to split a drive into L2ARC and ZIL.
Nah, not sure, but sometimes things (the backplane, for example) are also screwed on from the bottom. The screws are reachable from the inside, not from the actual bottom, just hidden behind cables sometimes.
Can't access mine atm, so no clue, and I can't remember from the last time I replaced a PDB.