Experienced Mellanox 56G user here:
As far as I know, there are no LC 56G Transceivers, only a handful of MPO-12 (8 fibers used) transceivers, all manufactured by Finisar.
I've spent quite some time searching, but in the end without success, and I'm now using 40G.
Wow, then I don't see much chance of running 56G over LC-LC :/
The good thing is we don't need it right now, but I really don't see any LC-LC transceiver labeled 56GbE or FDR 56 on the market either.
So the real alternative to 40G is 2x40G with LACP
(limited by PCIe 3.0 x8 speed).
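For reference, a rough back-of-the-envelope comparison of link speeds vs. PCIe 3.0 limits (theoretical maxima only, no Ethernet/TCP overhead; a quick Python sketch):

```python
# Rough theoretical maxima in GB/s (decimal), ignoring protocol overhead.
# PCIe 3.0: 8 GT/s per lane with 128b/130b encoding ~= 0.985 GB/s per lane.
PCIE3_PER_LANE = 8e9 * (128 / 130) / 8 / 1e9

links = {
    "40GbE":         40 / 8,              # 5.0 GB/s
    "56G FDR":       56 / 8,              # 7.0 GB/s
    "2x40GbE LACP":  2 * 40 / 8,          # 10.0 GB/s
    "100GbE":        100 / 8,             # 12.5 GB/s
    "PCIe 3.0 x8":   8 * PCIE3_PER_LANE,  # ~7.9 GB/s  -> caps a 2x40G bond
    "PCIe 3.0 x16":  16 * PCIE3_PER_LANE, # ~15.8 GB/s -> the "200G" ceiling
}

for name, gbs in links.items():
    print(f"{name:>14}: {gbs:5.2f} GB/s")
```

So a 2x40G bond in an x8 slot is already capped around 7.9 GB/s before any Ethernet/TCP overhead, and "200G" through a single x16 slot tops out around 15.75 GB/s.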
Re 100G and LC-LC:
Please note that most 100G LC transceivers require SMF (single-mode); MMF is NOT going to work.
This is interesting. What I used for testing was a mix of OM2 (orange), OM4 (cyan) and SM (yellow). It sounds super strange, but this is how I tested the cables.
"Test bench" was something like this:
-2 meters (~7ft) OM4 connected to the Mellanox ConnectX-4 100G in PC1
-20 meters (~65ft) OM2 via keystone "air" connector
-3 meters (~10ft) SM yellow via keystone "air" connector
-50 meters (~165ft) OM3 via keystone "air" connector
-2 meters (~7ft) OM4 connected to the Mellanox ConnectX-4 100G in PC2
(real installation is: 1-2 meters OM4 in the rack -> patch panel -> 15-50 meters OM3 or OM4 (and OM2 for backups) in the floor -> keystone -> 2-3 meters OM4 -> PC)
Bought 8 random second-hand 100G-CWDM4 transceivers, and it's true, not all of them worked fine and 2 did not work at all :/, but the test was beyond the specification limits anyway. If I remember correctly, the Cisco QSFP-40G-SR-BD (a new one this time) worked fine after eliminating the segment with the SM cable.
but also, back then the 2 computers used for testing were not fast enough to saturate 100G :/
so 40G does 4.42GB/s, in both directions, all the time,
but 100G does about ~6.5GB/s one way and about 5GB/s back (two 970 Evo NVMe drives, Win10 software RAID; both CPUs were Ryzen 3900X if I remember correctly)
In a few weeks we are planning to spend half a day and finally run tests of 100G and 2x 100G, and try to saturate "200G", meaning the PCIe 3.0 x16 limit.
Testbench #1 (probably):
i7-12700KF @ ~5.2 GHz, ConnectX-4 100G connected to the PCIe 5.0 slot + a 980 Pro (or PM9A1) connected to CPU PCIe 4.0 lanes + 2x 980 Pro connected to chipset PCIe 4.0.
Testbench #2 (if it's too slow, we will use another Intel 12th gen instead):
Threadripper 2920X or 1950X @ ~4.4 GHz, ConnectX-4 100G connected to a CPU PCIe 3.0 slot + 8x 970 Evo (or PM981a) connected to CPU PCIe lanes.
Also, our server is too slow to saturate "200G"/PCIe 3.0 x16, but next year is a good time to think about replacing it. Only the C++ build box (an HP DL580 G8) has fast enough SSDs (about 28GB/s peak tested), but I don't know if its CPUs are fast enough for ~15GB/s transfers.
In case you receive second-hand 100G-CWDM4 transceivers and they actually work, make sure to check the transceiver statistics (like Rx and Tx power) and verify they're within the limits. There are often broken channels (e.g. channels 1-3 have -3dBm TX power, but channel 4 has -13dBm TX power; likewise for receive)...
So what we should do is test all of them and pick out the good ones.
I remember I tested that the Cisco 3064PQ has commands that can display it; maybe the Celestica under SONiC, or Mellanox MFT, has something similar?
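On the Linux/NIC side, `ethtool -m <iface>` usually dumps the module's DOM page (per-channel Tx/Rx optical power in dBm) when the driver exposes it; I believe mlxlink from MFT and SONiC's `show interfaces transceiver eeprom` can show similar data, but I haven't checked the exact options on the DX010. A rough Python sketch for bulk-screening modules by parsing `ethtool -m` output (field names vary by driver and module, and the -10 dBm cut-off plus the interface name are just example values, not from any spec):

```python
import re
import subprocess

MIN_DBM = -10.0  # example screening threshold, tune to your transceivers' spec sheet

def channel_powers(iface: str):
    """Parse per-channel Tx/Rx optical power (dBm) from `ethtool -m` output."""
    out = subprocess.run(["ethtool", "-m", iface],
                         capture_output=True, text=True, check=True).stdout
    powers = []
    for line in out.splitlines():
        # Typical lines look roughly like:
        #   Transmit avg optical power (Channel 1) : 0.5231 mW / -2.81 dBm
        #   Rcvr signal avg optical power(Channel 1) : ...
        m = re.search(r"(?i)(transmit|rcvr).*power.*?\(channel\s*(\d+)\).*?(-?\d+\.\d+)\s*dBm", line)
        if m:
            direction = "Tx" if m.group(1).lower() == "transmit" else "Rx"
            powers.append((direction, int(m.group(2)), float(m.group(3))))
    return powers

if __name__ == "__main__":
    for direction, channel, dbm in channel_powers("enp1s0f0"):  # interface name is just an example
        flag = "  <-- suspicious, compare against the other channels" if dbm < MIN_DBM else ""
        print(f"{direction} ch{channel}: {dbm:6.2f} dBm{flag}")
```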
Re Celestica DX010:
IIRC, the Celestica DX010 is / may be affected by the Intel CPU bug. It can happen that once you restart the switch, it won't come back up, and you need to apply a physical fix (by soldering) to make it work again. That's the reason why they're so cheap.
Heard about it from Patrick's review.
Is the 2018 model still affected, and can I check it somehow?
They will be opened up for noise tuning anyway, because there is already too much noisy hardware in the rack.
In "free time" friend will make an PWM fan simulator on Arduino or RPi
and the 2x Celestica will be converted from 1U to 2U with a lot of Noctua PC fans to make them 99% silent (along with 2x HP DL580 G8 and 4x EMC M2400).
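For the fan-simulator part, the usual idea is to feed the switch a fake tachometer signal so it doesn't alarm or shut down when the slower Noctuas go in. A minimal RPi sketch of that in Python (GPIO pin, fake RPM and pulses-per-revolution are assumptions; the tach line normally wants an open-collector transistor or level shifter between the Pi and the switch, so check the fan header wiring first):

```python
import RPi.GPIO as GPIO

TACH_PIN = 18        # example GPIO; wire through an open-collector/level-shifter stage as needed
FAKE_RPM = 9000      # pretend the original loud fan is still installed
PULSES_PER_REV = 2   # typical for PC/server fans, check the original fan's datasheet

GPIO.setmode(GPIO.BCM)
GPIO.setup(TACH_PIN, GPIO.OUT)

# A tach signal is just a square wave: frequency (Hz) = RPM / 60 * pulses per revolution.
tach_hz = FAKE_RPM / 60 * PULSES_PER_REV
tach = GPIO.PWM(TACH_PIN, tach_hz)
tach.start(50)       # 50% duty cycle

try:
    input(f"Faking {FAKE_RPM} RPM on GPIO {TACH_PIN}, press Enter to stop... ")
finally:
    tach.stop()
    GPIO.cleanup()
```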
The two Cisco 3064PQ are ~3-4 years past a similar "mod" and still working fine, so some additional soldering inside the Celestica doesn't sound too crazy if it turns out to be necessary on the 2018 models.