Any QSFP 56GbE transceivers for LC-LC fiber cables?


MichalPL

Active Member
Feb 10, 2019
56GbE is working fine on a Mellanox copper cable - now it's time to bring it into reality.

To use it I need a switch (probably a Mellanox SX6036), but first I need FDR transceivers for LC-LC fiber cables and... I found zero (only MPO ones :/).

Do you know of any models that are more "cost effective" (used ones on eBay?) than 100GbE LC-LC?
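(For anyone wanting to reproduce the copper link: the 56GbE rate on a back-to-back ConnectX-3 link is typically forced from Linux with ethtool - a minimal sketch, assuming the mlx4_en driver; the mst device path and interface name are just placeholders:)

```
# Put the ports into Ethernet mode first (VPI cards default to InfiniBand);
# the device path is a placeholder, check `mst status` for yours.
mlxconfig -d /dev/mst/mt4099_pci_cr0 set LINK_TYPE_P1=2 LINK_TYPE_P2=2

# Force the proprietary 56GbE rate on both hosts (interface name is a placeholder).
ethtool -s enp1s0 speed 56000 autoneg off
ethtool enp1s0 | grep Speed
```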
 

chicken-of-the-cave

New Member
Mar 13, 2020
56GbE is working fine on a Mellanox copper cable - now it's time to bring it into reality.

To use it I need a switch (probably a Mellanox SX6036), but first I need FDR transceivers for LC-LC fiber cables and... I found zero (only MPO ones :/).

Do you know of any models that are more "cost effective" (used ones on eBay?) than 100GbE LC-LC?
I don't believe LC-LC fibre cables can do anything beyond 10G or 25G, as that would be a "maximum single fibre" connection based on what today's SFP/QSFP modules can do. Anything beyond 10G/25G is achieved through some sort of "bonding" of fibres working together (i.e. 40G = 4x10G lanes, 100G = 4x25G lanes, 50G = 2x25G lanes, etc.) - hence, each lane is effectively one fibre strand.
Unlike LC-LC or SC-SC cables, MPOs come with multiple fibre strands per cable - typically 12 strands per cable, if I recall correctly. You may also find 8-strand MPO cables (or fewer?).

Other than fibre cables, DACs (Direct Attach Copper) are the most cost-effective solution. They are much thicker than fibre cables, but very cost effective.
 

BeTeP

Well-Known Member
Mar 23, 2019
Mellanox did not make anything like that. So you won't be able to get 56GbE working over a single pair. But if 40GbE is acceptable you can get QSFP-40G-UNIV transceivers for under $50.
 

MichalPL

Active Member
Feb 10, 2019
Kaiam - I have a few (I don't remember the exact model number, 5100 or 4100) and can fully confirm they work fine - but only at 100G or 40G (funnily enough, they worked fine with any quality of fiber tested, up to 50 meters).
And the problem with 100G is the price of Mellanox ConnectX-4 100GbE cards. I only realized a week ago that the latest ConnectX-3 models can deliver 56GbE after flashing to FCBT firmware.
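(The crossflash itself is just burning the MCX354A-FCBT firmware with Mellanox MFT's flint - a minimal sketch, where the device path and firmware file name are placeholders for whatever matches your card:)

```
# Start the Mellanox tools and locate the card
mst start
mst status

# Burn the FCBT firmware image; -allow_psid_change is needed because the
# board's original PSID differs from the FCBT image (crossflash).
flint -d /dev/mst/mt4099_pci_cr0 -i fw-ConnectX3-FCBT.bin -allow_psid_change burn
```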

My logic:

A 32-port 100G/40G switch is almost free (Celestica DX010), ~$500.

A Mellanox SX6036 56G switch is ~$600.

40GbE is slightly too slow; the cheapest working LC-LC transceiver I was able to buy was the Cisco QSFP-40G-SR-BD 8354319005
for about $70.

A 40/56GbE card is about $48.

A 100GbE card is $250. Hmm... expensive :/

The only missing part is a transceiver that can work with LC-LC cables at 56G (4 "colors" of infrared light, 4x14G).

Logic part 2 (I need to connect 15-20 computers; a faster network is better):

* 40G is cheap and fast, good for NVMe PCIe 3.0.
* 56G is something I only just discovered exists (while reading a Mellanox PDF). I have ~15 cards that support it, but no switch (yet - I know which model to buy) and no transceivers. It's also almost perfect for the popular NVMe PCIe 4.0 disks, which achieve very similar speeds (~6800 MB/s read / 5300 MB/s write, while 56G is ~6100 MB/s).
* 100G is expensive and too fast for NVMe 4.0 = waste of money?

But maybe I should save time (and not spend my free time playing with new toys like 56G and the Mellanox SX6036 switch), just use the 40/56G cards at 40G (40G will replace the old Mellanox ConnectX-2 10G), and install 100G where it is necessary.
 

MichalPL

Active Member
Feb 10, 2019
Hmm...
Seeing $44 per 100G transceiver in your link, it's really cheap - maybe it's not worth trying 56G and I should just go with 100G.

And 100G is a standard supported by everyone.
 

MichalPL

Active Member
Feb 10, 2019
You may also find 8-strand MPO cables (or fewer?).
Unfortunately we already decided on LC-LC, and there is almost no chance to add new cables :/ I have about 100 LC-LC cables (for 10/40/100G) plus 15 Cat6A (for 10G, and 2.5G for "WiFi 4.8G") under the floor in the office.

When we were testing (we are not network experts - we are good at C++ and graphics ;) ), MPO was not working well for us (problems with longer cables), and back then we barely understood the difference between SM and MM, OM4, OM2, etc. ;) So when testing 100G over random LC-LC cables went well, we decided to buy more and install them ourselves over a weekend ;) (before the floor construction came) - I hope all of them will support 200G in the future.

We are still using MPO in the server rack for 1-2 meter distances - they work fine there ;) (we probably still need to understand the transceiver-vs-cable logic).
 

NablaSquaredG

Layer 1 Magician
Aug 17, 2020
Experienced Mellanox 56G user here:
As far as I know, there are no LC 56G Transceivers, only a handful of MPO-12 (8 fibers used) transceivers, all manufactured by Finisar.
I spent quite some time searching, but in the end without success, and I'm now using 40G.

Re 100G and LC-LC:
Please note that most 100G LC Transceivers require SMF (Single Mode), MMF is NOT going to work.

Re Kaiam Transceivers:
100G-CWDM4 Transceivers (like the Kaiam) are quite sensitive and when you buy them second-hand, they often arrive broken or half broken.

In case you receive second-hand 100G-CWDM4 transceivers and they actually work, make sure to check the transceiver statistics (like Rx and Tx power) and make sure they're within the limits. There are often broken channels (like channels 1-3 have -3dB TX power, but channel 4 has -13dB TX power, likewise for receive)...

Re Celestica DX010:
IIRC, Celestica DX010 are / may be affected by the Intel CPU bug. It may happen that once you restart the switch, it won't come back up and you need to apply a physical fix (by soldering) to make it work again. That's the reason why they're so cheap.
 

MichalPL

Active Member
Feb 10, 2019
Experienced Mellanox 56G user here:
As far as I know, there are no LC 56G Transceivers, only a handful of MPO-12 (8 fibers used) transceivers, all manufactured by Finisar.
I spent quite some time searching, but in the end without success, and I'm now using 40G.
Wow, then I don't see much chance that I can run 56G over LC-LC :/
The good thing is we don't need it right now, but I really don't see any LC-LC transceiver labeled 56GbE or FDR 56 on the market either.

So the real competition for 40G is 2x40G with LACP ;) (limited to PCIe 3.0 x8 speed).

Re 100G and LC-LC:
Please note that most 100G LC Transceivers require SMF (Single Mode), MMF is NOT going to work.
This is interesting. What I used for testing was a mix of OM2 (orange), OM4 (cyan) and SM (yellow) - sounds super strange, but this is how I tested the cables ;)

"Test bench" was something like this:
-2 meters (~7ft) om4 connected to Mellanox ConnectX-4 100G in PC1
-20 meters (~65ft) om2 via keystone "air" connector
-3 meters (~9ft) SM yellow via keystone "air" connector
-50 meters (~~150ft) om3 via keystone "air" connector
-2 meters (~~7ft) om4 connected to Mellanox ConnectX-4 100G in PC2
(real installation is: 1-2meters OM4 in rack ->patch panel -> 15-50 meters OM3 or OM4 (and OM2 for backups) in the floor -> keystone -> 2-3 meters OM4 -> PC)

I bought 8 random second-hand 100G-CWDM4 transceivers, and it's true that not all of them worked fine - 2 didn't work at all :/ - but then again the test was beyond the specification limits. If I remember correctly, the Cisco QSFP-40G-SR-BD (a new one this time) was working fine after eliminating the segment with the SM cable.

Back then the 2 computers used for testing were also not fast enough to saturate 100G :/
So 40G did 4.42 GB/s in both directions all the time,
but 100G did about ~6.5 GB/s one way and about 5 GB/s back (two 970 Evo NVMe drives, Win10 software RAID, both CPUs were Ryzen 3900X if I remember correctly).


In a few weeks we are planning to spend half a day and finally test 100G and 2x 100G, and try to saturate "200G", meaning PCIe 3.0 x16.

Test bench #1 (probably):
i7-12700KF @ ~5.2 GHz, ConnectX-4 100G in the PCIe 5.0 slot + a 980 Pro (or PM9A1) on the PCIe 4.0 CPU lanes + 2x 980 Pro on chipset PCIe 4.0.

Test bench #2 (if that's too slow, I'll use another Intel 12th gen):
Threadripper 2920X or 1950X @ ~4.4 GHz, ConnectX-4 100G in a PCIe 3.0 CPU slot + 8x 970 Evo (or PM981a) on PCIe CPU lanes.

Our server is also too slow to saturate "200G"/PCIe 3.0 x16, but next year is a good time to think about replacing it. Only the C++ build box - an HP DL580 G8 - has fast enough SSDs (about 28 GB/s peak tested), but I don't know if its CPU is fast enough for ~15 GB/s transfers.
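(For the pure network part of the saturation test, the plan is just a bunch of parallel TCP streams - an iperf3 sketch, where the server address and stream count are placeholders:)

```
# On the receiving machine
iperf3 -s

# On the sending machine: several parallel streams to get past
# single-core TCP limits on a 100G link
iperf3 -c 10.0.0.2 -P 8 -t 30
```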

In case you receive second-hand 100G-CWDM4 transceivers and they actually work, make sure to check the transceiver statistics (like Rx and Tx power) and make sure they're within the limits. There are often broken channels (like channels 1-3 have -3dB TX power, but channel 4 has -13dB TX power, likewise for receive)...
So what we should do is test all of them and select the good ones.
I remember I tested commands on the Cisco 3064PQ that can display this - maybe Celestica with SONiC, or Mellanox MFT, has something similar?

Re Celestica DX010:
IIRC, Celestica DX010 are / may be affected by the Intel CPU bug. It may happen that once you restart the switch, it won't come back up and you need to apply a physical fix (by soldering) to make it work again. That's the reason why they're so cheap.
I heard about it from Patrick's review.
Is the 2018 model still affected? Can I check it somehow?

They will be opened up for noise tuning anyway, because there is already too much noisy hardware in the rack.

In our "free time" a friend will make a PWM fan simulator on an Arduino or RPi ;)

And the 2x Celestica will be converted from 1U to 2U with a lot of PC Noctua fans to make them 99% silent (along with 2x HP DL580 G8 and 4x EMC M2400).
Two Cisco 3064PQs are ~3-4 years past a similar "mod" and still working fine, so additional soldering inside the Celestica doesn't sound too crazy if it's necessary on the 2018 models ;)
 

NablaSquaredG

Layer 1 Magician
Aug 17, 2020
I bought 8 random second-hand 100G-CWDM4 transceivers, and it's true that not all of them worked fine - 2 didn't work at all :/ - but then again the test was beyond the specification limits. If I remember correctly, the Cisco QSFP-40G-SR-BD (a new one this time) was working fine after eliminating the segment with the SM cable.
Just do a "loopback" test on your Mellanox cards.
Populate both ports with transceivers and then use `etehtool -m` to monitor tx / rx power
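Something like this (a minimal sketch - the interface names are placeholders, and it only works on optics that expose DOM data):

```
# Short LC-LC patch between the two ports of the same card, then read the
# digital diagnostics (per-channel TX/RX power) from each module.
ethtool -m enp65s0f0 | grep -i power
ethtool -m enp65s0f1 | grep -i power
```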

"Test bench" was something like this:
-2 meters (~7ft) om4 connected to Mellanox ConnectX-4 100G in PC1
-20 meters (~65ft) om2 via keystone "air" connector
-3 meters (~9ft) SM yellow via keystone "air" connector
-50 meters (~~150ft) om3 via keystone "air" connector
-2 meters (~~7ft) om4 connected to Mellanox ConnectX-4 100G in PC2
(real installation is: 1-2meters OM4 in rack ->patch panel -> 15-50 meters OM3 or OM4 (and OM2 for backups) in the floor -> keystone -> 2-3 meters OM4 -> PC)
I'm surprised that even worked at all.

Cisco QSFP-40G-SR-BD
Yeah, that one is an MMF transceiver.

I heard about it from Patrick's review.
Is the 2018 model still affected? Can I check it somehow?

They will be opened up for noise tuning anyway, because there is already too much noisy hardware in the rack.

In our "free time" a friend will make a PWM fan simulator on an Arduino or RPi ;)

And the 2x Celestica will be converted from 1U to 2U with a lot of PC Noctua fans to make them 99% silent (along with 2x HP DL580 G8 and 4x EMC M2400).
Two Cisco 3064PQs are ~3-4 years past a similar "mod" and still working fine, so additional soldering inside the Celestica doesn't sound too crazy if it's necessary on the 2018 models ;)
You can find some info here: https://forums.servethehome.com/index.php?threads/deprecated-fs-100g-networking-stuff.28388/page-3