Time to switch to 200GbE? (QSFP56 and/or OSFP vs QSFP-DD)


Michal_MTTJ

Member
Apr 12, 2024
I am evaluating options for upgrading a few small parts of our network environment from 100GbE to 200GbE, and I would like to understand the realistic speeds achievable with different combinations of connectors and cables (I have never seen an OSFP connector in person yet).

Is QSFP56 the right connector to standardize on, or is OSFP better?

Used 200GbE QSFP56 Mellanox cards now cost about the same as Mellanox 100GbE QSFP28 cards did four years ago (OSFP 200GbE and 400GbE cards are more expensive).

And now the switch - I found this one:

It has two 200GbE ports (enough for one NAS and one "C++ Builder" machine), but they are QSFP-DD 200GbE rather than QSFP56. Is there any chance of linking it at 200GbE to a Mellanox ConnectX-6 200GbE QSFP56 card?

My guess is that 200GbE could be tricky, because QSFP56 uses 4 lanes while QSFP-DD 200GbE uses 8 lanes - so 100GbE should be easy, but is 200GbE possible at all?
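Rough lane math for reference - a small sketch of the nominal lane counts and per-lane rates for the form factors discussed in this thread (nominal electrical rates only, ignoring FEC and encoding overhead):

```cpp
// Nominal lane counts and per-lane rates for the form factors in this thread.
#include <cstdio>

struct FormFactor { const char* name; int lanes; int gbps_per_lane; };

int main() {
    const FormFactor ff[] = {
        {"QSFP28  (100GbE)",      4,  25},   // 4 x 25G NRZ
        {"QSFP56  (200GbE)",      4,  50},   // 4 x 50G PAM4
        {"QSFP-DD (200GbE mode)", 8,  25},   // 8 x 25G NRZ
        {"QSFP-DD (400GbE)",      8,  50},   // 8 x 50G PAM4
        {"QSFP112 (400GbE)",      4, 100},   // 4 x 100G PAM4
        {"OSFP    (400GbE)",      8,  50},   // 8 x 50G PAM4
    };
    for (const FormFactor& f : ff)
        std::printf("%-22s %d x %3dG = %3dG\n",
                    f.name, f.lanes, f.gbps_per_lane, f.lanes * f.gbps_per_lane);
    // QSFP56 (4 x 50G) and 200GbE QSFP-DD (8 x 25G) reach the same total with
    // different lane counts and signalling, so a passive DAC between them
    // cannot work; only optics that speak the same standard could.
    return 0;
}
```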

I am guessing that, for example, a Mellanox SN3700-VS2FO with 32x 200GbE QSFP56 ports would work perfectly, but $9,000 for a used one is still expensive (especially when the maximum number of machines that will ever need 200GbE is maybe 7 ;), and right now only about 3 make sense).

Currently we use Mellanox ConnectX-4 100GbE cards and two Celestica DX010 switches, plus ConnectX-3 40GbE cards connected to the same Celesticas at 40GbE, and a few of them connected to a Mellanox SX6036G switch, where they run in 56GbE mode instead of 40GbE.


What do you think - any ideas?
Is QSFP56 the better standard?
Or is OSFP 200GbE better?
Or even OSFP 400GbE - which would make it easy to connect QSFP56 cards at 200GbE, easy to connect other cheap QSFP-DD switches at 200GbE too, and leave 400GbE for the future?

Cards: OSFP or QSFP56? Transceivers (LC-LC): QSFP56 or OSFP? Or wait three years for 400GbE? ;)

Best, Michal
 

necr

Active Member
Dec 27, 2017
From my experience with QSFP-DD: https://forums.servethehome.com/ind...-already-running-at-200gbe.30563/#post-320310

I'd recommend staying on QSFP56; it's genuinely backwards compatible and the DACs are cheap. The next step up is QSFP112, and I can't say that OSFP dominates the market - it's still a niche. Transceivers: for anything small, MTP/MPO; for a real installation, something LC-based with xWDM inside (single-mode).

Same problem as you: not too many links, the test servers are getting old (PCIe 3.0/4.0), and the switches are hella expensive - so I'll stay at QSFP28 in general, and for specific tests I just connect NICs back-to-back or connect two servers to one server with a dual-port card.
 
  • Like
Reactions: Michal_MTTJ

Michal_MTTJ

Member
Apr 12, 2024
Thanks! OK, I've just bought the first experimental (cheap, used parts) QSFP56 setup to test it!

Dual port: I have two ConnectX-5 PCIe 4.0 2x 100GbE cards, and my experience is that CPUs are not fast enough to drive LACP without RDMA.
Windows 10/11 SMB 3.0 + RDMA can hit 11 GB/s on almost any hardware (including $20 CPUs like the E5-16xx v2), but without RDMA even a 14900KF is realistically too slow in a single core to get more than 8-9 GB/s out of a ConnectX-5 (and you can't use RDMA and LACP at the same time - but maybe I am wrong here?).
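A rough back-of-the-envelope sketch of why the non-RDMA path runs out of single-core headroom - the per-frame time budget at line rate (illustrative arithmetic only; real stacks batch, use multiple queues and offload checksums/segmentation):

```cpp
// Per-frame CPU time budget if a single core processed every frame at line
// rate (illustrative arithmetic only; headers, batching and offloads ignored).
#include <cstdio>

int main() {
    const double line_rate_gbps[] = {100.0, 200.0};
    const double frame_bytes[]    = {1500.0, 9000.0};   // standard vs jumbo MTU

    for (double gbps : line_rate_gbps)
        for (double bytes : frame_bytes) {
            double frames_per_sec = (gbps * 1e9 / 8.0) / bytes;
            double ns_per_frame   = 1e9 / frames_per_sec;
            std::printf("%3.0f GbE, %4.0f B frames: %6.2f Mpps -> %6.1f ns per frame\n",
                        gbps, bytes, frames_per_sec / 1e6, ns_per_frame);
        }
    // 200 GbE with 1500 B frames is ~16.7 Mpps, i.e. ~60 ns (~300 cycles at
    // 5 GHz) per frame -- hence the single-core ceiling without RDMA.
    return 0;
}
```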

LACP is perfect for connecting switches together.

Shortly I will test Xeon W3495 and 8558P CPUs; they have plenty of PCIe 5.0 lanes, and I hope for single-core performance similar to a 14700KF after a small overclock ;) so maybe fast enough to saturate 2x 100GbE - I doubt it, but I'm almost sure they can saturate 1x 200GbE.

We have skipped MTP/MPO and RJ45 in the office and run only LC-LC (a good decision so far for 10GbE/40GbE/100GbE, with 56GbE only inside the rack).
Btw, at home it's also LC-LC (and minimal RJ45), 40GbE/56GbE cards and a 100GbE switch - and of course 95% of the time it's just WiFi ;)
 

Scott Laird

Active Member
Aug 30, 2014
FWIW, I don't think there's an easy way to connect QSFP56 to QSFP-DD at 200 Gbps today. One is 4 lanes of 50 Gbps, the other is 8 lanes of 25 Gbps. That pretty much rules out DAC cables, so you'd need optics on each end that implement the same standard. fs.com only has 3 QSFP56 optics (200GBASE-SR4, -LR4, and -FR4) and 2 QSFP-DD optics (both 200GBASE-2SR4, which is 2x 100GBASE-SR4, not 200GBASE-SR4).

It looks like 200G LR4 QSFP-DD modules probably exist, but they're over $1k each and rare.
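To spell the argument out: both ends need optics implementing the same 200G PMD, and the module lists above (fs.com stock at the time) simply don't intersect. A small sketch, using only the modules named in this thread:

```cpp
// Two optical modules can form a link only if they implement the same PMD.
// The lists below are just the modules mentioned in this thread, not a survey.
#include <algorithm>
#include <cstdio>
#include <iterator>
#include <set>
#include <string>

int main() {
    const std::set<std::string> qsfp56_optics  = {"200GBASE-SR4", "200GBASE-LR4", "200GBASE-FR4"};
    const std::set<std::string> qsfp_dd_optics = {"200GBASE-2SR4"};  // 2 x 100GBASE-SR4, a different PMD

    std::set<std::string> common;
    std::set_intersection(qsfp56_optics.begin(), qsfp56_optics.end(),
                          qsfp_dd_optics.begin(), qsfp_dd_optics.end(),
                          std::inserter(common, common.begin()));

    if (common.empty())
        std::printf("No shared 200G PMD -> no direct QSFP56 <-> QSFP-DD link with these optics.\n");
    for (const auto& pmd : common)
        std::printf("Shared PMD: %s\n", pmd.c_str());
    return 0;
}
```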
 
  • Like
Reactions: Michal_MTTJ

Michal_MTTJ

Member
Apr 12, 2024
All the initial parts have arrived, and the first (cheap) experimental 200GbE setup based on QSFP56 is working and looks promising for the future!

We had some fun this morning "repairing" a Mellanox SN3700 (from eBay) - after a mechanical fix, an electronic fix and one electrical bypass it's finally working ;) We also flashed a Dell Mellanox ConnectX-6 100GbE card to the OEM 200GbE firmware. The LC-LC QSFP56 optics are still on the way, but the 3-meter 200GbE copper cables have arrived.

[Image: mellanox3700.jpg]

[Image: pasted screenshot, May 7, 2024]
 

jpmomo

Active Member
Aug 12, 2018
QSFP-DD is 400G, specifically 8x 50G lanes. It is backwards compatible with QSFP56 (4x 50G lanes), and there are fanout DAC cables from one QSFP-DD to 2x QSFP56.
For the Mellanox/NVIDIA CX7 NICs, it is better to go with the 2x 200G version, since it takes QSFP56 DACs and will also connect to a QSFP-DD switch, which is pretty popular (relative to the OSFP 400/800G switches).
The other type of CX7 NIC is their OSFP version. Those NICs take 4x 100G lanes, and the port is physically larger than the QSFP spec. They also require "flat" (flat-top) modules, which are not that common; OSFP sockets usually take "fin top" modules.
I would agree with most everything else in this thread: 100G should be enough for most servers unless you get into the latest and greatest. PCIe Gen 5 is a must for 400G, and the latest generation of AMD or Intel server CPUs supports that.
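On the "PCIe Gen 5 is a must for 400G" point, the slot arithmetic roughly works out as below (effective x16 bandwidth after line encoding, before protocol overhead - a sketch, not exact figures):

```cpp
// Effective x16 slot bandwidth per PCIe generation vs. the payload rate of a NIC.
#include <cstdio>

int main() {
    struct Slot { const char* name; double gb_per_s; };          // GB/s after 128b/130b encoding
    const Slot slots[] = { {"PCIe 3.0 x16", 15.75},
                           {"PCIe 4.0 x16", 31.5},
                           {"PCIe 5.0 x16", 63.0} };
    struct Nic { const char* name; double gb_per_s; };           // line rate / 8
    const Nic nics[] = { {"100GbE", 12.5}, {"200GbE", 25.0}, {"400GbE", 50.0} };

    for (const Slot& s : slots) {
        std::printf("%s (~%4.1f GB/s):", s.name, s.gb_per_s);
        for (const Nic& n : nics)
            std::printf("  %s %s", n.name, s.gb_per_s >= n.gb_per_s ? "ok" : "short");
        std::printf("\n");
    }
    // Only PCIe 5.0 x16 (~63 GB/s) has headroom for 400GbE (~50 GB/s);
    // PCIe 4.0 x16 (~31.5 GB/s) covers one 200GbE port but not 2x 200GbE.
    return 0;
}
```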
 
  • Like
Reactions: Michal_MTTJ

jpmomo

Active Member
Aug 12, 2018
Let me know if you want to kick it up to 800G! I know someone who has a few NVIDIA SN5600 switches that would pair nicely with the CX7 NICs, using a fanout.
 

Michal_MTTJ

Member
Apr 12, 2024
QSFP-DD looks interesting, but Mellanox/NVIDIA doesn't seem to offer it on their cards (or maybe I am wrong), and 400GbE is still overkill for now.

What might be interesting is aggregating two 200GbE ports into one 400GbE link - is that possible at the hardware level on the newer cards?

We failed at this before with the CX4 (2x 100GbE, PCIe 3.0) and CX5 (2x 100GbE, PCIe 4.0): we could aggregate them via LACP, but not together with RDMA, so it was too slow to saturate 200GbE.

Can 400GbE somehow be achieved on CX6 or CX7 cards? (The motherboard has 7x PCIe 5.0 x16 slots.)
 

Michal_MTTJ

Member
Apr 12, 2024
Let me know if you want to kick it up to 800G! I know someone who has a few NVIDIA SN5600 switches that would pair nicely with the CX7 NICs, using a fanout.
800G - no, no thank you ;) It would be super expensive (much more than a really good car) and I can't imagine how to use it properly right now.

I'll also see shortly whether Core Ultra 200 / Ryzen 9000 motherboards will have a good PCIe x8 slot (alongside x16 for graphics at the same time), to be able to use what the servers can deliver.
 

Michal_MTTJ

Member
Apr 12, 2024
Actually I bought an SN3700 200GbE from them (the same company) and I'm super happy :) Really good experience - yes, it arrived from Israel in 2-3 days.

Now the price is $8,000:

I don't know what's missing on that one, and whether you can just connect Noctua fans and write a 50-line Arduino app to simulate the PSU and system fans (if that's even needed for the SN5600) - if you can get it running in 2-3 days, it's worth it ;) (I still don't know how to use 64x 800GbE in the office... today even modern workstations with fast single-thread performance are limited to about 80 Gbps - not enough PCIe lanes).
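On the "50-line Arduino app" idea, a hypothetical minimal sketch of what such a fan spoof could look like - just generating fake tach (RPM) pulses on a few pins. Pin numbers, fan count and target RPM are made-up assumptions, and a real SN-series chassis may also check PSU status lines or PWM response, so treat this as an illustration only:

```cpp
// Hypothetical Arduino sketch: generate fake fan tach (RPM) square waves so a
// switch's management controller sees "spinning" fans. Pins, fan count and
// RPM are assumptions for illustration only.
const int TACH_PINS[] = {3, 5, 6, 9};         // one output per emulated fan
const long TARGET_RPM = 8000;                 // pretend each fan spins at 8000 RPM
const long PULSE_HZ   = TARGET_RPM * 2 / 60;  // standard fans: 2 tach pulses per revolution

void setup() {
  for (int pin : TACH_PINS) pinMode(pin, OUTPUT);
}

void loop() {
  // Toggle all tach lines together; half-period in microseconds.
  const unsigned long half_period_us = 1000000UL / (2 * PULSE_HZ);
  for (int pin : TACH_PINS) digitalWrite(pin, HIGH);
  delayMicroseconds(half_period_us);
  for (int pin : TACH_PINS) digitalWrite(pin, LOW);
  delayMicroseconds(half_period_us);
}
```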

SN5600 - I don't know what the retail price is now? Around $80k?
 

jpmomo

Active Member
Aug 12, 2018
I told you that you could get the SN5600 relatively cheap :)

There is a CX7 card I found that is QSFP-DD. It is a unicorn and won't fit in most servers: it is a TSFF OCP3 NIC, where the "T" stands for tall, and most OCP3 slots only accept the SFF. Interestingly, that NIC is a single QSFP-DD 400G port that fans out to 2x 200GbE; I used a QSFP-DD to 2x QSFP56 DAC.

You can take a 2x 200G CX7 and convert it via firmware into a single 400G port. Those are QSFP112 ports and would require either a QSFP112 DAC or transceivers.

There is also a lot of work already going into PCIe 7.0 and 1.6 Tbps switches (and other gadgets!). NVIDIA will have their CX8 soon, and it should support 800GbE. They already have 2x 400G with BlueField-3, but those are still PCIe 5.0.

Hang on tight... everything is moving at hyper speed these days!
 

Michal_MTTJ

Member
Apr 12, 2024
I told you that you could get the SN5600 relatively cheap :)
Definitely yes - $15k for working units :)

Do you know any QSFP112 400GbE models worth buying?

CX7 - I remember only 3 models support 400G, all of them single-port and OSFP only - I need to double-check.
 

jpmomo

Active Member
Aug 12, 2018
NVIDIA also makes a single-port 400G QSFP112 model.
You can also purchase the cheaper dual-port 100G card and use the fw for the 400G.