Go on, admit it, who here is already running at 200GbE??


Dreece

Active Member
Jan 22, 2019
503
160
43
Just wondering who is ahead of the game, doesn't imply you have a switch for it, even just PCIE-4 nic to nic.
 

marcoi

Well-Known Member
Apr 6, 2013
1,532
288
83
Gotha Florida
I can't imagine someone running 200GbE at home. I haven't bothered going past some 10GbE connections at home. The higher the NIC speed, the higher the wattage: most 100GbE cards seem to draw 20 watts or more by themselves.
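For anyone weighing this up, here's a quick back-of-the-envelope sketch of what that NIC power draw costs over a year. The wattages are the ballpark figures from this thread, and the $0.15/kWh electricity rate is just an assumption; plug in your own.

```python
# Rough annual running cost of a NIC left on 24/7.
# Wattages are ballpark figures from the thread; the $0.15/kWh rate
# is an assumed example, not anyone's actual tariff.

def annual_cost_usd(watts: float, rate_per_kwh: float = 0.15) -> float:
    """Cost of running a device 24/7 for one year at a flat rate."""
    kwh_per_year = watts * 24 * 365 / 1000
    return kwh_per_year * rate_per_kwh

if __name__ == "__main__":
    for name, watts in [("10GbE NIC", 5), ("100GbE NIC", 20), ("200GbE NIC", 25)]:
        print(f"{name} at {watts} W: ~${annual_cost_usd(watts):.2f}/year")
```

A 20 W card works out to roughly $26/year at that rate, so the NIC itself is cheap; it's the switch that hurts.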
 
  • Like
Reactions: Marsh and Dreece

Falloutboy

Member
Oct 23, 2011
221
23
18
Just wondering who is ahead of the game, doesn't imply you have a switch for it, even just PCIE-4 nic to nic.
I'm going to be running 56Gb, which should be more than enough for my requirements. I'm just waiting on my new 42RU rack, which arrives tomorrow; the transport outfit annihilated the first one.
 
  • Like
Reactions: Dreece

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,625
2,043
113
I have 40/56 sitting here ~10 months, still debating if I should sell it or deploy it in the future...
 
  • Like
Reactions: Dreece

Dreece

Active Member
Jan 22, 2019
503
160
43
lol, stuff sitting around for ~10 months sounds about right for a fair few of us I'm guessing.
 
  • Like
Reactions: T_Minus

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
Not yet. We are still on 32x 100GbE switches in the lab.

I was talking to Intel about their Tofino2 and future plans. As we move into the 25.6T switch generation, power becomes an issue. Some have told me to expect 1.6-2kW for these switches. While some say 25.6T switches do not need co-packaged optics, by the time we get to the 51.2T generation most folks expect we will be using co-packaged optics because of the power limitations.

That is a big reason I did https://www.servethehome.com/hands-...packaged-optics-and-silicon-photonics-switch/ earlier this year.
 

nasbdh9

Active Member
Aug 4, 2019
164
96
28
Since QSFP-DD is already in use, 200G is not a good option; let's go straight to 400G ;)
 

necr

Active Member
Dec 27, 2017
151
47
28
124
Had some issues getting 200G NIC-to-NIC (ConnectX-6).
I bought a QSFP-DD cable (fs.com) as it was in the same price range as the QSFP56 (QSFP50): 400G, you know, cool stuff, should be backwards compatible with QSFP...

Nope, it didn't fit into a CX2/3/6 cage. Even after clearing the obstruction with pliers, I couldn't find a soft spot for the first row of contacts and I2C didn't come up... so, my advice: stay with QSFP56/50 for now, don't go the DD route yet.
[Attached image: IMG_5281.JPG]

Will probably write a post when I get a replacement Amphenol QSFP56 (SFF-8665) cable.
 
  • Like
Reactions: T_Minus

747builder

Active Member
Dec 17, 2017
112
58
28
All my switch interconnects are 40G, the FreeNAS server to switch is 40G, servers are 10G, and the backbone is on 10G.
 

necr

Active Member
Dec 27, 2017
151
47
28
124
DD plugs are longer than QSFP28
They should be; they're holding 8 lanes. I just can't understand how the QSFP-DD folks (Molex guys?) can say that they are SFF-compatible,

The QSFP-DD/QSFP-DD800 cage and connector designs with 8 lanes are compatible with the 4 lanes QSFP28/QSFP112. The QSFP-DD800 cage and connector is an incremental design with enhanced signal integrity and thermal which is backwards compatible to 8 lanes QSFP-DD and 4 lanes QSFP28. The QSFP112 cage and connector is an incremental design with enhanced signal integrity and thermal which is backwards compatible to 4 lanes QSFP28/QSFP+
p.10

but then they develop their own cage: http://www.qsfp-dd.com/wp-content/uploads/2021/05/QSFP-DD-Hardware-Rev6.01.pdf p.84, 103 :rolleyes:
 
  • Like
Reactions: klui

klui

Well-Known Member
Feb 3, 2019
824
453
63
Maybe they're talking about type 2 modules on page 80. I never thought about looking at that portion of the connector. I'll have to try connecting the dense end in a QSFP28 and see if the switch recognizes the cable as 100G.
 

NateS

Active Member
Apr 19, 2021
159
91
28
Sacramento, CA, US
I think in this case the new ports are backward compatible with old cables, but new cables are not backward compatible with old ports.

From page 15:
The QSFP-DD/QSFP-DD800 module edge connector consists of a single paddle card with 38 pads on the top
and 38 pads on the bottom of the paddle card for a total of 76 pads. The pads are defined in such a manner to
accommodate insertion of a classic QSFP+/QSFP28/QSFP112 module into a QSFP-DD/QSFP-DD800
receptacles. The classic QSFP+/QSFP28/QSFP112 signal locations are deeper on the paddlecard, so that
classic QSFP+/QSFP28/QSFP112 module pads only connect to the longer row of connector pads, leaving the
short row of connector pads unconnected in a QSFP+/QSFP28/QSFP112 applications.
This means that while looking at the QSFP-DD connector, lanes 1-4 are the pads toward the back, and 5-8 are the pads towards the front. If a four-lane QSFP+/28/112 cable is inserted in a QSFP-DD slot, the front of the connector is missing, but the back is still in the right place to connect to lanes 1-4. On the other hand, a QSFP-DD cable inserted in a QSFP+/28/112 slot will bottom out and connect the front pads, which should be lanes 5-8, to the lane 1-4 pads in the connector. So, while a QSFP-DD/-DD800 port will accept a QSFP+/28/112 cable, properly connecting to the correct half of the lanes, a QSFP+/28/112 port will not be able to properly mate with a QSFP-DD cable.
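The asymmetry described above can be sketched as a toy model. This is purely illustrative (the function name and lane counts are mine, not from the spec); it just encodes the rule that the shorter classic paddle card reaches the deep lane 1-4 pads, while the longer QSFP-DD paddle card in a classic cage bottoms out with the wrong pad row on the contacts.

```python
# Toy model of the QSFP-DD mating rules: a 4-lane (QSFP+/28/112) module's
# pads sit deeper on the paddle card, so in an 8-lane QSFP-DD receptacle
# they land on the long rear pad row (lanes 1-4) and mate correctly.
# An 8-lane QSFP-DD cable in a 4-lane port bottoms out with its front
# pads (lanes 5-8) on the port's lane 1-4 contacts, so nothing comes up.

def mates_correctly(cable_lanes: int, port_lanes: int) -> bool:
    """True if the cable's lane 1-4 pads land on the port's lane 1-4 contacts."""
    if cable_lanes <= port_lanes:
        # Classic paddle card reaches the rear (long) pad row: lanes align.
        return True
    # QSFP-DD paddle card in a classic cage: front pads (lanes 5-8) hit
    # the lane 1-4 contacts -- wrong lanes, link and I2C won't come up.
    return False

if __name__ == "__main__":
    print(mates_correctly(4, 8))  # QSFP28 cable in a QSFP-DD port
    print(mates_correctly(8, 4))  # QSFP-DD cable in a QSFP28 port
```

Which matches necr's experience earlier in the thread: the DD cable wouldn't even bring up I2C in a classic cage.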
 
  • Like
Reactions: necr and klui

klui

Well-Known Member
Feb 3, 2019
824
453
63
This makes the most sense.

EDIT: just did this on a 400G switch. Inserted QSFP28 into a DD port and it works.
 
Last edited:

necr

Active Member
Dec 27, 2017
151
47
28
124
[Attached image: 200G.png]

200G looks really sweet, can't wait to find a pair of used CX7s with OSFP.

I got the Amphenol after the QSFP-DD, and man, it was another lie. https://cdn.amphenol-icc.com/media/.../datasheet/cableassemblies/hsio_ca_qsfp28.pdf
While reporting rates up to 112Gbit/s per channel and marketing the cables as 100G/200G capable, the actual cable was QSFP28 only: it linked at 100G without any issues, but still short of the 200G target. Why mix marketing info into the datasheet? Glad it was an el cheapo eBay cable.

Then I took a look at the Integrators' List Archives (InfiniBand Trade Association) for the cable list, as I didn't want to pay > $300 for a copper cable. Found a TE Connectivity 2m QSFP56 cable (P/N 4-2333842-4); the third time, it worked like a charm (see the screenshot above).

Now would be cool to know who's running 400G or 800G at home. :)
 
  • Like
Reactions: T_Minus and nasbdh9

jpmomo

Active Member
Aug 12, 2018
531
192
43
See the other CX-7 threads. Next stop: Cisco 32x800GE in 1U! (Just need to find the damn breakout cables! At least until they come out with the CX-8!)
 
  • Like
Reactions: necr

jpmomo

Active Member
Aug 12, 2018
531
192
43
There are a few variants of the CX-7 NICs: some use an OSFP connector (physically larger than their QSFPxxx counterpart) and some are dual-port QSFP112.
The dual-port QSFP112 CX-7 NICs work with a cheap QSFP56 DAC if you are just connecting the two ports b2b (back to back). Most folks need to connect these NICs to a switch, though, and that's where things get a bit trickier.
I am trying to get more clarification from Patrick and his team on how they were eventually able to connect one of the CX-7 NICs (OSFP, single-port 400G) to a QSFP-DD switch.
 

Jason Antes

Active Member
Feb 28, 2020
224
76
28
Twin Cities
I just wound up with about 40 25Gb SFP28s, so I am perusing for a 25Gb switch to go with them. 200 would be fun but severely out of my budget! :)