NVIDIA Mellanox NDR 400Gbps Infiniband Announced


i386
Well-Known Member
Mar 18, 2016
Germany
On the cable side, NVIDIA told us they are able to use copper DACs (likely thick ones) and reach 1.5m.
Do you know if Mellanox will use QSFP-DD for NDR InfiniBand? If so, there are already QSFP-DD DAC cables up to 3m (e.g. from Cisco): Cisco Transceiver Modules - Cisco 400G QSFP-DD Cable and Transceiver Modules Data Sheet

Just for fun: I put a 3m FDR DAC cable on the kitchen scale; it weighs about 260 grams. The Cisco 3m QSFP-DD cable is listed at 600 grams :D
 

funkywizard
mmm.... bandwidth.
Jan 15, 2017
USA
ioflood.com
FS.com has up to 3m 400G DACs as well -- 400G QSFP-DD DAC Twinax Cable

$120 for 1m, $200 for 3m.

Things get expensive when a DAC is too short; the cheapest 400G SR8 optic FS.com has is $1,000.

With up to 3m DACs, you should be able to wire up maybe nine racks to one another using DACs alone (racks 1 - 4 and 6 - 9 connecting to an aggregation switch in rack 5).

Beyond that distance, the money for transceivers adds up fast.

That said, I highly doubt high-port-count 400G switches are cheap either.
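Just to put numbers on that tradeoff: a quick back-of-envelope sketch using the FS.com prices quoted above ($200 for a 3m DAC, $1,000 per SR8 optic). The fiber patch cable price here is a placeholder guess, not a quoted figure, and an optical link needs a transceiver at each end:

```python
# Rough per-link cost for 400G, based on the FS.com prices in this post.
DAC_3M = 200        # 400G QSFP-DD DAC, 3m (quoted above)
SR8_OPTIC = 1000    # cheapest 400G SR8 transceiver (quoted above)
FIBER_PATCH = 50    # hypothetical MPO patch cable price (a guess)

def link_cost(distance_m: float) -> int:
    """Cost of one 400G link: a DAC if it fits in 3m, otherwise optics at both ends."""
    if distance_m <= 3:
        return DAC_3M
    return 2 * SR8_OPTIC + FIBER_PATCH

print(link_cost(3))   # in-rack-row link, DAC
print(link_cost(10))  # longer run, needs two optics plus fiber
```

So the moment a link outgrows the DAC, it costs roughly ten times as much, which is why squeezing everything into DAC reach matters.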
 

funkywizard
I suppose you could also do it like:

agg1 (rack 5) -> agg2 (rack 9) -> agg3 (rack 13)

with, say, racks 1 - 5 connecting to agg1, racks 6 - 12 connecting to agg2, and racks 13 - 17 connecting to agg3.

That would probably still let you use DACs exclusively, with a minimal number of hops, for up to 17 racks.
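The layout above can be sketched as a tiny reachability check. The key assumption (mine, not from the thread) is that a 3m DAC spans about four rack positions once you account for ~0.6m rack widths and some slack lost to vertical routing:

```python
# Sketch of the chained-aggregation layout: which agg switch serves each rack,
# assuming a 3m DAC covers up to 4 rack positions (an assumption, not measured).
REACH = 4

agg_positions = {"agg1": 5, "agg2": 9, "agg3": 13}

def serving_switch(rack: int):
    """Return the nearest aggregation switch within DAC reach, or None."""
    in_reach = {name: abs(pos - rack)
                for name, pos in agg_positions.items()
                if abs(pos - rack) <= REACH}
    return min(in_reach, key=in_reach.get) if in_reach else None

for rack in range(1, 18):
    print(f"rack {rack:2d} -> {serving_switch(rack)}")
```

Under that reach assumption, every rack from 1 through 17 lands within DAC range of one of the three aggregation switches, matching the 17-rack figure above.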