WTB: 100Gb InfiniBand cards and cable


Myth

Member
Feb 27, 2018
148
7
18
Los Angeles
Hi Guys,

Anyone have any links or know where I can buy two 100Gig Infiniband cards plus a cable to direct connect two servers together?

I believe that all 100Gig cards are x16 PCIe, correct?

Best,
Myth
 

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
If you can afford new, Mellanox will be happy to sell you some ;)

And yes
 

i386

Well-Known Member
Mar 18, 2016
4,220
1,540
113
34
Germany
I believe that all 100Gig cards are x16 PCIe, correct?
No, there are PCIe 4.0 versions (IBM uses them on POWER8/9 systems), and there are PCIe 3.0 cards that use two x8 PCIe slots.
Anyone have any links or know where I can buy two 100Gig Infiniband cards plus a cable to direct connect two servers together?
Search for "mellanox (ccat,ecat,edat,hcat,hdat)" on ebay, these are the 100 gbit/s (xcat) and 200 gbit/s cards (xdat) from mellanox.
 

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
I thought that splitting function was a CX5 thing. Never found a working pair on eBay though, always only one half (for a price I could get an x16 card for).

But I got a CX5 EN once... seller mistake, I think ;)
 

Myth

Member
Feb 27, 2018
148
7
18
Los Angeles
Yes, I've been searching on eBay for 100Gb InfiniBand, but haven't found anything cost effective. I remember the last time I posted about a 40GigE card, someone gave me a link for a dual-port card for like $35. Just hoping something like that exists for 100Gb IB as well.
 

Myth

Member
Feb 27, 2018
148
7
18
Los Angeles
Do you guys think I could bond/team four 40GbE ports to get the same performance as a 100Gb IB link?

We did a bit of testing in the lab, but when we teamed two ports together the speeds dropped...
 

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
Depends on your workload type... parallel streams might work with some optimization; a single stream - nope.
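The rough reason: with bonding/teaming the transmit hash pins every flow to a single member port, so one TCP stream never sees more than one 40GbE link. A simplified Python illustration (this mimics the idea behind Linux bonding's layer3+4 xmit_hash_policy, not the actual kernel code, and the addresses/ports are made up):

```python
# Simplified illustration of why bonding/teaming doesn't speed up a single
# stream: the transmit hash (a rough stand-in for the "layer3+4" policy,
# not the real kernel implementation) maps each flow to exactly one member
# port, so one TCP stream never exceeds one 40GbE link.

import ipaddress

def flow_to_port(src_ip: str, dst_ip: str, sport: int, dport: int, n_ports: int) -> int:
    """Pick a bond member for a flow by hashing its address/port fields."""
    key = (int(ipaddress.ip_address(src_ip)) ^ int(ipaddress.ip_address(dst_ip))
           ^ sport ^ dport)
    return key % n_ports

N_PORTS = 4  # four 40GbE members

# One big stream: always the same member -> capped at ~40 Gbit/s.
print("single stream ->", flow_to_port("10.0.0.1", "10.0.0.2", 40000, 5201, N_PORTS))

# Many parallel streams (different source ports): spread across members.
used = {flow_to_port("10.0.0.1", "10.0.0.2", 40000 + i, 5201, N_PORTS) for i in range(16)}
print("16 parallel streams use members:", sorted(used))
```

Parallel streams with distinct ports spread across the members, which is why multi-stream tests can approach the aggregate rate while a single stream cannot.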
 

i386

Well-Known Member
Mar 18, 2016
4,220
1,540
113
34
Germany
Do you guys think I could bond/team four 40GbE ports to get the same performance as a 100Gb IB link?
Ethernet or IB? FDR ~ 56 Gbit/s -> 2 FDR ports ~ 112 Gbit/s (bottlenecked to roughly 80 Gbit/s when using a dual-port x8 card).
 

_alex

Active Member
Jan 28, 2016
866
97
28
Bavaria / Germany
PCIe bandwidth will limit you here. If you bond two FDR ports from different cards, maybe, but no way on a single x8 card.
I tried VXLAN on a leaf/spine setup with a single card and ended up somewhere around 64 Gbit/s. Dual cards were faster (one port from each card) but also didn't reach 100 Gbit/s. Which reminds me, I should have another look at this 'problem'.
 

_alex

Active Member
Jan 28, 2016
866
97
28
Bavaria / Germany
iperf with a sufficient number of parallel streams.
The setup was Quagga + pimd on three PVE nodes acting as leaves within the hypervisor, and 2x SX6012 with OSPF enabled on the ports as spines, all /32 nets. Used global pause instead of PFC, as I had no intention of spending days getting this sorted.
Then I put VXLAN over the fabric/loopback addresses and measured from endpoints on the hosts and also within/between VMs.

At a certain point the virtio process that serves KVM saturates; I guess multiqueue could help (if a single VM really needed that amount of bandwidth).

In the end I'm not so unhappy about what I got from this PoC setup; tbh, I expected it to be much worse.
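For anyone wanting to reproduce the parallel-streams part without digging out iperf: a minimal Python sketch of the same structure (loopback address, stream count and duration are placeholders; for real numbers over the fabric you'd still want iperf3, since Python's GIL will cap this well below line rate):

```python
# Minimal stand-in for "iperf with a sufficient number of parallel streams":
# one sender thread per TCP stream blasting data at a local sink, then the
# aggregate rate is reported. Not iperf itself; host/port, stream count and
# duration are placeholders.

import socket
import threading
import time

HOST, PORT = "127.0.0.1", 5201     # hypothetical sink address
STREAMS = 8                        # "sufficient number of parallel streams"
SECONDS = 3
CHUNK = bytes(64 * 1024)

totals = [0] * STREAMS

def sink(server_sock: socket.socket) -> None:
    """Accept connections and hand each one to a drain thread."""
    while True:
        conn, _ = server_sock.accept()
        threading.Thread(target=drain, args=(conn,), daemon=True).start()

def drain(conn: socket.socket) -> None:
    """Discard whatever arrives until the peer closes."""
    with conn:
        while conn.recv(1 << 20):
            pass

def blast(idx: int) -> None:
    """Send as much as possible for SECONDS and record the byte count."""
    with socket.create_connection((HOST, PORT)) as s:
        deadline = time.time() + SECONDS
        sent = 0
        while time.time() < deadline:
            sent += s.send(CHUNK)
        totals[idx] = sent

if __name__ == "__main__":
    srv = socket.create_server((HOST, PORT))
    threading.Thread(target=sink, args=(srv,), daemon=True).start()

    threads = [threading.Thread(target=blast, args=(i,)) for i in range(STREAMS)]
    start = time.time()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.time() - start

    gbits = sum(totals) * 8 / elapsed / 1e9
    print(f"{STREAMS} streams, aggregate ~{gbits:.1f} Gbit/s")
```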