infiniband 10G ethernet confusion (it just works???)

DBayPlaya2k3

Member
Nov 9, 2012
72
4
8
Houston
Ok, so a question for the IB users on the forum. I recently installed 3 Mellanox ConnectX-2 EN adapters: 2 in VMware ESXi 5.5 hosts and 1 in a Server 2012 R2 storage back end. After plugging in the cables and configuring the IPs for the point-to-point setup, everything just worked.

I even plugged the 2 ESXi hosts into each other and used the InfiniBand NICs as the vMotion network, and again it just worked.


The reason I'm confused is that, from what I read, I thought at a minimum I would need either an InfiniBand switch or software to run as a subnet manager, but I didn't. It all just worked like a normal NIC.


Question:

1. Do you not need a subnet manager for 10 gig Ethernet InfiniBand adapters?

2. I swore I read that end-to-end InfiniBand in ESXi 5.5 wasn't possible yet because the software that ran the subnet manager didn't work on ESXi 5.5 (it works on 5.1). Is that not true?

3. Is there some limitation to these cards just working as normal NICs without any extra software besides the drivers?


If I knew I could just use these cards without any extra headaches, I would have purchased 3 more instead of my expensive 10G Intel NICs for the storage back end.

Thanks guys
 

britinpdx

Active Member
Feb 8, 2013
355
159
43
Portland OR
Not at all the expert on these matters, but I think that the devil is in the (subtle) details ...

The ConnectX-2 EN range of cards are 10GbE adapters (in other words, Ethernet NICs, just like regular 1GbE NICs, only 10x faster).

The ConnectX-2 VPI range of cards are "selectable" adapters. VPI is an acronym for "Virtual Protocol Interconnect". These cards support either InfiniBand or 10GbE. When you install the Mellanox driver, it allows the port protocol on the card to be selected between IB and 10GbE.
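For what it's worth, on a Linux box with the mlx4 driver loaded, the port protocol of a VPI card can usually be flipped through sysfs. This is just a sketch, and the PCI address below is a placeholder you'd replace with your own:

```shell
# Find the card's PCI address (the address used below is a placeholder)
lspci | grep -i mellanox

# Show the current protocol for port 1 ("ib" or "eth")
cat /sys/bus/pci/devices/0000:04:00.0/mlx4_port1

# Switch port 1 to Ethernet mode (as root); echo "ib" to switch back
echo eth > /sys/bus/pci/devices/0000:04:00.0/mlx4_port1
```

Under ESXi or Windows the same selection is done through the Mellanox driver/tools rather than sysfs.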

InfiniBand requires a subnet manager, which can be either hardware based (built into a switch) or software based, such as OpenSM.
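As a rough sketch of what that looks like on a Linux host with the OFED tools installed (the port GUID below is a placeholder):

```shell
# Check the local IB port: with no subnet manager on the fabric the port
# sits in the INIT state; once an SM is running it transitions to ACTIVE
ibstat

# Start OpenSM in daemon mode on the first available port
opensm -B

# Or bind it to a specific port GUID (placeholder value)
opensm -B -g 0x<port_guid>
```

One copy of the subnet manager per fabric is enough; it can run on any host on that fabric.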

I run my ConnectX-2 VPI cards with the IB protocol under Server 2012, so I run OpenSM. I've never run the cards in 10GbE mode, so I can't comment on how they work under those circumstances.

So I think that the ConnectX-2 EN cards do not require a subnet manager. I could be wrong though, and there are folks here who are "skilled in the art" and can provide better guidance.
 

mrkrad

Well-Known Member
Oct 13, 2012
1,244
52
48
Ethernet does not use a subnet manager. You can almost always plug 10GbE direct from server to server (DAC cables rule!).

InfiniBand is a connection-based protocol; Ethernet is a little more complex.

A switch can act as a traffic cop, allowing scaling and buffering, but it can also add latency when overloaded and ruin the party for everyone :)
 

DBayPlaya2k3

Member
Nov 9, 2012
72
4
8
Houston
Britinpdx and mrkrad, thank you for the awesome explanation. That definitely helps clear things up. So basically, to run the InfiniBand protocol you need the VPI adapter (since it lets you select between Ethernet and InfiniBand). If you run the EN card (Ethernet only), it works just like a normal NIC; nothing else is required besides a cable and an IP.

Question: what's the advantage of running in InfiniBand mode vs Ethernet mode for the VPI card?

My best guess is that the InfiniBand protocol might have lower latency vs Ethernet, but I'm not sure if that's the only advantage.

In the speed category, I've been able to saturate the 10G ConnectX-2 card to 90% (900 MB/s), and getting faster is only a limitation of my storage.
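As a back-of-envelope sanity check on those numbers (my arithmetic, not from a datasheet; the overhead figures assume a standard 1500-byte MTU and TCP/IPv4):

```python
# Rough throughput figures for a 10GbE link.

LINE_RATE_GBPS = 10                         # nominal 10GbE signaling rate
raw_mb_per_s = LINE_RATE_GBPS * 1000 / 8    # decimal megabytes per second
print(f"Raw line rate: {raw_mb_per_s:.0f} MB/s")  # 1250 MB/s

# Per-frame overhead for 1500-byte MTU TCP/IPv4 traffic:
# 8 B preamble + 14 B Ethernet header + 4 B FCS + 12 B inter-frame gap
# + 20 B IP + 20 B TCP, leaving 1460 B of payload per 1538 B on the wire.
payload_fraction = 1460 / 1538
print(f"Max TCP payload rate: {raw_mb_per_s * payload_fraction:.0f} MB/s")

measured = 900  # MB/s reported above
print(f"900 MB/s is {measured / raw_mb_per_s:.0%} of raw line rate")
```

So 900 MB/s is around 72% of the raw 1250 MB/s line rate, or closer to 76% of the achievable TCP payload rate, which is a solid real-world result when the storage is the bottleneck.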
 

britinpdx

Active Member
Feb 8, 2013
355
159
43
Portland OR
Question: what's the advantage of running in InfiniBand mode vs Ethernet mode for the VPI card?
InfiniBand certainly has lower latency, but it can also offer better bandwidth, depending upon the negotiated link. The VPI silicon can connect at 10, 20, or 40 Gb/s (SDR, DDR, and QDR respectively).
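To put numbers on those link generations (simple arithmetic: SDR, DDR, and QDR all use 8b/10b encoding, so the usable data rate is 80% of the signaling rate):

```python
# InfiniBand 4x link rates: signaling rate vs. usable data rate.
# 8b/10b encoding carries 8 data bits per 10 line bits, which is why
# a QDR link signals at 40 Gb/s but moves 32 Gb/s of data.
ib_rates_gbps = {"SDR": 10, "DDR": 20, "QDR": 40}

for name, signal in ib_rates_gbps.items():
    data = signal * 8 / 10  # 8 data bits per 10 line bits
    print(f"{name}: {signal} Gb/s signaling -> {data:.0f} Gb/s data")
```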

Some of the Mellanox cards are limited to 20Gb/s, but I have a hunch that the same base silicon is used, with firmware limiting the capability.

I started using InfiniBand for high-speed point-to-point connections between a few computers, but mostly because relatively new cards can be found rather inexpensively on eBay. You can also find SDR IB switches quite cheap ... a couple hundred dollars. That's "only" a 10Gb/s-capable switch (low end by IB standards) but significantly less than you will pay for a 10GbE switch.

QDR IB switches are another story though ... they run closer to $1000
 

dba

Moderator
Feb 20, 2012
1,478
181
63
San Francisco Bay Area, California, USA
Britinpdx and mrkrad, thank you for the awesome explanation. That definitely helps clear things up. So basically, to run the InfiniBand protocol you need the VPI adapter (since it lets you select between Ethernet and InfiniBand). If you run the EN card (Ethernet only), it works just like a normal NIC; nothing else is required besides a cable and an IP.

Question: what's the advantage of running in InfiniBand mode vs Ethernet mode for the VPI card?

My best guess is that the InfiniBand protocol might have lower latency vs Ethernet, but I'm not sure if that's the only advantage.

In the speed category, I've been able to saturate the 10G ConnectX-2 card to 90% (900 MB/s), and getting faster is only a limitation of my storage.
QDR InfiniBand will give you 40Gbits worth of speed, or 32Gbits of IPoIB (IP over InfiniBand), along with very low latency.