Mellanox ConnectX-2 VPI Dual Port Adapter - $49.99 OBO


Chuckleb

Moderator
Mar 5, 2013
1,017
331
83
Minnesota
Are you trying to use these in IB mode or 10GbE mode? I'm lost in the thread, so sorry if I'm repeating older info.

If you are using them in IB mode, any QSFP cable on eBay will work. Here are a couple of links:
Mellanox Infiniband 1M Cable QSFP to QSFP 40GB s Passive Copper | eBay
QLogic 40GBE Mellanox Infiniband IB External QSFP 3M Cable CBL1 0600328 IBM | eBay
Mellanox MT Opt 002 Passive Copper Cable QSFP to QSFP 2M 6ft | eBay

If you're using them in 10GbE mode, the MAM1Q00A-QSA adapter will convert from QSFP to SFP+; then you can use any SFP+ cable/connector. I buy my optics through fiberstore.com. I prefer the adapter approach over a QSFP-to-SFP+ cable since it lets me run whatever I want. Additionally, if you are only doing 10GbE, I'd get the ~$32 ConnectX-2 10GbE cards instead.

If you are hooking up more than two devices, you'll need to figure out switches. IB switches are (relatively) cheaper than 10GbE switches. With IB you will also need to run a subnet manager, but that can run anywhere in the fabric.
 

epicurean

Active Member
Sep 29, 2014
785
80
28
Thanks for your reply, Chuck. Sorry, another newbie question: what's IB mode? I just want to put one of these cards into my NAS and my 2 ESXi servers so that data shared between them is as fast as possible. Not sure if I need a switch for this, over and above my current HP 1810 managed switch.
 

Chuckleb

Moderator
Mar 5, 2013
1,017
331
83
Minnesota
The cards support two communication protocols: InfiniBand and Ethernet. IB is often used in high-performance computing, and within that world there is a subset called IPoIB, or IP over InfiniBand. It lets you use the IP stack and communicate between nodes with very large packet sizes; really the fastest way to communicate, but a bit more complicated to set up.
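As a concrete sketch of what IPoIB setup can look like on a plain Linux host (the interface name ib0, the addressing, and the use of connected mode are all assumptions for illustration; a Debian-style interfaces stanza is shown, not anything ESXi-specific):

```text
auto ib0
iface ib0 inet static
    address 10.10.10.1
    netmask 255.255.255.0
    # connected mode permits MTUs up to 65520; datagram mode is limited to ~4K
    pre-up echo connected > /sys/class/net/ib0/mode
    mtu 65520
```

The large MTU is where the "very large packet sizes" advantage comes from.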

The Ethernet side is easier, but it limits these cards to 10Gbps. That is more than fast enough for most home use, so you won't lose much.

No matter which route you go, a switch makes it super easy. Yes, you can run a direct cable from each ESXi host to the server, or build a ring, but I haven't done that.

There are lots of docs and discussions in the forum and main site on setting this up as well.
 

epicurean

Active Member
Sep 29, 2014
785
80
28
Thanks for responding Chuck, and congrats on the new baby arrival!
Are you able to point me to the relatively cheap IB switches? It's probably easier to connect 2-3 ESXi servers and the NAS through a switch.
 

rnavarro

Active Member
Feb 14, 2013
197
40
28
I have a switch for sale, going to follow up on PM
 

Chuckleb

Moderator
Mar 5, 2013
1,017
331
83
Minnesota
Definitely buy from forum folks if possible; better history and trust ;) But if you search eBay for "infiniband switch" you'll see all the ones that I would see. Mellanox switches are the most frequently used; you can also find some good Sun switches. There's a thread on here about them.
 

ultradense

Member
Feb 2, 2015
61
11
8
41
For ConnectX-2: look for the Sun X2821A-Z on eBay. I got two for $299 each. They are LOUD but very good, and have a built-in subnet manager.
 

moto316

Member
Feb 23, 2014
62
24
8
Got 3 of these working great for vMotion/VSAN traffic with a Mellanox 2036 switch on ESXi 6.0, FYI.
 

Chris Audi

New Member
Jun 2, 2015
11
1
3
51

Marsh

Moderator
May 12, 2013
2,645
1,496
113
The cable should work with the card.
I bought over 40 cables in various lengths from the same seller. They all worked OK.

The cable has the Mellanox logo, for whatever that's worth.
 

Chris Audi

New Member
Jun 2, 2015
11
1
3
51
Would that cable allow 40Gb via IB? Does the card need to be reflashed to work with ESXi?
 

Chris Audi

New Member
Jun 2, 2015
11
1
3
51
Using Windows 2012R2 as my host to flash the HCA-30024 card.
Install the HCA-30024 card; without any additional software, Windows 2012R2 should recognize it.

Download the firmware here: Firmware for ConnectX®-2 IB/VPI - Mellanox Technologies
Step 1: Install WinMFT_x64_3_5_0_16.exe
Step 2: Run this batch file to change the PSID to MT_0D81120009, the same PSID as the MHQH29B:
mst status
echo Look for mt26428_pci_cr0
pause
call flint -allow_psid_change -d mt26428_pci_cr0 -i fw-ConnectX2-rel-2_9_1000-MHQH29B_A3-A5.bin burn
ibstat
echo Firmware version status
pause
Step 3: Reboot (not sure this is necessary)
Step 4: Install MLNX_VPI_WinOF-4_80_All_win2012R2_x64 (this is the version I used; there is a newer 4_90 version now).
During installation, the installer should upgrade the card to the latest firmware if it recognizes the card.

The latest firmware version I ended up with is 2.10.720; Windows 2012R2 reports the card as RDMA capable.
I am still learning all this, but tell me, why do we need to do all this? Is it going to update to the latest firmware, or just a different type of firmware? I got my card today and was able to set both ports at 40000Mb without any update; I plugged it into the ESXi 6.0 host, it booted up and recognized the card.

I am still waiting for the cable.

upload_2015-6-22_19-52-57.png
 

lunarsunrise

New Member
Jun 23, 2015
3
0
1
56
Take a look at this article: http://www.servethehome.com/custom-firmware-mellanox-oem-infiniband-rdma-windows-server-2012/

(This isn't a Windows-specific problem, by the way.) The cards are not capable of RDMA and certain types of offloads without firmware 2.9.8350 or later. At 40 GT/s (or even 10 GT/s), networking can become CPU-bound very quickly.
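To put the 40Gb number in perspective, here is a rough back-of-the-envelope sketch (assuming a QDR link with 8b/10b line encoding, which is what the ConnectX-2 generation uses; FDR and later moved to a more efficient encoding):

```python
# Back-of-the-envelope link arithmetic for a QDR ConnectX-2 port.
# QDR signals at 10 GT/s per lane over 4 lanes, but 8b/10b encoding
# spends 10 line bits for every 8 data bits, so usable bandwidth is lower.
lanes = 4
signal_rate_gbps = 10        # per-lane signaling rate
encoding = 8 / 10            # 8b/10b line-code efficiency

signaling_gbps = lanes * signal_rate_gbps       # the "40Gb" headline number
data_gbps = signaling_gbps * encoding           # usable data rate
data_gigabytes_per_s = data_gbps / 8

print(signaling_gbps)          # 40
print(data_gbps)               # 32.0
print(data_gigabytes_per_s)    # 4.0
```

So "40Gb" QDR delivers roughly 32 Gb/s of data, which is still plenty to saturate a CPU that is doing all the packet processing in software; hence the value of RDMA and offloads.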

--

You don't necessarily need a switch. You need something that can run a subnet manager, and you need to choose a topology that avoids credit loops. Just as a simple example, I have a small cluster that consists of six compute nodes and a storage node. The compute nodes use one two-port adapter to form a ring and then two of them, opposite each other in the ring, have a second two-port adapter; those two are connected to each other and to the storage node.
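The ring-plus-storage layout described above can be sketched as a quick graph check (node names c0..c5 and "storage" are made up; this just makes the wiring and port counts concrete):

```python
# Six compute nodes in a ring; two opposite nodes (c0, c3) carry a second
# two-port adapter, linked to each other and to the storage node.
links = set()

def connect(a, b):
    links.add(frozenset((a, b)))

compute = [f"c{i}" for i in range(6)]
for i in range(6):                      # ring over the first adapter's ports
    connect(compute[i], compute[(i + 1) % 6])
connect("c0", "c3")                     # second adapters, opposite corners
connect("c0", "storage")
connect("c3", "storage")

ports = {}                              # count ports in use per node
for link in links:
    for node in link:
        ports[node] = ports.get(node, 0) + 1

# c0 and c3 use 4 ports each (two dual-port adapters); everyone else uses 2.
print(ports["c0"], ports["c3"], ports["c1"], ports["storage"])  # 4 4 2 2
```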

A few pages into this PDF, Mellanox shows diagrams of lots of possible Infiniband topologies (and talks about issues like credit loops), if you're curious: http://www.mellanox.com/related-doc...ellanox-infiniband-interconnect-solutions.pdf

--

Just as a side note, one extra advantage of staying in Infiniband mode and using IPoIB (instead of switching the adapter to Ethernet mode) is that it doesn't prevent you from using RDMA. Right now, your lab is almost certainly set up to exclusively use IP; and some of your software may not support other protocols; fine. But if/when you want to experiment, you can do that incrementally, which is nice.
 

Chris Audi

New Member
Jun 2, 2015
11
1
3
51
How do I stay in IPoIB? Do I need a switch? I tried to follow InfiniBand install & config for vSphere 5.5 | Erik Bussink to install
OpenSM. Everything seemed to install, but when I ran the step

Setting MTU and Configuring OpenSM

After the reboot we have two more commands to pass.

  • esxcli system module parameters set -m=mlx4_core -p=mtu_4k=1
it told me IB is not running and there is no "partitions.conf". I'm not sure if that's because I'm on ESXi 6.0 while those instructions are for 5.5, but they aren't working for me. So do I have to get the switch too?
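For context on that error: OpenSM's partitions.conf is a small text file defining the fabric's partitions. A minimal default partition looks roughly like the fragment below (0x7fff is the conventional default partition key and mtu=5 requests a 4K MTU; confirm the exact syntax and file location against the OpenSM documentation for your build):

```text
Default=0x7fff, ipoib, mtu=5 : ALL=full;
```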

On a side note: I ordered 3 more HCA 30024 700EX2 Q cards from another vendor and they sent me something like
Is that better than the HCA 30024 700EX2 Q, and should I update the firmware to 2.10? I can't find 2.10 firmware at Mellanox for this card, only 2.9.
 

epicurean

Active Member
Sep 29, 2014
785
80
28
If someone can direct me to a noobie-friendly guide for the proper setup of these cards in an ESXi environment, it would be much appreciated.
I have a card in an Xpenology NAS and 2 ESXi servers, and I'm not sure where to start because it all looks very technical.

PS - I do have an incoming Sun switch, but that thing is really loud! I'd prefer an option that doesn't require the switch as well.