InfiniBand QDR/FDR 40Gb questions and help


xtantaudio

New Member
Apr 8, 2022
OK, so I have InfiniBand in my home lab... yay???

First-time poster here, but I have tried to read everything I can. I have basic Linux experience but solid Windows/Cisco/networking experience. I am looking for some help understanding what my next steps should be. This started out as an adventure and now has me almost totally confused.

The details:
3 Lenovo RD640 servers running ESXi 7.0.3, each with 1 Mellanox ConnectX-3 CX354A dual-port card (version 2-5, not version 1, and NOT the Pro version of the card)
Latest firmware flashed on each card using my workbench PC, then installed in each server following the instructions in the post found here:
https://forums.servethehome.com/ind...x-3-ib-ethernet-dual-port-qsfp-adapter.20525/
Each card is connected (using one port on each card) to the Mellanox IS5022 8-port InfiniBand switch using a Mellanox QSFP+ cable.
The switch port links light up and show green when everything is connected, but only after I start the OpenSM subnet manager.
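
For anyone following along, this is the kind of sanity check that can be run from a Linux box on the fabric once OpenSM is up (needs the infiniband-diags package; the LID in the ibping line is just an example):

Code:
# Local HCA port should show "State: Active" / "Physical state: LinkUp"
ibstat

# Walk the fabric: each connected IS5022 port should list its peer HCA
iblinkinfo

# Reachability test between two nodes (get LIDs from ibstat)
ibping -S        # on the first node: run the ibping responder
ibping -L 2      # on the second node: ping LID 2 (example LID)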

WHAT'S NEXT? Do I need to have the cards set to passthrough into a specific VM on each server (like they are now), or can VMware handle the cards using the InfiniBand protocol directly? Do I switch them to Ethernet and use them that way on the VMware hosts? Thoughts and opinions are welcome.
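
For context, this is one way to check from the ESXi shell whether the host itself sees the card, as opposed to it being reserved for passthrough:

Code:
# The card should appear here whether or not a driver claims it
lspci | grep -i mellanox

# Ports only show up as vmnicX if an Ethernet driver has claimed them
esxcli network nic list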

The next questions are:
1. Only one port on the card shows up in the Windows VM on the ESXi 7.0.x server; is this normal? I am using passthrough on the ESXi server to bring the card "into" the Windows VM.

2. Do I need a different or specific VMware- or Mellanox-provided driver? Does VMware support InfiniBand at all? I keep finding conflicting posts on this issue, not to mention posts from 2013, which don't help at all. (See the driver-check commands after this list.)

3. I have a breakout cable, QSFP+ to 4x 10Gb SFP+, but it never seems to work. It did, however, work on one of the 4 SFP+ ports before the firmware upgrade on the cards. I don't believe the IS5022 8-port switch supports QSFP+ breakout cables, but shouldn't the individual NICs support breaking out 40Gb Ethernet to 4x 10Gb SFP+?

4. My ultimate goal is to be able to do data transfers between VMs and/or vMotion of VMs between physical servers over the 40Gb network. I don't care if it is over InfiniBand or Ethernet; I just want it to work. Obviously, if I end up using Ethernet, I will have to switch the cards' port protocol to Ethernet. (There is a vMotion sketch after this list too.)

5. I would like to find a way to use the 40Gb cards as Ethernet AND InfiniBand: one port set to InfiniBand and the second to Ethernet, so I can connect the Ethernet port to my 10Gb switch to reach the rest of the network... is this possible? (See the mlxconfig sketch below. I am guessing this only helps if VMware actually supports InfiniBand, which I don't believe it does anymore.)
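
On question 2, this shows which Mellanox driver, if any, is installed and bound on the ESXi shell (vmnic4 is just an example name; substitute your own uplink):

Code:
# List any installed Mellanox driver VIBs
esxcli software vib list | grep -i mlx

# Show the driver and firmware bound to a specific uplink
esxcli network nic get -n vmnic4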
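
On question 4, assuming the cards end up in Ethernet mode with a port group on the 40Gb vSwitch, a vMotion-tagged vmkernel port is roughly this (the names and addresses below are made up; substitute your own):

Code:
# Create a vmkernel interface on the 40Gb port group
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=PG-vMotion

# Give it a static address on a dedicated vMotion subnet
esxcli network ip interface ipv4 set -i vmk2 -t static -I 10.40.0.11 -N 255.255.255.0

# Tag the interface for vMotion traffic
esxcli network ip interface tag add -i vmk2 -t VMotion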
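
On question 5, the per-port protocol on these ConnectX-3 cards is set with mlxconfig from the Mellanox MFT tools; the /dev/mst device name below is what a ConnectX-3 typically enumerates as, so check mst status for yours:

Code:
# Start the Mellanox software tools service and find the device
mst start
mst status

# LINK_TYPE values on ConnectX-3: 1 = IB, 2 = ETH, 3 = VPI (auto-sense)
# Port 1 stays InfiniBand, port 2 becomes Ethernet
mlxconfig -d /dev/mst/mt4099_pci_cr0 set LINK_TYPE_P1=1 LINK_TYPE_P2=2

# Reboot (or reload the driver) for the new port types to take effect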

If you made it this far in my post, THANK YOU! I look forward to hearing what everyone has to say on this!
 

necr

Active Member
Dec 27, 2017
hey xtantaudio,

VMware only had InfiniBand working prior to 6.5.
Your 5022 is InfiniBand-only, meaning you'd have to replace the switch with an Ethernet one if you want to continue with VMware. You should be able to run a Proxmox or Hyper-V cluster on the same hardware, but not the latest versions of VMware.
I've never seen 40G-to-10G breakouts work on InfiniBand.
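
If you do try Proxmox, the IB user tools and subnet manager are a quick install on a Debian-based node (mlx4_ib for ConnectX-3 is already in the stock kernel; package and service names are from Debian):

Code:
apt install infiniband-diags opensm
systemctl enable --now opensm
ibstat    # port should go to "State: Active" once opensm is running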
 

xtantaudio

New Member
Apr 8, 2022
necr,

Thank you for the response. I have been messing around with all of this in my spare time, and I agree that I am probably just going to have to go with an Ethernet switch and use the ports on the Mellanox cards as Ethernet only.

The only reason I was running VMware was for my Cisco Call Manager, which (for some f'ed up reason) ONLY works on VMware. Maybe I will switch my 2 other servers over to Proxmox or Hyper-V for testing InfiniBand at some point.

More than likely I will just keep everything Ethernet for now. Is there a switch you would recommend? I see all kinds of stuff about the Brocade units...