OK, so I have InfiniBand in my home lab..... yay???
First-time poster here, but I have tried to read everything I can find. I have basic Linux experience, but excellent Windows/Cisco/networking experience. I am looking for some help understanding what my next steps should be. This started out as an adventure and now has me almost totally confused.
The details:
Three Lenovo RD640 servers running ESXi 7.0 U3, each with one Mellanox CX354A dual-port card (the rev A2-A5 hardware, not rev A1, and NOT the Pro version of the card).
The latest firmware was flashed onto each card using my workbench PC, and the cards were then installed into each server, following the instructions in the post found here:
https://forums.servethehome.com/ind...x-3-ib-ethernet-dual-port-qsfp-adapter.20525/
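For anyone curious, the flashing step on the workbench PC looks roughly like this with mstflint on Linux (the PCI address and firmware file name below are placeholders, so treat this as a sketch rather than my exact commands; the linked post has the full walkthrough):

# find the card's PCI address
lspci | grep -i mellanox
# check the currently installed firmware
mstflint -d 04:00.0 query
# burn the downloaded firmware image
mstflint -d 04:00.0 -i fw-ConnectX3.bin burn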
Each card is connected (using one port on each card) to a Mellanox IS5022 8-port InfiniBand switch with a Mellanox QSFP+ cable.
The switch port link LEDs light up green only when everything is connected AND I have started the OpenSM subnet manager.
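For anyone wondering what "start the OpenSM subnet manager" involves: the IS5022 is an unmanaged switch, so a host somewhere on the fabric has to run the subnet manager before any IB port will go active. On a Linux host or VM with the opensm package, it is roughly this (a sketch, not necessarily exactly how I have it running):

# run OpenSM once in the foreground on the first active port
opensm
# or run it as a background daemon with a log file
opensm -B -f /var/log/opensm.log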
WHAT'S NEXT? Do I need to keep the cards passed through to a specific VM on each server (as they are now), or can VMware handle the cards using the InfiniBand protocol directly? Or do I switch them to Ethernet and use them that way on the VMware hosts? Thoughts and opinions are welcome.
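For anyone who wants to poke at this from the ESXi shell, this is my understanding of how to see whether the host itself claims the card or whether it is only free for passthrough (just a sketch):

# does ESXi see the ConnectX-3 on the PCI bus, and is a driver bound to it?
esxcli hardware pci list | grep -i -A 16 mellanox
# does ESXi get a usable NIC from it? (only shows up if a native driver claims the card)
esxcli network nic list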
The next questions are:
1. Only one port on the card shows up in the Windows VM on the ESXi 7.0.x host. Is this normal? I am using passthrough on the ESXi host to bring the card "into" the Windows VM.
2. Do I need a different or specific VMware- or Mellanox-provided driver? Does VMware support InfiniBand at all? I keep finding conflicting posts on this, not to mention posts from 2013, which don't help at all.
3. I have a breakout cable, QSFP+ to 4x 10Gb SFP+, but it never seems to work. It did, however, work on one of the four SFP+ ports before the firmware upgrade to the cards. I don't believe the IS5022 8-port switch supports QSFP+ breakout cables, but shouldn't the individual NICs support breaking 40Gb Ethernet out to 4x 10Gb SFP+?
4. My ultimate goal is to transfer data between VMs and/or vMotion VMs between the physical servers over the 40Gb network. I don't care whether it runs over InfiniBand or Ethernet; I just want it to work. Obviously, if I go the Ethernet route, I will have to switch the port protocol on the cards to Ethernet (see the mlxconfig sketch after this list).
5. I would also like a way to use the 40Gb cards as both Ethernet and InfiniBand: one port set to InfiniBand and the second set to Ethernet, so I can connect the Ethernet port to my 10Gb switch and reach the rest of the network... is this possible? (Again, see the sketch below for how I think the per-port setting works. I am guessing this only helps if VMware actually supports InfiniBand, which I don't believe it does anymore.)
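Regarding 4 and 5: my understanding is that the port protocol on these VPI cards is changed with mlxconfig from the Mellanox Firmware Tools, something like the sketch below. The MST device name is just a placeholder from my reading (LINK_TYPE values: 1 = InfiniBand, 2 = Ethernet, 3 = VPI/auto), so please correct me if I have this wrong:

mst start                                   # load the MST driver
mst status                                  # find the device name, e.g. /dev/mst/mt4099_pci_cr0
mlxconfig -d /dev/mst/mt4099_pci_cr0 query  # show current LINK_TYPE_P1 / LINK_TYPE_P2
# question 4: both ports as Ethernet
mlxconfig -d /dev/mst/mt4099_pci_cr0 set LINK_TYPE_P1=2 LINK_TYPE_P2=2
# question 5: port 1 InfiniBand, port 2 Ethernet
mlxconfig -d /dev/mst/mt4099_pci_cr0 set LINK_TYPE_P1=1 LINK_TYPE_P2=2
# reboot (or reload the driver) afterwards for the change to take effect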
If you made it this far in my post, THANK YOU! I look forward to hearing what everyone has to say about this!