Mellanox MHQH29C-XTR ConnectX-2 10GbE direct connection help


Proudx

New Member
Sep 1, 2017
I purchased 3 of these dual-port cards to connect 3 ESXi servers in a ring. I thought I was purchasing the dual-port 10GbE SFP+ version and bought 3 SFP+ cables to connect the cards by mistake. I only want to run 10GbE mode. Can I just buy 3 QSFP cables, connect the cards directly, and run 10GbE mode? Does running with a QSFP cable force IB mode?

I read there's an adapter that converts QSFP to SFP+, but do I need that to run in 10GbE mode when the cards are connected directly?

Should I just return the cards and get the MNPH29C-XTR version? That version seems to cost a bit more on eBay.
 

i386

Well-Known Member
Mar 18, 2016
Mellanox has (or rather had) a cable with QSFP+ (NIC side) on one end and SFP+ (server or switch side) on the other for these cards.

Which protocol the cards use depends on their configuration. With the default setting they choose automatically based on what is on the other side of the cable.
Should I just return the cards and get the MNPH29C-XTR version? That version seems to cost a bit more on eBay.
It depends on what you want to achieve with these cards. If they should one day connect the servers to a switch or other clients, then get Ethernet cards. (InfiniBand-to-Ethernet gateways are expensive and introduce more complexity.)
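For what it's worth, on a Linux box with the mlx4 driver you can check or pin the port protocol through sysfs. The rough Python sketch below is only an illustration of the auto/eth/ib setting: the PCI address is made up (find yours with lspci), and ESXi's own Mellanox driver handles this differently.

```python
# Sketch: read and (optionally) force the ConnectX-2 port protocol via the
# mlx4 sysfs attributes on a Linux host. The PCI address is a placeholder;
# find yours with "lspci | grep Mellanox". Writing requires root.
from pathlib import Path

PCI_ADDR = "0000:04:00.0"          # hypothetical slot, adjust to your system
DEV = Path("/sys/bus/pci/devices") / PCI_ADDR

def show_port_types() -> None:
    """Print the current protocol ('ib', 'eth' or 'auto') of both ports."""
    for port in ("mlx4_port1", "mlx4_port2"):
        attr = DEV / port
        if attr.exists():
            print(port, "=", attr.read_text().strip())

def force_ethernet() -> None:
    """Pin both ports to Ethernet instead of leaving them on auto-sense."""
    for port in ("mlx4_port1", "mlx4_port2"):
        attr = DEV / port
        if attr.exists():
            attr.write_text("eth\n")

if __name__ == "__main__":
    show_port_types()
    # force_ethernet()  # uncomment (as root) to pin the ports to Ethernet
```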
 

Proudx

New Member
Sep 1, 2017
Mellanox has (or rather had) a cable with QSFP+ (NIC side) on one end and SFP+ (server or switch side) on the other for these cards.

Which protocol the cards use depends on their configuration. With the default setting they choose automatically based on what is on the other side of the cable.

It depends on what you want to achieve with these cards. If they should one day connect the servers to a switch or other clients, then get Ethernet cards. (InfiniBand-to-Ethernet gateways are expensive and introduce more complexity.)
OK, so I could directly connect the three ESXi hosts with QSFP cables and configure the cards to run 10GbE Ethernet?

In the future I do want to get a switch.

Couldn't I simply drop the 40GbE QSFP-to-SFP+ adapter on the ports to connect to a 10GbE switch? See the article below.

Using a 40GbE (QSFP+) NIC with a 10GbE Switch (SFP+)

Is 10GbE performance the same as with the cards that have native SFP+ connectors?

I got my dual-port QSFP ConnectX-2 cards for 42, btw.
 

i386

Well-Known Member
Mar 18, 2016
OK, so I could directly connect the three ESXi hosts with QSFP cables and configure the cards to run 10GbE Ethernet?
Yes, that should be possible.
Couldn't I simply drop the 40GbE QSFP-to-SFP+ adapter on the ports to connect to a 10GbE switch? See the article below.
I'm not sure if this adapter works with these cards; if you try it, report the results :D
Is 10GbE performance the same as with the cards that have native SFP+ connectors?
I don't expect huge differences between cards from the same vendor and generation.
I got my dual-port QSFP ConnectX-2 cards for 42, btw.
Every now and then you can find HP-branded ConnectX-3 VPI cards (56Gbit/s InfiniBand & 40GbE, dual port) for ~$50 on eBay.
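If you want to check the performance question yourself once the link is up, a quick single-stream TCP test between two hosts gives a ballpark number. iperf is the usual tool; this is just a rough Python sketch, with the port and the 1 GiB transfer size picked arbitrarily:

```python
# Rough single-stream TCP throughput check between two directly connected hosts.
# Run "python3 tput.py server" on one host, "python3 tput.py client <ip>" on the other.
# Port and transfer size are arbitrary choices for this sketch.
import socket, sys, time

PORT = 5201
CHUNK = 1 << 20            # 1 MiB send buffer
TOTAL = 1 << 30            # move 1 GiB per run

def server() -> None:
    # Accept one connection and count how much data arrives.
    with socket.create_server(("", PORT)) as srv:
        conn, addr = srv.accept()
        received = 0
        with conn:
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                received += len(data)
        print(f"received {received / 2**30:.2f} GiB from {addr[0]}")

def client(host: str) -> None:
    # Push TOTAL bytes as fast as the link allows and report the rate.
    payload = b"\x00" * CHUNK
    sent = 0
    start = time.monotonic()
    with socket.create_connection((host, PORT)) as sock:
        while sent < TOTAL:
            sock.sendall(payload)
            sent += len(payload)
    elapsed = time.monotonic() - start
    print(f"{sent * 8 / elapsed / 1e9:.2f} Gbit/s over {elapsed:.1f}s")

if __name__ == "__main__":
    client(sys.argv[2]) if sys.argv[1] == "client" else server()
```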
 

Proudx

New Member
Sep 1, 2017
The plan is to have fast iSCSI storage and CIFS storage for all my ESXi hosts and VMs.

I have three ESXi machines in my lab. On one of them I will dedicate most of the resources to running a NAS software VM and pass an array of hard disks through to the NAS VM as RDMs. Then I would present an iSCSI datastore and some SMB/CIFS shares from that NAS to the other 2 ESXi hosts.
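For the RDM passthrough I'm assuming something like the vmkfstools route would do; here's a rough sketch of what I have in mind. The NAA IDs, datastore and folder names are placeholders, and whether virtual (-r) or physical (-z) compatibility mode is right depends on the NAS software.

```python
# Sketch: create RDM mapping files for a set of local disks so they can be
# attached to the NAS VM. All IDs/paths below are placeholders; list real
# devices with "esxcli storage core device list" on the host.
import subprocess

DISKS = [
    "naa.5000c500aaaaaaa1",    # placeholder NAA IDs of the passthrough disks
    "naa.5000c500aaaaaaa2",
]
VM_DIR = "/vmfs/volumes/datastore1/nas-vm"   # placeholder VM folder

for i, naa in enumerate(DISKS, start=1):
    device = f"/vmfs/devices/disks/{naa}"
    mapping = f"{VM_DIR}/rdm-disk{i}.vmdk"
    # -z = physical compatibility RDM; use -r instead for virtual compatibility
    subprocess.run(["vmkfstools", "-z", device, mapping], check=True)
    print("created", mapping)
```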

Then I could connect the other two ESXi hosts to the NAS VM and access the iSCSI datastore at 10GbE.
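On the initiator side of those two hosts, I'm assuming it's mostly enabling the software iSCSI adapter and adding a send target via esxcli, roughly like below. The vmhba name and target IP are placeholders, and a VMkernel port on the 10GbE vSwitch would still need to be bound to the adapter.

```python
# Sketch: enable the ESXi software iSCSI initiator and point it at the NAS VM.
# Adapter name and target IP are placeholders for this example.
import subprocess

ISCSI_HBA = "vmhba33"           # placeholder: check "esxcli iscsi adapter list"
TARGET_IP = "10.10.10.1"        # placeholder: NAS VM's address on the 10GbE net

def run(*args: str) -> None:
    # Echo the command, then run it and fail loudly on errors.
    print("+", " ".join(args))
    subprocess.run(args, check=True)

run("esxcli", "iscsi", "software", "set", "--enabled=true")
run("esxcli", "iscsi", "adapter", "discovery", "sendtarget", "add",
    f"--adapter={ISCSI_HBA}", f"--address={TARGET_IP}:3260")
run("esxcli", "storage", "core", "adapter", "rescan", f"--adapter={ISCSI_HBA}")
```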

As far as the virtual machines being able to see the CIFS shares on the NAS: without adding a second NIC assigned to the 10GbE vSwitch in ESXi, I'm not sure this would work, as they would try to route out through the 1GbE vSwitch. And if I simply assigned the one NIC to the 10GbE vSwitch, I wouldn't have internet access on the VM. So 2 NICs would be needed, unless there's a way to put the 10GbE adapter on the same ESXi vSwitch as the 1GbE adapter.
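If I understand guest routing right, the two-vNIC setup should sort itself out by ordinary longest-prefix matching: the storage subnet is directly attached via the 10GbE vNIC, and everything else goes out the default gateway on the 1GbE vNIC. A toy illustration of that decision (both subnets are made up for the example):

```python
# Toy illustration of how a guest with two vNICs picks an interface:
# the most specific matching route wins, otherwise traffic uses the
# default gateway. Both subnets below are made-up examples.
import ipaddress

ROUTES = [
    (ipaddress.ip_network("10.10.10.0/24"), "10GbE vNIC (storage vSwitch)"),
    (ipaddress.ip_network("192.168.1.0/24"), "1GbE vNIC (LAN vSwitch)"),
    (ipaddress.ip_network("0.0.0.0/0"), "1GbE vNIC via default gateway"),
]

def pick_interface(dest: str) -> str:
    addr = ipaddress.ip_address(dest)
    matches = [(net, via) for net, via in ROUTES if addr in net]
    # longest prefix (most specific route) wins
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(pick_interface("10.10.10.5"))   # NAS traffic   -> 10GbE vNIC
print(pick_interface("8.8.8.8"))      # internet      -> default gateway on 1GbE
```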

Perhaps my plan is flawed and I would need a 10GbE SFP+ switch with 3 or more SFP+ ports and 3 single 10GbE modules connected to it. This way I only have to have 1 virtual NIC, and it can get routed to the internet and the 10GbE hosts as needed.
 