Which ConnectX cards should I get?


MegaMan69

New Member
Jan 21, 2023
So I've got an upgrade itch and I'm trying to figure out which ConnectX cards to get. Should I get the ConnectX-3, the ConnectX-3 Pro, or the ConnectX-4?

The only reason I'm looking at the 4 is that the comparison table says Virtualization | Multi Host | 4 Hosts. Not sure if that's something worth paying more for.

I just want to play around with VMware stuff, so I'm not sure which cards are supported by which versions.

I'm also not sure about the ConnectX-4 speed tiers. It looks like that's when the lineup moved to 25G/50G/100G, making all the CX3 stuff EOL. The CX4 stuff is still relatively expensive.

So the 4 is probably too expensive to get multiple cards. The ConnectX-3 supports RoCE v1; the ConnectX-3 Pro and newer support RoCE v2 (and v1). That's why I was going to go with the Pro.

Then there are all the revisions. Apparently you don't want Rev. A1, but it looks like they changed at some point to plain numbers. So is Rev. 01 newer than Rev. A5 or A9?

I guess my questions are

1. Are the Pro cards worth an extra $15 or so? It's not that much more, really.

2. Can anyone clarify the revision structure? I'm trying to piece this together from old forum posts and card manufacture dates.

3. Should I wait until I can get Gen 4? Or does that not really matter because you can just pass a vNIC to any VMs?

I've only played with ESXi for a total of a couple days, so I basically know nothing about it.

Thanks
 

mach3.2

Active Member
Feb 7, 2022
ESXi 8 dropped driver support for CX3/CX3 Pro cards.

Future versions of ESXi will likely drop support for CX4 cards once NVIDIA drops support for the CX4 (and it looks like they might do that soon), but don't quote me on this.


If you're doing PCIe passthrough for those cards, then it's not an issue, as long as your guest VMs have driver support for them.
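
A quick way to confirm that from inside the guest is something like this little Python sketch (my assumptions: a Linux guest with sysfs mounted; 0x15b3 is the Mellanox PCI vendor ID). It lists the passed-through ConnectX devices and shows which kernel driver claimed them.

Code:
#!/usr/bin/env python3
# List PCI devices with the Mellanox vendor ID (0x15b3) and show which
# kernel driver claimed them, to confirm the passed-through NIC is
# actually usable inside the guest. Assumes a Linux guest with sysfs.
import os

PCI_ROOT = "/sys/bus/pci/devices"
MELLANOX_VENDOR = "0x15b3"

for dev in sorted(os.listdir(PCI_ROOT)):
    path = os.path.join(PCI_ROOT, dev)
    try:
        with open(os.path.join(path, "vendor")) as f:
            vendor = f.read().strip()
    except OSError:
        continue
    if vendor != MELLANOX_VENDOR:
        continue
    driver_link = os.path.join(path, "driver")
    driver = os.path.basename(os.readlink(driver_link)) if os.path.islink(driver_link) else "none"
    print(f"{dev}: Mellanox device, driver = {driver}")

CX3/CX3 Pro should show up bound to mlx4_core and CX4 or newer to mlx5_core; if it says "none", the guest kernel has no driver for the card.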
 

MegaMan69

New Member
Jan 21, 2023
Yeah, I was looking at the compatibility list and couldn't find 8 anywhere. So that sucks. I may just be stuck with slow 10 gig for now.
 

Stephan

Well-Known Member
Apr 21, 2017
Germany
For ESXi to be future-proof you really need CX5, unless you are content running older ESXi versions. But in production that is risky, because over time they develop security problems. ESXi 7 will run two more years, into 2025. Then you pay extra for two more years of dubious support.

For Proxmox, CX3 non-Pro with 56 Gbps "FDR" DAC cables is the most price-efficient, provided you have a Mellanox switch that supports 56 Gbps; otherwise 40 Gbps is the limit. Rev. A2 to A5 are OK, avoid everything else. If you want to cross-flash, look on eBay for 649281-B21, Dell 6RKNM, or Oracle 7046442 (rough flashing sketch below).

RoCE v1/v2 is IMHO not worth the hassle. It needs working drivers and OS support. Just use the raw speed and get something Coffee Lake Refresh or later, or Cascade Lake or later, so the CPU can push out the traffic.
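
The cross-flash itself is usually just an mstflint query and burn. Here is a rough sketch, not a recipe: I'm assuming mstflint is installed, and the PCI address and firmware image name below are placeholders you would replace with your own card and the matching stock Mellanox image. Double-check the flags against the mstflint documentation before burning anything.

Code:
#!/usr/bin/env python3
# Rough mstflint cross-flash sequence for an OEM ConnectX-3.
# PCI_ADDR and FW_IMAGE are placeholders, not real values.
import subprocess

PCI_ADDR = "04:00.0"               # find yours with lspci
FW_IMAGE = "fw-ConnectX3-rel.bin"  # matching stock Mellanox image

# 1. Query first and note the current PSID (OEM cards report an
#    HP/Dell/Oracle PSID instead of the stock Mellanox one).
subprocess.run(["mstflint", "-d", PCI_ADDR, "query"], check=True)

# 2. Burn the stock image. -allow_psid_change is what permits a
#    cross-flash to firmware with a different PSID.
subprocess.run(
    ["mstflint", "-d", PCI_ADDR, "-i", FW_IMAGE, "-allow_psid_change", "burn"],
    check=True,
)

Power-cycle the box afterwards so the new firmware actually loads.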
 

CyklonDX

Well-Known Member
Nov 8, 2022
I would warn about some ConnectX dual 10 GbE cards on eBay that go really cheap. They are much older cards, just made to look like a newer model.
 

BackupProphet

Well-Known Member
Jul 2, 2014
Stavanger, Norway
olavgg.com
RoCE v1/v2 is IMHO not worth the hassle. It needs working drivers and OS support. Just use the raw speed and get something Coffee Lake Refresh or later, or Cascade Lake or later, so the CPU can push out the traffic.
RoCE/RDMA works great in Linux and is no hassle. You don't need to install drivers; it just works after proper configuration.
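
For what it's worth, a quick sanity check on the Linux side is just to look at what the kernel exposes under /sys/class/infiniband. This is a minimal sketch assuming the mlx4/mlx5 RDMA modules are loaded; a ConnectX port set up for RoCE should show link_layer Ethernet.

Code:
#!/usr/bin/env python3
# List the RDMA devices the kernel exposes and the link layer of each
# port. RoCE ports report "Ethernet", InfiniBand ports report "InfiniBand".
import os

IB_ROOT = "/sys/class/infiniband"

if not os.path.isdir(IB_ROOT):
    raise SystemExit("no RDMA devices found (is the mlx4/mlx5 RDMA driver loaded?)")

for dev in sorted(os.listdir(IB_ROOT)):
    ports_dir = os.path.join(IB_ROOT, dev, "ports")
    for port in sorted(os.listdir(ports_dir)):
        with open(os.path.join(ports_dir, port, "link_layer")) as f:
            link_layer = f.read().strip()
        print(f"{dev} port {port}: link_layer = {link_layer}")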
 

Stephan

Well-Known Member
Apr 21, 2017
Germany
@BackupProphet Not doubting you, but he said ESXi. I tried on ESXi 7.0 for an hour and failed. Same story on two bare-metal Windows Server 2019 boxes. Ceph seemed certain not to need or want it. But that's only my inflationarily challenged 2 cents.
 

MegaMan69

New Member
Jan 21, 2023
This is just homelab stuff, so I'm not really worried about much. Thanks for all the info. Now I just have to decide. I don't think I can afford to buy everything if I go with CX5: I need 4 NICs, a switch, a few DACs, and a 65-foot or so fiber cable to reach my desktop. It was only going to be about $300 for everything with CX3, but with CX4 or CX5 that's roughly the price of a single NIC, lol. I could just run ESXi 7; it's not like I'm running bleeding-edge hardware and need support. I guess I'll play with it for a bit and see how 10G works with vSAN.

Thanks
 

Stephan

Well-Known Member
Apr 21, 2017
Germany
An EMC SX6012 is 100 bucks, but needs a reflash. Five 2 m QSFP+ DAC cables at about 20 a piece = 100 bucks. 50 bucks for a Mellanox 40/56 Gbps fiber cable in 20 or 50 m length. Four CX3s from Oracle are another 120 bucks, plus 10 bucks for full-height slot brackets from AliExpress; those cards need a reflash, too. $380 all in. You'd need to be patient, though, hunt in the right places, and be quick.
 

oneplane

Well-Known Member
Jul 23, 2021
If you just want to play around with virtualisation, there are a ton of options that aren't VMware. That makes everything much easier and more flexible, and the concepts are all the same anyway.
 

metebalci

Member
Dec 27, 2022
Switzerland
I have a few CX3 Pro dual QSFP+ cards that I purchased quite cheap to try 40G. If you use them with the in-box OS driver, I guess they are fine. However, the NVIDIA drivers are EOL or EOL-ish; there is one, but it isn't being updated like the ones for CX4 and newer. Also, tools like TRex have dropped support for CX3. For long-term use I would get at least a CX4.
 