HP 649281-B21 ConnectX-3 VPI MCX354A-FCBT - $25 shipped on eBay


Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
Can anyone confirm if this works with ESXi 6.7U1 (and possibly passes through to FreeNAS - I know Chelsio is the FreeNAS "preferred" card)? I don't see it on the HCL.
Yes, CX3 works in ESX; I've never passed one through to FN.
Not sure it has actual advantages over a virtualized NIC managed by ESX.
Bought an ICX 6450-24 today and planned on using the integrated X540-AT2 + MikroTik S+RJ10, but I think 2 of these (1 per ESXi host) + 2 x DAC may be a better play.
Told you so ;)

But I am not clear on what you want to do there. The 6450-24 has SFP+, so you need a QSFP to SFP+ converter like the
Mellanox MAM1Q00A-QSA Ethernet Cable Adapter 40Gb/s to 10Gb/s QSFP to SFP+
With that you can attach the card to the switch.
You can do a direct box2box connection with the second port though (with a QSFP cable).
But with CX3 you can't pass through half of the card to FN, nor can you pass an SR-IOV adapter to FN.

So in your case it would be good to leave the card in ESX and create two (d)vSwitches, one for the direct connection (10.10.5.x) and one for the regular ESX traffic (10.10.1.x).
All ESX services that require the fast link (NFS/iSCSI) will be mounted on the .5 net.
For all other communication you use the .1 net.
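If you'd rather script that than click through the vSphere UI, here's a rough pyVmomi sketch of the storage-side vSwitch/VMkernel setup (only the 10.10.5.x side is shown, the .1 side is the same idea; hostname, credentials, vmnic number and IP are placeholders, not anything from this thread):

Code:
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim
import ssl

# connect straight to the host (lab box with a self-signed cert)
ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi1.lab", user="root", pwd="secret", sslContext=ctx)
host = si.content.searchIndex.FindByDnsName(None, "esxi1.lab", False)
net = host.configManager.networkSystem

# vSwitch uplinked to one ConnectX-3 port (vmnic4 is a placeholder name)
net.AddVirtualSwitch(
    vswitchName="vSwitch-Storage",
    spec=vim.host.VirtualSwitch.Specification(
        numPorts=128, mtu=9000,
        bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic4"])))

# port group + VMkernel port on the 10.10.5.x net; NFS/iSCSI binds here
net.AddPortGroup(vim.host.PortGroup.Specification(
    name="Storage-10.10.5", vlanId=0, vswitchName="vSwitch-Storage",
    policy=vim.host.NetworkPolicy()))
net.AddVirtualNic("Storage-10.10.5", vim.host.VirtualNic.Specification(
    ip=vim.host.IpConfig(dhcp=False, ipAddress="10.10.5.11",
                         subnetMask="255.255.255.0"),
    mtu=9000))

# repeat with a second vSwitch/port group for the regular 10.10.1.x traffic
Disconnect(si)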
 

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
Any major advantages for this card over my older ConnectX-2 VPI?
I think RDMA v1 might not have been on CX2, but I'm not sure.
56Gb/s (FDR) if you have a direct connection or a capable switch.
ESX driver support from MLX might be CX3 and up only.

But all in all, unless you are missing something or need to expand, I don't think it's worth the upgrade. They come in waves and it's unlikely those will end anytime soon, so you can always get some in the next run.
 

e97

Active Member
Jun 3, 2015
323
193
43
Got mine today.

Rev A2

Well packaged and excellent condition. Low profile bracket.
 

arglebargle

H̸̖̅ȩ̸̐l̷̦͋l̴̰̈ỏ̶̱ ̸̢͋W̵͖̌ò̴͚r̴͇̀l̵̼͗d̷͕̈
Jul 15, 2018
657
244
43
@arglebargle Wanna get some for the t730/DT122BEs?
I'm a few weeks ahead on this one, I picked up a couple of these around $30/ea and a couple Sun branded cards for $27-28 for my other machines. Thanks for the heads up though!
 

WANg

Well-Known Member
Jun 10, 2018
1,302
967
113
46
New York, NY
I'm a few weeks ahead on this one, I picked up a couple of these around $30/ea and a couple Sun branded cards for $27-28 for my other machines. Thanks for the heads up though!
Let me know how the t730 copes with them, and whether they work better in SR-IOV-land... I still haven't had a chance to install those ConnectX-2 IB/40GbE cards I got a while ago.
 

arglebargle

H̸̖̅ȩ̸̐l̷̦͋l̴̰̈ỏ̶̱ ̸̢͋W̵͖̌ò̴͚r̴͇̀l̵̼͗d̷͕̈
Jul 15, 2018
657
244
43
Let me know how the t730 copes with them, and whether they work better in SR-IOV-land... I still haven't had a chance to install those ConnectX-2 IB/40GbE cards I got a while ago.
They work great with the T730. SR-IOV is officially supported and works fine; I think I set the system fan a notch or two above half to keep the NIC within the operating temp range.

One note on operating temp: the official upper limit is 45C IIRC, but I had one chugging away in my mini ITX shoebox NAS at around 70C for quite a while before I noticed how hot it was running and added a fan to it.
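If anyone wants to keep an eye on that, here's a small sketch that polls the ASIC temperature with mget_temp from the Mellanox Firmware Tools (assuming MFT is installed and the MST device name matches your card - both are my assumptions, not something mentioned above):

Code:
import subprocess

# read the ConnectX-3 ASIC temperature via mget_temp (MFT).
# "mst status" lists the device names; mt4099 is the usual ConnectX-3 entry.
def connectx3_temp(dev="/dev/mst/mt4099_pci_cr0"):
    out = subprocess.run(["mget_temp", "-d", dev],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()  # the tool prints degrees Celsius

if __name__ == "__main__":
    print("ConnectX-3 ASIC temp:", connectx3_temp(), "C")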
 

svtkobra7

Active Member
Jan 2, 2017
362
87
28
Yes, CX3 works in ESX; I've never passed one through to FN.
Not sure it has actual advantages over a virtualized NIC managed by ESX.
  • I wouldn't think using DirectPath I/O would have advantages either (but I'm not the pro here - that is you ;)), just trying to make sure I perform appropriate due diligence to avoid any "gotchas" later.
Told you so ;)
  • Refresh my memory, please? :)
  • (I don't recall, but I'm sure you were right, as always, o/c you are the man, and more than helpful!)
But I am not clear on what you want to do there. The 6450-24 has SFP+, so you need a QSFP to SFP+ converter like the
Mellanox MAM1Q00A-QSA Ethernet Cable Adapter 40Gb/s to 10Gb/s QSFP to SFP+
With that you can attach the card to the switch.
You can do a direct box2box connection with the second port though (with a QSFP cable).
But with CX3 you can't pass through half of the card to FN, nor can you pass an SR-IOV adapter to FN.
  • Objective = cost-effective 10Gb ... while I have integrated 2 x X540-AT2 per server (and was planning on connecting using MikroTik S+RJ10), another helpful member pointed out that it would be more cost-effective + performant to go with a Mellanox ConnectX-2 + DAC.
  • I missed the QSFP bit (obviously I need SFP+) ... stupid (yet crucial) oversight.
So in your case it would be good to leave the card in ESX and create two (d)vSwitches, one for the direct connection (10.10.5.x) and one for the regular ESX traffic (10.10.1.x).
All ESX services that require the fast link (NFS/iSCSI) will be mounted on the .5 net.
For all other communication you use the .1 net.
  • By direct connection you mean?
  • [If that is a reference to a prior discussion, there's no need (unless I'm missing something) for a "switchless" connection with a proper switch / adapters in place (soon).]
  • Don't disagree with subnetting o/c.
[sidebar]
  • Since you didn't tell me up front that I needed a 3rd ESXi host (joking), but that being on my roadmap, and since I'm also building a new workstation (which I leave on 24/7 anyway), would it make sense to "dual-purpose" that build as a 3rd host, but only to serve as a witness?
  • The only downside I can think of is that I'd take a performance hit vs. baremetal.
(appreciate your knowledge as always - thanks)
 

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
Virtualized vs. passed-through/SR-IOV'd is always a tradeoff between needed features and flexibility.
The more abstract (virtualized) you go, the more performance it costs (a low percentage nowadays, most likely) and the fewer hardware-specific features are available to the client OS (think assigning different rx/tx buffers, packet sizes, etc.).

There is no answer that fits all; most will fare fine with the easy approach, and sometimes you want specific functionality.
In your case I'd suggest going the simple way first to establish functionality and then experimenting, with a fallback plan in place.
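If/when you get to the SR-IOV experiments, it's easy to check from the API whether VFs are actually enabled on the ConnectX-3 PF before touching any VMs - a minimal pyVmomi sketch (hostname and credentials are placeholders):

Code:
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim
import ssl

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi1.lab", user="root", pwd="secret", sslContext=ctx)
host = si.content.searchIndex.FindByDnsName(None, "esxi1.lab", False)

# every passthrough-capable PCI device; SR-IOV-capable ones show up as SriovInfo
for info in host.config.pciPassthruInfo:
    if isinstance(info, vim.host.SriovInfo):
        print(info.id, "SR-IOV enabled:", info.sriovEnabled,
              "VFs:", info.numVirtualFunction,
              "of", info.maxVirtualFunctionSupported)
Disconnect(si)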

Re 'Told you so' - I told you that you would end up with a switch sooner or later ;)

Re QSFP - There are also SFP+ 10G cards from MLX, but why, if you can get 40G for free (or for the cost of an adapter)?



By direct connection you mean?
  • [If that is a reference to a prior discussion, there's no need (unless I'm missing something) for a "switchless" connection with a proper switch / adapters in place (soon).]
  • Don't disagree with subnetting o/c.
[sidebar]
  • Since you didn't tell me up front that I needed a 3rd ESXi host (joking), but that being on my roadmap, and since I'm also building a new workstation (which I leave on 24/7 anyway), would it make sense to "dual-purpose" that build as a 3rd host, but only to serve as a witness?
  • The only downside I can think of is that I'd take a performance hit vs. baremetal.
Direct connection would be at 40GbE if your pools can manage that, else no need o/c :)

I didn't mention a third host? But most likely I mentioned the advantages of four hosts for vSAN, didn't I ;)

Workstation as witness... o/c that will work. However, if you have a beefy box that is always on, why not use it for a third host in the first place? You'll need to run ESX anyway for the witness VM. Although now that I type this, I am not sure - I've never tried running that on Player/Workstation; it might actually work.
O/C a third vSAN box would need the same storage HW.
 

svtkobra7

Active Member
Jan 2, 2017
362
87
28
In your case I'd suggest going the simple way first to establish functionality and then experimenting, with a fallback plan in place.
  • Did I miss the simple proposal? (no sarcasm)
Re 'Told you so' - I told you that you would end up with a switch sooner or later ;)
  • You know what you are talking about, I will give you that. I was just looking for an "interim" solution previously, then stumbled on the Brocade + SFP+ 10G-BASE-T modules.
  • And here you are recommending Netgear switches to me when I can instead have a slightly used Ferrari. ;)
Re QSFP - There are also SFP+ 10G cards from MLX, but why, if you can get 40G for free (or for the cost of an adapter)?
  • I think we are on the same page here.
Direct connection would be at 40GbE if your pools can manage that, else no need o/c :)
  • Seems like a good reason to buy more Optane (if that is still on the rand cl). ;)
I didn't mention a third host? But most likely I mentioned the advantages of four hosts for vSAN, didn't I ;)
  • No, you did mention #3 o/c (after I bought #2).
  • My fiance will be in touch regarding 2019 IT CapEx planning (just kidding).
Workstation as witness... o/c that will work.
  • OK OK ... just checking ... if I don't ask in advance 1 server turns into a rack. ;)
However, if you have a beefy box that is always on, why not use it for a third host in the first place?
  • I could, but I'd rather do it in one shot - I haven't borked a pool in a while fiddling and would rather keep it that way. ;)
  • Also, I'm thinking I'll do this soon, as if I build a box with 10Gb built in (I have to use 10G-BASE-T here / Cat5e), I don't need to buy a 10G-BASE-T NIC for the workstation (completely separate from the prior discussion about Mellanox / DAC / etc.).
You'll need to run ESX anyway for the witness VM. Although now that I type this, I am not sure - I've never tried running that on Player/Workstation; it might actually work.
O/C a third vSAN box would need the same storage HW.
  • Let me restate for clarity ... physical host #3 = ESXi w/ witness appliance VM + Win10 VM (takes the place of today's workstation).
  • I may be way off target here (bear in mind I'm still very green with vSphere), and maybe this isn't even possible, but if you have a box with ESXi installed on it, configure autostart for Win10 (as an example, to replace today's setup), and have a GPU passed through, USB devices attached, blah blah, you aren't going to see the DCUI on your monitors; it would be as if ESXi isn't even there, right? I was thinking that may be preferable to Workstation Pro.
  • But there are multiple ways to skin this cat, I believe - you could nest your witness on another host, achieving the 3 hosts needed on 2 physical hosts, if you wanted (obvious disadvantages to that).
  • If the 3rd host is your witness, it doesn't need the same storage (I thought).
[maybe we should take this convo to PM so as not to thread-jack???]
 

zer0sum

Well-Known Member
Mar 8, 2013
849
473
63
I can confirm I got mine with decent packaging - bubble wrap and a padded soft envelope.

They are rev A2 and are detected just fine under Windows 10 and also VMware ESXi 6.7 U1.
I haven't really had the time to try and configure them at all, apart from plugging them in :(

I don't get a link up using a Mellanox MC2206130-00A, but I haven't done any configuration at all, so I'm not surprised.
I think that cable should work, as long as I set things up properly? Any advice there?
Otherwise I'll just grab a different cable :)
 

kapone

Well-Known Member
May 23, 2015
1,095
642
113
I got one A2 card and one A4 card from "usedservers-com*". They came with LP brackets; I ordered some regular brackets from China... ETA unknown.

Possibly dumb question: Is a QSFP cable a QSFP cable, or are there differences between, say, a "3 GB/s" QSFP cable from a SAN (specifically, EMC Amphenol 038-003-703 85" 2.2M 3GB/s QSFP to QSFP Male to Male Cable Black | eBay) and one from Mellanox like zer0sum linked?
er... NetApp 112-00177 X6558-R5 2M SAS QSFP-QSFP External SAS Cable - 10ft. | eBay

~$8 shipped and works perfectly.