10Gb SFP+ single port = cheaper than dirt


sgilb

New Member
Jun 5, 2017
"Drivers are available on the mellanox site, download the winof package.
The latest firmware is available in a package from HP, I can post the link later."

Thanks - if you could send a link it would be great.
 

Rand__

Well-Known Member
Mar 6, 2014
Slightly OT: What's the current 10GbE/SFP+ recommendation for ESXi 6.5? The ConnectX-2 cards won't work, unfortunately, and the -3s are still relatively expensive...
 

Rand__

Well-Known Member
Mar 6, 2014
Really? I'll check that. I thought I had connected one once and it was not detected by the new driver model. And the new and old driver models can't coexist, IIRC. But I might be able to remove the new model and use only the old one... hmm.
Will have to try that, thanks :)

Edit:
Indeed it works :)

Quick steps, just for documentation, shamelessly taken from @inbusiness:

Run this (from "Which ESXi driver to use for SRP/iSER over IB (…)" | Mellanox Interconnect Community):

* disable the native drivers for vRDMA - these are very buggy (a quick verification follows the commands)

esxcli system module set --enabled=false -m=nrdma

esxcli system module set --enabled=false -m=nrdma_vmkapi_shim

esxcli system module set --enabled=false -m=nmlx4_rdma

esxcli system module set --enabled=false -m=vmkapi_v2_3_0_0_rdma_shim

esxcli system module set --enabled=false -m=vrdma
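Not part of the original steps, but a quick sanity check after disabling: list the module state (all five module names contain "rdma", so a simple grep catches them):

esxcli system module list | grep rdma

Each of the modules above should now show "Is Enabled" as false.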



* uninstall the inbox drivers - they can't support Ethernet iSER properly anyway (a VIB check follows the commands)

esxcli software vib remove -n net-mlx4-en

esxcli software vib remove -n net-mlx4-core

esxcli software vib remove -n nmlx4-rdma

esxcli software vib remove -n nmlx4-en

esxcli software vib remove -n nmlx4-core

esxcli software vib remove -n nmlx5-core
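Not from the original guide, but it's worth checking which of these VIBs actually exist on your build before and after the removals, since the set varies between ESXi releases:

esxcli software vib list | grep mlx

Anything Mellanox-related still listed after a reboot is a leftover to remove by name.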

Reboot, then get the package as described in "ConnectX-2 and ESXi 6.0":
http://www.mellanox.com/downloads/Software/MLNX-OFED-ESX-1.8.2.5-10EM-600.0.0.2494585.zip

* install Mellanox OFED 1.8.2.5 for ESXi 6.x.

esxcli software vib install -d /var/log/vmware/MLNX-OFED-ESX-1.8.2.5-10EM-600.0.0.2494585.zip

Reboot, and you get this:

[screenshot: upload_2017-6-6_20-58-54.png]
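If you'd rather confirm from the shell than the UI (a generic command, not from the original write-up):

esxcli network nic list

The ConnectX-2 ports should appear as vmnicX with the Mellanox driver shown in the Driver column.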


Of course updating will be a pain, since updates will fail due to conflicts, but that's the price, I guess.
 

Ellwood

Member
Nov 20, 2016
Yeah, if anyone knows an easy way to rename that, let me know, as I have the same issue. Supposedly host profiles can do it, but I only have one ESXi host, so I can't put it in maintenance mode and update it with a vCenter host profile (that I'm aware of).
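For what it's worth, the one (unsupported) approach I keep seeing for a standalone host is editing the device aliases in /etc/vmware/esx.conf directly; treat this as a sketch, not a recommendation, and back the file up first:

cp /etc/vmware/esx.conf /etc/vmware/esx.conf.bak

grep alias /etc/vmware/esx.conf

vi /etc/vmware/esx.conf

Swap the vmnicX aliases on the PCI addresses you want, then reboot. A typo here can take out your management network, so proceed carefully.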
 

William

Well-Known Member
May 7, 2015
Picked up a couple of these after seeing this post.
Installed one in my main rig... ASUS Z10PE-D16 - Windows 10 Pro... it just worked... 10GbE, woot :)
I have the other in a 4-bay NAS which I still have to set up to test... <crosses fingers> it works.
I am going to grab a couple more to have on hand.

Thanks for posting this !
 

Jannis Jacobsen

Active Member
Mar 19, 2016
Is this adapter working properly in Windows Server 2016?
I need RDMA (RoCE) support for a two-node Storage Spaces Direct Hyper-V cluster.

-jannis
 

rune-san

Member
Feb 7, 2014
Is this adapter working properly in Windows Server 2016?
I need RDMA (RoCE) support for a two-node Storage Spaces Direct Hyper-V cluster.

-jannis
The Windows Server 2016 WinOF driver only works with ConnectX-3 / ConnectX-3 Pro cards, and WinOF-2 with ConnectX-4 and up.
 

i386

Well-Known Member
Mar 18, 2016
Server 2016 supports them out of the box o_O
If you want more configuration options, install the WinOF package; it works without problems with the ConnectX-2 cards.
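If you want to see which driver is actually bound (inbox vs. WinOF), PowerShell will show it; this is a generic check, nothing ConnectX-specific:

Get-NetAdapter | Format-List Name, InterfaceDescription, DriverProvider, DriverVersion, DriverDate

After installing the WinOF package, the DriverVersion/DriverDate here should change to the WinOF release.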
 

rune-san

Member
Feb 7, 2014
Server 2016 supports them out of the box o_O
If you want more configuration options, install the WinOF package; it works without problems with the ConnectX-2 cards.
As a basic card I figured it worked, but what about RDMA? I remember a Microsoft engineer specifically saying that ConnectX-2 could be buggy for SMB Direct because it doesn't properly support Priority Flow Control, one of the two components RoCE needs to work. I was told that's why ConnectX-2 cards would often drop out of RDMA and back to standard TCP. That was some years ago, though. If it's been fixed to the point that it works without dropping out (save for maybe extreme link load), that would be awesome :)
 

i386

Well-Known Member
Mar 18, 2016
RDMA was added in a firmware version that was only available as an .mlx file on the Mellanox website or in a package from HP; it was never shipped as a binary.
Can't say if it really works, as I only have one PC with Windows 10 (the Windows client OS doesn't support RDMA!) + a ConnectX-2, and one server with a ConnectX-3 card.
 

CobaltFire

New Member
Nov 7, 2015
Server 2016 supports them out of the box o_O
If you want more configuration options, install the WinOF package; it works without problems with the ConnectX-2 cards.
Can confirm: running a ConnectX-2 in a Windows Server 2016 machine. Worked straight out of the box.
 

rune-san

Member
Feb 7, 2014
Can confirm: running a ConnectX-2 in a Windows Server 2016 machine. Worked straight out of the box.
Indeed, as noted, the inbox driver works fine for basic functionality. What the OP still needs confirmed, and what I'd be curious about too, is whether RoCE works on these adapters. Without that, the OP won't be able to use them effectively for what he wants (Storage Spaces Direct). According to i386, there appears to have been *some* RDMA support released for one vendor at one time through non-official means. I'd be curious to know if anyone is running that particular version and can confirm Priority Flow Control is functioning properly (in other words, that it's not falling out of RDMA under load). Otherwise, while the card will certainly work in 2016, it won't do what the OP wants to do with it; it will just be a basic TCP/IP Ethernet card.
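For anyone who does test it: a couple of stock PowerShell checks will show whether SMB Direct is actually in use rather than silently falling back to TCP (generic commands, not ConnectX-2-specific):

Get-NetAdapterRdma

Get-SmbMultichannelConnection | Format-List *Rdma*

Get-NetAdapterRdma should show Enabled = True for the port, and while a transfer is running the multichannel connections should report RDMA-capable on both the client and server ends; if those flags drop under load, the card has fallen back to TCP.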
 

escapen

New Member
Mar 28, 2018
I just ran the commands laid out in Rand's post and am wondering if someone can help me undo it... I disabled passthrough, and now the NIC isn't showing up in the list of physical NICs available to the host.
 

Rand__

Well-Known Member
Mar 6, 2014
You mean the uninstallation of drivers?
Have you installed the correct ones?
Is the card listed in the devices?
 

escapen

New Member
Mar 28, 2018
Sorry for the lack of info.

I followed the directions to disable the modules, uninstall the default drivers, and install the provided Mellanox driver in post #169. I'd like to bring my ESXi back to default, presumably by uninstalling the provided driver, reinstalling the default drivers, and re-enabling the modules.

I haven't been able to figure out how to do this... I basically tried to do what was laid out in your post, in reverse, but that didn't work. It asked me for a depot, and that's where I got lost. I tried googling, but this is my first foray into ESXi in my homelab and my Google skills are failing me.

Yes, the card is listed in devices, but is not appearing in the list of physical NICs.

Thanks for your help.

Edit: I managed to uninstall all of the individual Mellanox drivers. Now I'm just not sure which to install so that future updates to ESXi will be seamless.
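Presumably something like this would bring it back (untested, and the bundle filename below is a placeholder for whatever matches your build): re-enable the modules disabled earlier, then reinstall the inbox VIBs from the matching ESXi offline bundle, which also answers the "depot" prompt:

esxcli system module set --enabled=true -m=nrdma

(repeat for the other four modules)

esxcli software vib install -d /tmp/ESXi650-201803001.zip -n net-mlx4-en -n net-mlx4-core -n nmlx4-en -n nmlx4-core -n nmlx4-rdma -n nmlx5-core

Alternatively, esxcli software profile update -d <bundle> -p <profile> would bring the whole host back in line with the stock image.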
 