10Gb SFP+ single port = cheaper than dirt

marcoi

Well-Known Member
Apr 6, 2013
Gotha Florida

sgilb

New Member
Jun 5, 2017
"Also very easy to get working in Windows. I use one of these in my main desktop. $20 is great!"
Looking for drivers for these for Windows 8.1. If anyone can point me to one, that would be most helpful. I have them working right now with a driver from a 10Gtek version, but I'm trying to check for others. Also, does anyone have any firmware update info?
Thanks
 

i386

Well-Known Member
Mar 18, 2016
Germany
Drivers are available on the Mellanox site; download the WinOF package.
The latest firmware is available in a package from HP; I can post the link later.
 

sgilb

New Member
Jun 5, 2017
"Drivers are available on the mellanox site, download the winof package.
The latest firmware is available in a package from HP, I can post the link later."

Thanks - if you could send a link it would be great.
 

Rand__

Well-Known Member
Mar 6, 2014
Slightly OT: What's the current 10GbE/SFP+ recommendation for ESXi 6.5? The ConnectX-2 cards won't work, unfortunately, and the -3 cards are still relatively expensive...
 

Rand__

Well-Known Member
Mar 6, 2014
Really? I'll check that. I thought I had connected one once and it was not detected by the new driver model. And the new and old driver models can't coexist, IIRC. But I might be able to remove the new model and use only the old one... hmm.
Will have to try that, thanks :)

Edit:
Indeed it works :)

Quick steps, just for documentation, shamelessly taken from @inbusiness:

Run this (from "Which ESXi driver to use for SRP/iSER over IB (..." | Mellanox Interconnect Community):

* Disable the native driver for vRDMA - it is very buggy

esxcli system module set --enabled=false -m=nrdma

esxcli system module set --enabled=false -m=nrdma_vmkapi_shim

esxcli system module set --enabled=false -m=nmlx4_rdma

esxcli system module set --enabled=false -m=vmkapi_v2_3_0_0_rdma_shim

esxcli system module set --enabled=false -m=vrdma



* Uninstall the inbox driver - it also can't support Ethernet iSER properly

esxcli software vib remove -n net-mlx4-en

esxcli software vib remove -n net-mlx4-core

esxcli software vib remove -n nmlx4-rdma

esxcli software vib remove -n nmlx4-en

esxcli software vib remove -n nmlx4-core

esxcli software vib remove -n nmlx5-core

Reboot, then get the package as described in the "ConnectX-2 and ESXi 6.0" thread:
http://www.mellanox.com/downloads/Software/MLNX-OFED-ESX-1.8.2.5-10EM-600.0.0.2494585.zip

* Install Mellanox OFED 1.8.2.5 for ESXi 6.x

esxcli software vib install -d /var/log/vmware/MLNX-OFED-ESX-1.8.2.5-10EM-600.0.0.2494585.zip

Reboot, and you get this:
[screenshot: upload_2017-6-6_20-58-54.png]
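
A quick sanity check at this point (just the obvious esxcli checks, not part of @inbusiness' original steps):

esxcli software vib list | grep -i mlx

esxcli network nic list

The first should list the OFED vibs you just installed; the second should show the ConnectX-2 ports as vmnics.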


Of course it will be a pain to update, since updates will fail due to conflicts, but that's the price I guess.
 

Ellwood

Member
Nov 20, 2016
Yeah, if anyone knows an easy way to rename that... let me know, as I have the same issue. Supposedly host profiles, but I only have one ESXi host, so I can't put it in maintenance mode and update it with a vCenter host profile (that I'm aware of).
 

William

Well-Known Member
May 7, 2015
Picked up a couple of these after seeing this post.
Installed one in my main rig... ASUS Z10PE-D16, Windows 10 Pro... it just worked... 10GbE, woot :)
I have the other in a 4-bay NAS which I have to get set up to test... <crosses fingers> it works.
I am going to grab a couple more to have on hand.

Thanks for posting this!
 

Jannis Jacobsen

Active Member
Mar 19, 2016
Norway
Is this adapter working properly in Windows Server 2016?
I need RDMA (RoCE) support for a two-node Storage Spaces Direct Hyper-V cluster.

-jannis
 

rune-san

Member
Feb 7, 2014
"Is this adapter working properly in Windows Server 2016?
I need RDMA (RoCE) support for a two-node Storage Spaces Direct Hyper-V cluster.

-jannis"
The Windows Server 2016 WinOF driver only works with ConnectX-3 / ConnectX-3 Pro cards, and WinOF-2 only with ConnectX-4 and up.
 

i386

Well-Known Member
Mar 18, 2016
Germany
Server 2016 supports them out of the box o_O
If you want to have more configuration options, install the WinOF package; it works without problems with the ConnectX-2 cards.
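
If you want to check from PowerShell which driver Server 2016 picked for it, something like this works (the "*Mellanox*" match is just an example filter):

Get-NetAdapter | Where-Object InterfaceDescription -like "*Mellanox*" | Format-List Name, InterfaceDescription, DriverProvider, DriverVersionString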
 

rune-san

Member
Feb 7, 2014
"Server 2016 supports them out of the box o_O
If you want to have more configuration options, install the WinOF package; it works without problems with the ConnectX-2 cards."
As a basic card I figured that worked, but what about RDMA? I remember a Microsoft engineer specifically saying that ConnectX-2 could be buggy for SMB Direct because it does not properly support Priority Flow Control, one of the two components necessary for RoCE to work. I was told that's why the ConnectX-2 cards would often drop out of RDMA and fall back to standard TCP. That was some years ago, though. If it's been rectified to the point where it works without dropping out (save for maybe extreme link load), that would be awesome :)
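
For reference, the Windows side of PFC for SMB Direct is usually set up roughly like this (priority 3 and the adapter name are just the common examples, not something I've tested on a ConnectX-2); whether the card then actually honors the pause frames is exactly the question:

# Tag SMB Direct traffic (port 445) with priority 3 and enable PFC for that priority only
Install-WindowsFeature Data-Center-Bridging
New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
Enable-NetQosFlowControl -Priority 3
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7
Enable-NetAdapterQos -Name "Ethernet 2"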
 

i386

Well-Known Member
Mar 18, 2016
Germany
RDMA was added in a firmware version that was only available as an mlx file on the Mellanox website or in a package from HP; it was never shipped as a binary.
I can't say if it really works, as I only have one PC with Windows 10 (the Windows client OS doesn't support RDMA!) plus a ConnectX-2, and one server with a ConnectX-3 card.
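
If you do find the mlx file, flashing goes through the Mellanox MFT tools, roughly like this on Linux (the device name and firmware file name below are examples; take the device from mst status and the file from wherever you got the firmware):

mst start
mst status
mlxburn -d /dev/mst/mt26448_pci_cr0 -fw fw-ConnectX2-rel.mlx
flint -d /dev/mst/mt26448_pci_cr0 query

The last command shows the running firmware version, so you can confirm the burn took.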
 

CobaltFire

New Member
Nov 7, 2015
"Server 2016 supports them out of the box o_O
If you want to have more configuration options, install the WinOF package; it works without problems with the ConnectX-2 cards."
Can confirm; running a ConnectX-2 in a Windows Server 2016 machine. It worked straight out of the box.
 

rune-san

Member
Feb 7, 2014
"Can confirm; running a ConnectX-2 in a Windows Server 2016 machine. It worked straight out of the box."
Indeed, as noted, the inbox driver works fine for base functionality. What the OP still needs confirmation of, and what I'd be curious about, is whether RoCE works on these adapters. Without that, the OP won't be able to use them effectively for what he wants (Storage Spaces Direct). According to i386, there appears to have been *some* RDMA support released for one vendor at one time through non-official means. I would be curious to know if anyone is running that particular version and can confirm Priority Flow Control is functioning properly (in other words, that it's not falling out of RDMA under load). Otherwise, while the card will certainly work in 2016, it won't do what the OP wants to do with it; it will just be a basic TCP/IP Ethernet card.
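
If anyone does test that firmware, the quickest way to see whether SMB Direct is actually in use (rather than silently falling back to TCP) is to run these in PowerShell on the client while a large transfer is going; the cmdlets are standard Windows ones, nothing Mellanox-specific:

Get-NetAdapterRdma
Get-SmbClientNetworkInterface
Get-SmbMultichannelConnection

The first two show whether the NIC and SMB consider the interface RDMA-capable; the third shows, per active connection, whether both ends report RDMA capability.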