> er... NetApp 112-00177 X6558-R5 2M SAS QSFP-QSFP External SAS Cable - 10ft. | eBay
> ~$8 shipped and works perfectly.

Good to know, thanks!!
> Possibly dumb question: Is a QSFP cable a QSFP cable, or are there differences between, say, a "3 GB/s" QSFP cable from a SAN (specifically, EMC Amphenol 038-003-703 85" 2.2M 3GB/s QSFP to QSFP Male to Male Cable Black | eBay) and one from Mellanox like zer0sum linked?

There are different QSFP revisions with different bandwidths:
qsfp (up to 4x 1 Gbit/s Ethernet, DDR InfiniBand)
qsfp+ (up to 40 Gbit/s Ethernet, QDR InfiniBand)

> There are different QSFP revisions with different bandwidths:
> qsfp (up to 4x 1 Gbit/s Ethernet, DDR InfiniBand)
> qsfp+ (up to 40 Gbit/s Ethernet, QDR InfiniBand)

Now I'm confused. Those $8 cables say "qsfp" with no "+"... will they work at 40 Gb/s with ConnectX-3 cards? Would there be any practical difference between QSFP and QSFP+ cables for ConnectX-3?
> Now I'm confused. Those $8 cables say "qsfp" with no "+"... will they work at 40 Gb/s with ConnectX-3 cards?

Well... when I said "works perfectly", I didn't pull it out of my rear end. I have those exact cables connecting that exact card to the 40 Gb ports on a 6610, and they "work perfectly".
> Those $8 cables say "qsfp" with no "+"

Most sellers on eBay don't write the correct revision; I'd recommend verifying by part number instead.
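If you already have the cable in hand, you can also check what it reports about itself. A rough sketch, assuming a Linux box whose NIC driver exposes the module EEPROM via ethtool (the interface name eth0 is a placeholder; field names in the output vary a little by driver):

```shell
# -m dumps the SFF module EEPROM of whatever is plugged into the port,
# which includes the vendor name and part number burned into the cable.
ethtool -m eth0 | grep -i -e identifier -e vendor -e compliance
```

Matching the "Vendor PN" line against the manufacturer's datasheet is more reliable than anything in an eBay listing title.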
[root@DarkESXi:/opt/mellanox/bin] esxcli network nic get -n vmnic3
Advertised Auto Negotiation: true
Advertised Link Modes: 1000None/Half, 1000None/Full, 10000None/Half, 10000None/Full, 40000None/Half, 40000None/Full, Auto
Auto Negotiation: false
Cable Type:
Current Message Level: -1
Driver Info:
Bus Info: 0000:03:00:0
Driver: nmlx4_en
Firmware Version: 2.42.5000
Version: 3.16.11.6
Link Detected: false
Link Status: Down
Name: vmnic3
PHYAddress: 0
Pause Autonegotiate: false
Pause RX: true
Pause TX: true
Supported Ports:
Supports Auto Negotiation: true
Supports Pause: true
Supports Wakeon: false
Transceiver: internal
Virtual Address: 00:50:56:50:80:b3
Wakeon: None
> Well... it seems like maybe my cable isn't compatible or something, as I can't get a link up even with both cards flashed to the most recent firmware.

I'm a total noob when it comes to InfiniBand stuff and only recently tried this tech for the first time myself, so maybe it's obvious, but still - is your subnet manager up and running? It wasn't obvious to me when I tried these ConnectX-3 cards and the link wouldn't come up by itself. Apparently you need to run subnet manager software somewhere (on the clients, or the switch, for example), and since you didn't mention it in your post, I suppose there's a chance you're not running any.
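For anyone else hitting this: on a Linux host the subnet manager is usually opensm, and the port state tells you whether one is active on the fabric. A sketch, assuming the opensm and infiniband-diags packages are installed (names vary by distro):

```shell
# Check the port state first; without a subnet manager the physical link
# can show "LinkUp" while the logical state stays stuck in INIT.
ibstat

# Start a subnet manager on one node (or on a managed switch instead).
# -B runs it in the background; the port should move from INIT to ACTIVE.
opensm -B

# Confirm a subnet manager is now visible on the fabric.
sminfo
```

Only one node per fabric needs to run it; the others will see it via sminfo.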
> ...is your subnet manager up and running? Apparently you need to run subnet manager software somewhere (on the clients, or the switch, for example).

I switched my cards over to Ethernet mode on ESXi and Windows instead.

On ESXi:
/opt/mellanox/bin/mlxconfig -d mt4099_pciconf0 set LINK_TYPE_P1=2 LINK_TYPE_P2=2

On Windows: Device Manager > System devices > Mellanox... > Port Protocol tab
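In case it helps anyone, you can read the setting back before and after the change. A sketch assuming the same mt4099_pciconf0 device name as above (on these ConnectX-3 cards, 1 = InfiniBand, 2 = Ethernet, 3 = VPI/auto):

```shell
# Query the currently configured link type for both ports.
/opt/mellanox/bin/mlxconfig -d mt4099_pciconf0 query | grep LINK_TYPE
```

Note the new values only take effect after a reboot (or a PCI reset of the adapter), so a query right after the set will show the pending configuration, not the live mode.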
> I would like a small switch that consumes as little power as possible and yet enables 40 Gb Ethernet. Anything cheaper than Arista?

These do not exist at this time - maybe in another 2-3 years. Right now the sheer amount of power 40 Gb needs is the bottleneck. MikroTik has switches that are 'small' but they still take plenty of power to run.
> Is it difficult to configure for point-to-point cabling between 3 ESXi servers?

If you want to just skip the switch, you can do point-to-point cabling and connect your servers together over the same SAS cables instead. Same speed.
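A three-host point-to-point triangle isn't hard, but each direct link has to be its own small IP subnet so traffic to each peer goes out the right port. A sketch of one link on one ESXi host, assuming dual-port cards (one port per peer) and made-up vSwitch/portgroup names and 10.0.x.y /30 addressing:

```shell
# One standard vSwitch + vmkernel port per direct link; repeat per peer.
esxcli network vswitch standard add -v vSwitch-p2p
esxcli network vswitch standard uplink add -v vSwitch-p2p -u vmnic3
esxcli network vswitch standard portgroup add -v vSwitch-p2p -p p2p-hostB
esxcli network ip interface add -i vmk1 -p p2p-hostB

# A /30 for this link; the peer gets 10.0.12.2 on its matching vmkernel port.
esxcli network ip interface ipv4 set -i vmk1 -I 10.0.12.1 -N 255.255.255.252 -t static
```

With three hosts each pair gets its own /30 (e.g. 10.0.12.x for A-B, 10.0.13.x for A-C, 10.0.23.x for B-C), so no routing is needed for host-to-host traffic like vMotion or NFS.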
You are right. 10 Gb is plenty.