I got my Mellanox InfiniBand mezz cards installed and operational tonight, running on Server 2012 on 2 nodes of a C6100 (each node 2x L5520, 24GB RAM). I used the guide from a post on Jose Barreto's Blog.
This is a simple direct-connection, peer-to-peer test of basic connectivity. I didn't attempt to get into the whole Hyper-V over SMB thing.
From memory, my installation steps were as follows (I bet I missed something) ....
On both nodes ..
1) Installed the hardware & powered up
2) Installed the Server 2012 WinOF VPI package from here
The installer complained about the firmware being old (or something like that). When going through the same steps on a different test box with a Mellanox MHQH19B card, the latest firmware was downloaded and installed automatically. Not so in this case.
3) Downloaded the Dell C6100-specific firmware from here
4) Reboot
5) From a CMD shell, manually installed the firmware:
"flint -d mt26428_pciconf0 -i fw-ConnectX2-rel-2_9_1000-059MP7.bin burn"
6) Reboot
On node #1
7) Using PowerShell, set up OpenSM as a service ..
SC.EXE delete OpenSM
New-Service -Name "OpenSM" -BinaryPathName "`"C:\Program Files\Mellanox\MLNX_VPI\IB\Tools\opensm.exe`" --service -L 128" -DisplayName "OpenSM" -Description "OpenSM" -StartupType Automatic
Start-Service OpenSM
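To confirm the subnet manager actually came up, a quick check from the same PowerShell session (nothing Mellanox-specific here, just the standard service tools):
# Verify the OpenSM service exists and is running
Get-Service OpenSM
# The service controller view also shows the binary path it was registered with
SC.EXE qc OpenSM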
On both nodes
8) Control Panel -> Device Manager -> System devices -> Mellanox ConnectX -> Properties -> Port Protocol -> select IB for both ports
9) Disable IPv6 and manually set the IPv4 addresses to a subnet other than the Ethernet one (192.168.1.x). Set the Mellanox adapter IPs to 192.168.10.1 and 192.168.10.2 on one node, and 192.168.10.3 and 192.168.10.4 on the other node.
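For anyone who'd rather script step 9, the built-in NetAdapter/NetTCPIP cmdlets in Server 2012 can do the same thing. A rough sketch for the first port on node 1; the interface alias "Ethernet 3" is just a guess, so check Get-NetAdapter for whatever Windows actually called your IB ports:
# List adapters to find the names Windows gave the IB ports
Get-NetAdapter
# Disable IPv6 on the IB port (repeat for the second port and on the other node)
Disable-NetAdapterBinding -Name "Ethernet 3" -ComponentID ms_tcpip6
# Static IPv4 address on the 192.168.10.x subnet; no gateway needed for a direct link
New-NetIPAddress -InterfaceAlias "Ethernet 3" -IPAddress 192.168.10.1 -PrefixLength 24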
10) may have done a reboot here ... can't remember
11) Connected a single QSFP/QSFP cable between one port on each node.
12) Ping to verify connectivity over the 192.168.10.x subnet
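From node 1 that's just something like (addresses from step 9):
ping 192.168.10.3
# or the PowerShell equivalent
Test-Connection 192.168.10.3 -Count 4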
At this point I had basic connectivity, with green and yellow LEDs on the mezz cards ....
13) Port information from Control Panel -> Device Manager -> Network adapters -> Mellanox ConnectX-2 -> Properties -> Information reads as follows (gotta love that link speed!!) ..
14) Here's the interesting thing: Get-NetAdapterRDMA reports that RDMA is enabled ..
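For reference, these checks are all built-in cmdlets on Server 2012; a quick sketch of what to look at (adapter names will differ on your box):
# Per-adapter RDMA capability and state
Get-NetAdapterRdma
# Which client-side interfaces SMB considers RDMA-capable
Get-SmbClientNetworkInterface
# The server-side view of the same thing
Get-SmbServerNetworkInterface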
15) Set up a StarWind RamDisk on each node and created a share for each RamDisk, so that each node could access both the local and the remote RamDisk.
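The share part can be scripted with the standard SMB cmdlets if you prefer; a sketch assuming the StarWind RamDisk shows up as drive R: (the drive letter, share name, and permissions are made up, adjust to taste):
# Share the RamDisk volume so the other node can map it
New-SmbShare -Name "RamDisk" -Path "R:\" -FullAccess "Everyone"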
16) On node 1, ran Atto on the local RamDisk ..
17) On node 1, ran Atto against the remote RamDisk (mapped as "Z:"). At this time, nodes 1 and 2 are connected over both Ethernet and IB. SMB 3.0 is apparently smart enough to figure out that the IB path is the fastest and use it instead of Ethernet...
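The easiest way to watch that decision being made is SMB Multichannel's own view of the connections; from the client node, something like:
# Lists the local/remote interface pairs SMB is actually using,
# including whether each path is RDMA-capable
Get-SmbMultichannelConnection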
18) Just for giggles, pulled the QSFP cable and ran Atto again on the remote RamDisk ..
There's my old friend: the 120 MB/s bandwidth limit that Gigabit Ethernet imposes on array-to-array backups. It's pretty neat, though, that SMB 3.0 simply fell back to the slower connection automatically.
Plugged the QSFP cable back in, and after a few seconds the connection was alive again.
So RamDisk-to-RamDisk copies are just crazy fast over IB; copying 1.5GB .mv4 files peaks at about 950 MB/s ...
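If you want a rough number without Atto, timing a plain copy to the mapped share works too; a quick sketch (the file path is just a placeholder):
# Time a large-file copy to the remote RamDisk mapped as Z:
Measure-Command { Copy-Item "C:\temp\bigfile.bin" "Z:\" }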
Probably old news to a lot of folks on this forum, but this is my first dabble with IB and I can't believe that I've missed something this fast for so long.
I'm hooked, but for the foreseeable future I'm never really going to be able to saturate this kind of link.
