IB, Napp-it and SRP


markpower28

This post is based on using the Mellanox 4036's built-in subnet manager, with only the IB driver 1.8.2.4 installed on vSphere 5.5/6.0.

I used Solaris 11.2 express as the ZFS platform, then configured the IB interface as follows.

IPoIB
dladm show-ib
dladm show-phys
dladm show-part
dladm create-part -l net3 -P 0x8001 p8001.ib1 (defined in 4036 subnet manager)
dladm show-
ipadm create-ip p8001.ib1
ipadm create-addr -a 10.x.x.x/24 p8001.ib1/ipv4 (iSCSI interface)
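
To sanity-check the interface before layering anything on top, something like the following works on the Solaris side (the peer address is just a placeholder):

dladm show-part (confirm p8001.ib1 exists with PKEY 8001)
ipadm show-addr p8001.ib1/ipv4
ping 10.x.x.y (IPoIB address of an ESXi host)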

I always prefer a GUI for easy management, so I installed napp-it on top of it.

Even though napp-it does not officially support IB (if I remember correctly), COMSTAR works perfectly with the iSCSI interface above. vSphere works fine with IPoIB, but the performance is not impressive at all: https://forums.servethehome.com/index.php?threads/ipoib-vs-srp.5048/
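
For reference, the COMSTAR side of that iSCSI setup is just the usual zvol/LU/target sequence, something like this (pool and volume names are examples):

svcadm enable -r svc:/network/iscsi/target:default
zfs create -V 100G tank/iscsi01 (example zvol)
stmfadm create-lu /dev/zvol/rdsk/tank/iscsi01
itadm create-target (auto-generates an IQN)
stmfadm add-view 600144F0xxxxxxxxxxxxxxxxxxxxxxxx (GUID from stmfadm list-lu; with no host/target group it is visible to all initiators)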

Then I started exploring the SRP option. I did not see any native option in napp-it, so I did the following.

svcadm enable ibsrp/target
stmfadm list-target -v
stmfadm create-tg tgsrp
stmfadm add-tg-member -g tgsrp eui.0024E89097xxxxxx (ZFS server)
stmfadm create-hg hgsrp
stmfadm add-hg-member -g hgsrp eui.0024E89097xxxxxx (ESXi host 1)
stmfadm add-hg-member -g hgsrp eui.0024E89097xxxxxx (ESXi host 2)
stmfadm add-hg-member -g hgsrp eui.0024E89097xxxxxx (ESXi host 3)
stmfadm add-view -h hgsrp -t tgsrp 600144F0685E4600000054F9ECxxxxxx

Then the SRP target shows up under the napp-it Comstar GUI options.
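
In case it is not obvious where the 600144F0... GUID in the add-view line comes from: it is a COMSTAR logical unit backed by a zvol, created along these lines (names are examples):

zfs create -V 500G tank/esxi-ds01 (example zvol for the VMFS datastore)
stmfadm create-lu /dev/zvol/rdsk/tank/esxi-ds01
stmfadm list-lu -v (shows the 600144F0... GUID used in add-view above)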

After installing the IB 1.8.2.4 driver on vSphere 5.5/6, it recognized the SRP target right away, and the performance is night and day (from 500 MB/s to 3,000 MB/s).
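
If the new datastore does not show up right away on a host, a rescan from the ESXi CLI picks it up (standard esxcli commands):

esxcli storage core adapter list (the SRP target appears behind an extra vmhba)
esxcli storage core adapter rescan --all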

I am using ConnectX-2 VPI (2.10.700) on both the vSphere and Solaris sides.
The subnet manager is the 4036's built-in SM.

*This is for a lab environment. If you plan to do this in production, please make sure your ZFS solution supports both IB and SRP.
 

dswartz

Interesting. I just spun up a pair of Mellanox ConnectX-2 cards: one in a vSphere 5.5 host (with the OpenSM subnet manager) and one in a latest/stable OmniOS box. I did not need to do any of the steps you described above. The only thing I needed to do was 'pkg install pkg:/driver/network/srpt' and it all 'just worked'...
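
For anyone following along on OmniOS, a quick way to confirm the SRP target actually came up after installing that package (assuming the same service name as on Solaris):

svcs -a | grep ibsrp (ibsrp/target should show as online)
stmfadm list-target -v (the SRP target and any logged-in initiators)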
 

markpower28

Interesting. I just spun up a pair of Mellanox ConnectX-2 cards: one in a vSphere 5.5 host (with the OpenSM subnet manager) and one in a latest/stable OmniOS box. I did not need to do any of the steps you described above. The only thing I needed to do was 'pkg install pkg:/driver/network/srpt' and it all 'just worked'...
I did not install the OpenSM subnet manager on the ESXi host. I used the built-in subnet manager on the 4036.
 

dswartz

Dunno, but mine is too. vSphere shows it as "MT26428 ConnectX VPI - 10GigE / IB QDR ...." Maybe OmniOS is set up differently?
 

dswartz

Ah, that's different then. I just (for now) have a simple point-to-point link. I couldn't find anything about running an SM on OmniOS, so I threw it on the vSphere host...
 

markpower28

Ah, that's different then. I just (for now) have a simple point-to-point link. I couldn't find anything about running an SM on OmniOS, so I threw it on the vSphere host...
Putting the SM on the ESXi host does work. For this post, the steps use the switch's built-in SM, and the only component installed on the ESXi host is the IB 1.8.2.4 driver.
 

gea

Even though napp-it does not officially support IB (if I remember correctly), COMSTAR works perfectly with the iSCSI interface above. vSphere works fine with IPoIB, but the performance is not impressive at all: https://forums.servethehome.com/index.php?threads/ipoib-vs-srp.5048/

Then I started exploring the SRP option. I did not see any native option in napp-it, so I did the following.
I do not use IB in my own setups, but with some community help (Frank) I have added some IB support under the menu System >> Network IB in the current napp-it 0.9f4.
 

tjk

Mark, this is good stuff, keep it coming! A couple of questions and some feedback:

What driver did VMware ship with vSphere 6 for the IB cards?

The 1.8.2.4 driver worked with vSphere 6 without problems?

What is your ZFS server config that allowed you to push 3000 MB/s?

Can you do SRP and still use the IB adapters for IPoIB vMotion traffic?

With stmfadm add-hg-member -g hgsrp eui.0024E89097xxxxxx (host 1), these are the ESX hosts, I assume?

What do you have to do on the VMware side to see/format the VMFS storage over SRP?

I've had IPoIB working with various platforms (via iSCSI or NFS), but I have never been able to get SRP working; I was never able to get the client to see the storage.

I heard a rumor that the ConnectX-2 cards will not be supported by newer drivers going forward, hence I'm surprised they are still working with vSphere 6.

For folks wondering how to do the install, here is the process. Note that you have to uninstall the driver that ships with VMware first:

IB on VMware:

Download the 1.8.2.4 drivers here: Mellanox Products: Mellanox OFED Driver for VMware® ESXi Server

esxcli software vib remove -n=net-mlx4-en -n=net-mlx4-core
reboot host node

--- Install OFED ---
esxcli software vib install -d /tmp/MLNX-OFED-ESX-1.8.2.4-10EM-500.0.0.472560.zip --no-sig-check

--- Optional - Install MFT Tools ---
esxcli software vib install -v /tmp/MLNX-MFT-ESX-3.7.1.3-10EM-550.0.0.1331820.zip --no-sig-check

reboot

esxcli software vib list|grep Mel
esxcli software vib list|grep ml
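
If the install looks clean but SRP still does not show up, it may be worth confirming the kernel modules actually loaded (module names can vary a bit by driver release):

esxcli system module list | grep mlx4
esxcli system module list | grep ib_ (ib_ipoib, ib_srp, etc. should be loaded and enabled)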

Tom
 

markpower28

Mark, this is good stuff, keep it coming! A couple of questions and some feedback:

What driver did VMware ship with vSphere 6 for the IB cards?
1.9.7 EN, the same as vSphere 5.5. It does not support IB out of the box.

The 1.8.2.4 driver worked with vSphere 6 without problems?
Correct, but you need to remove the following first:
esxcli software vib remove -n=net-mlx4-en -n=net-mlx4-core -n=nmlx4-core -n=nmlx4-en -n=nmlx4-rdma


What is your ZFS server config that allowed you to push 3000 MB/s?
One node in a C6100: 2 x L5630, 24 GB memory, an LSI 9207-8e, and a ConnectX-2 VPI.
4 x 250 GB SSD for L2ARC, plus 8 x 450 GB 15K SAS drives in 4 mirrored vdevs.
Solaris 11.2 express with napp-it.


Can you do SRP and still use the IB adapters for IPoIB vMotion traffic?
Yes. SRP shows up under the storage adapters and IB traffic shows up under networking, so there is no conflict.
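
For the vMotion side it is just a normal VMkernel port on the IPoIB uplink; a rough esxcli sketch (portgroup name and addresses are placeholders):

esxcli network ip interface add --interface-name=vmk1 --portgroup-name=IPoIB-vMotion
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=10.x.x.y --netmask=255.255.255.0 --type=static
esxcli network ip interface tag add --interface-name=vmk1 --tagname=VMotion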

With stmfadm add-hg-member -g hgsrp eui.0024E89097xxxxxx (host 1), these are the ESX hosts, I assume?
Correct.

What do you have to do on the VMware side to see/format the VMFS storage over SRP?
Nothing.
The SRP initiator (not sure that is the right name) is based on the hardware ID, which does not change. Once you present the view/LUN to the SRP target, ESXi just sees it under the storage adapters. No configuration is needed on the ESXi end.
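
One way to double-check on the storage side that the hosts really logged in with those hardware-derived EUIs:

stmfadm list-target -v (logged-in SRP sessions list the initiators as eui.xxxxxxxxxxxxxxxx)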


I've had IPoIB working with various platforms (via iSCSI or NFS), but I have never been able to get SRP working; I was never able to get the client to see the storage.

I heard a rumor that the ConnectX-2 cards will not be supported by newer drivers going forward, hence I'm surprised they are still working with vSphere 6.
ConnectX-2 is not supported on Windows after 4.8.0, and they have not updated the ESXi driver in a while.

 

tjk

Mark,

Solaris 11.2 I assume? Wasn't aware of an express version.

Does anyone know how to get updates for a Solaris 11.2 lab server? It is free to run for personal use (or was), but I'm not aware of how to get/apply patches to it.

Now, if there were just an easy/reliable way to build ZFS clusters for scale and fail-over... everything I've played with, from FreeBSD/NAS to Nexenta, makes fail-over so darn difficult and unreliable in my testing. A shared-nothing architecture based on ZFS would be heaven!
 

markpower28

The 11.2 express edition is free (no support).

For patches, I think you need a support contract for that.

For HA/clustering, there are a couple of options out there, but personally I would rather take a different route. Instead of relying on the SAN to do HA/fail-over, why don't we use the application to handle it? In short: two or more single-node ZFS or MS SMB 3.0 servers on JBOD serving VMs, then have the VMs replicate across them. For example, MS DFS servers on different SANs to handle file replication, MS Exchange DAG servers on different SANs to handle mail replication, MS SQL AlwaysOn on different SANs to handle SQL replication, etc. (I know we are talking about Microsoft here, but it works.) Then utilize a network load balancer such as NetScaler to handle the traffic. That becomes an always-on HA solution.

Or just use Hyper-V replication or VMware replication if 5 minutes of downtime is OK.

The reality is, when we present enterprise storage HA solutions as tier-1 storage, software-defined storage is not there yet (but it does work :) ).
 

markpower28

I've had IPoIB working with various platforms (via iSCSI or NFS), but I have never been able to get SRP working; I was never able to get the client to see the storage.
This is possibly because the subnet manager is not configured properly.
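
A quick check on the Solaris/OmniOS target side, assuming dladm is available: the IB links should report an up state once a subnet manager has swept the fabric; if they stay down with the cables connected, the SM is the first place to look.

dladm show-ib (STATE should be up once the SM has configured the ports)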
 

tjk

For HA/clustering, there are a couple of options out there, but personally I would rather take a different route. Instead of relying on the SAN to do HA/fail-over, why don't we use the application to handle it? In short: two or more single-node ZFS or MS SMB 3.0 servers on JBOD serving VMs, then have the VMs replicate across them. For example, MS DFS servers on different SANs to handle file replication, MS Exchange DAG servers on different SANs to handle mail replication, MS SQL AlwaysOn on different SANs to handle SQL replication, etc. (I know we are talking about Microsoft here, but it works.) Then utilize a network load balancer such as NetScaler to handle the traffic. That becomes an always-on HA solution.
Sure, if most of the stuff I dealt with were MSFT-related, those would be easy solutions. For example, I have one client with several hundred VMs, all running Linux: the app servers are redundant behind LBs, and the DB servers (PgSQL) replicate to passive slaves but require manual intervention to get them up and running. The tax/cost of replicating all of these VMs wouldn't work, hence HA storage and regular backups.

I also have several other clients with a dozen or fewer VMs doing back-office stuff that can't be set up in HA mode (CRM, QuickBooks, HR, etc.), sitting on HA EqualLogics with regular backups.

Tom
 

epicurean

Hi TJK,
If I wish to update my ESXi 5.5 to 6.0u1, which Mellanox-related VIBs must be removed/uninstalled for the 6.0u1 update to go through?
 

dswartz

Be warned, last I checked (3 months ago?), the Mellanox IB drivers for 6.0 did NOT support SRP :(