ZFS Advice for New Setup


mpogr

Active Member
Jul 14, 2016
Despite the encouraging information from @Gabriel Mateiciuc, I'm still not 100% sure the 1.8.2.5 drivers actually work on 6.5. Based on my past experience, as well as other people's posts on Mellanox Communities and, most importantly, Mellanox's own statements, they don't and they shouldn't. I'm a bit reluctant to give it another go, because my current environment is stable and I don't really have the time to fiddle with it.
Any volunteers to give it a whirl and post some solid results?
 
Gabriel Mateiciuc

Apr 21, 2016
Hi, I've been doing some heavy testing lately - I'm migrating most of the VMs from good old NFS to iSCSI/SCSI (yeah, that sounds cool).
Short answer: it works, and the transfers are definitely RDMA. Long answer: it seems to need some tinkering to get to full speed.
So far, the best solution for the target seems to be SCST. I've been playing with it, but I seem to have hit some issues specific to the Linux environment - ZFS zvols exported as block devices: writes are good, reads are poor. The same goes for a 6.0 ESXi initiator (I kept one node on 6.0 to compare against the 1.8.3 driver).
@mpogr - Mellanox says that some CX3s, for example the QCBT variant (specced for QDR 40Gb IB/10GbE), shouldn't be able to do FDR 56Gb IB/40GbE - but they do :) so I wouldn't trust them that much.

I'll keep at this until I get consistent results and post them here. In the meantime, input on ZFS zvols (it would be appropriate for this thread), target configs for performance, and maybe some tried and tested tricks for the VMware initiator side would all be welcome - a sketch of my current setup is below.
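For illustration, here's a minimal sketch of the kind of setup I'm testing: a sparse zvol exported through SCST's vdisk_blockio handler over the ib_srpt target driver. The pool/zvol names, port GUID and tuning values are placeholders, not tested recommendations:

    # Sparse zvol for VM storage; volblocksize is a guess, not a proven value
    zfs create -s -V 200G -o volblocksize=64K tank/vm-lun0

    # /etc/scst.conf - export the zvol as a block device over SRP
    HANDLER vdisk_blockio {
        DEVICE vm_lun0 {
            filename /dev/zvol/tank/vm-lun0
            nv_cache 1      # claim non-volatile cache so initiators skip forced syncs
            threads_num 8   # more worker threads per device
        }
    }
    TARGET_DRIVER ib_srpt {
        TARGET fe80:0000:0000:0000:0002:c903:00aa:bbcc {  # placeholder port GUID
            enabled 1
            rel_tgt_id 1
            LUN 0 vm_lun0
        }
    }

    # Apply the config
    scstadmin -config /etc/scst.conf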
 

mpogr

Active Member
Jul 14, 2016
I've just revisited this, since I was finally able to upgrade my vCenter Server Appliance to 6.5 (previous attempts had kept failing with errors).
So I've just upgraded one of my ESXi hosts to 6.5U1 and re-added Mellanox SRP support using the process kindly described by @inbusiness (disabling vrdma was the key) and, voila, got my SRP storage working with ESXi 6.5 after all. So far so good. I'll run it like this for a couple of days before upgrading the other two ESXi hosts - the key commands are sketched below.
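For anyone wanting to replicate this, the key step on the upgraded host was roughly the following (a sketch - the module name is as discussed in this thread, so double-check it against your own build):

    # See which RDMA-related modules the 6.5 host loaded
    esxcli system module list | grep -i rdma

    # Disable the new paravirtual RDMA module that conflicts with the old SRP driver
    esxcli system module set -m vrdma -e false

    # Takes effect after a reboot
    reboot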
 

mpogr

Active Member
Jul 14, 2016
Just finished upgrading the two other hosts to ESXi 6.5U1, finalising my full conversion to 6.5 (both ESXi and vCenter). SRP over FDR IB is working like a charm. I also updated my CentOS server to 7.4, MLNX OFED to the latest release and SCST to current trunk (rough steps below). So far so good...
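For the storage-server side, the update sequence was roughly as follows (a sketch - the OFED installer is run from the extracted MLNX_OFED bundle for CentOS 7.4, and SCST lives in its SourceForge SVN repository):

    # Mellanox OFED: rebuild against the running kernel and install
    ./mlnxofedinstall --add-kernel-support

    # SCST: build and install current trunk plus the SRP target and admin tool
    svn checkout https://svn.code.sf.net/p/scst/svn/trunk scst-trunk
    cd scst-trunk
    make 2release
    make scst scst_install srpt srpt_install scstadm scstadm_install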
 

epicurean

Active Member
Sep 29, 2014
@mpogr, could you detail exactly how you migrated to ESXi 6.5U1 and still kept SRP? Will it work with my VPI ConnectX-2 adapters? I've resisted going to 6.5 from my current 6.0U3, as I thought reconfiguring 40Gb IB would be a nightmare.
 

inbusiness

New Member
Jul 31, 2016
epicurean said:
@mpogr, could you detail exactly how you migrated to ESXi 6.5U1 and still kept SRP? Will it work with my VPI ConnectX-2 adapters? I've resisted going to 6.5 from my current 6.0U3, as I thought reconfiguring 40Gb IB would be a nightmare.
Yes! You can migrate to ESXi 6.5U1.
But you must create a custom image with the inbox driver removed from the original image.
Then your ESXi host migration will go smoothly... :)
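If you'd rather not build a custom ISO, roughly the same result can be had on an already-upgraded host by removing the inbox driver VIBs by hand and installing the old OFED bundle (a sketch - the VIB names are from a 6.5 host and the bundle path is an example, so verify both first):

    # Check which inbox Mellanox VIBs are present
    esxcli software vib list | grep -i mlx

    # Remove the inbox native drivers that shadow the old OFED driver
    esxcli software vib remove -n nmlx4-core -n nmlx4-en -n nmlx4-rdma

    # Install the Mellanox OFED 1.8.2.5 offline bundle (path is an example)
    esxcli software vib install -d /vmfs/volumes/datastore1/MLNX-OFED-ESX-1.8.2.5.zip --no-sig-check

    reboot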
 

mpogr

Active Member
Jul 14, 2016
Still using 1.8.2.5, the same way as with 6.0, only with a new 6.5 component that conflicted with it disabled.
That said, there seems to be a new development in town: Mellanox have just released an iSER driver for 6.5 that supports the CX3. This might prompt me to try changing the entire setup from IB/SRP to EN/iSER. What's a bit alarming, though, is that iSER requires RoCE, and the CX3 (non-Pro) supports only RoCE v1, which supposedly has latency issues...
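If I do go down the iSER path, step one would be checking whether ESXi actually sees the card as an RDMA device, then installing Mellanox's bundle (a sketch - the bundle filename below is a placeholder, not the real package name):

    # Does ESXi 6.5 see the adapter as an RDMA-capable device?
    esxcli rdma device list

    # Install the Mellanox iSER offline bundle (filename is a placeholder)
    esxcli software vib install -d /vmfs/volumes/datastore1/MLNX-NATIVE-ESX-ISER.zip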
 