RDMA on Napp-it/OmniOS?


WANg

Well-Known Member
Jun 10, 2018
1,302
967
113
46
New York, NY
Okay, so after about a month of running FreeNAS to serve out iSCSI on my raidz1 array to my VM host, I came to the conclusion that while the setup is decent, it is by no means sufficient down the road (and it certainly doesn't teach me any new skills). So I started looking into improving file server performance.

So here's what I had in mind, to be carried out in steps:
a) Benchmark existing ESXi datastore performance via iSCSI (it's being served out via a large extent on a zpool)
b) Enable NFS sharing on the same zpool and mount it on the ESXi host as another datastore (rough sketch after this list)
c) Copy the VM data from the iSCSI extent to the NFS mount, benchmark the NFS-mounted datastore, and see if NFS yields improvements versus iSCSI
d) Migrate the existing 10GbE infrastructure to 40GbE by swapping the existing SolarFlare cards for Mellanox ConnectX-2s, and see if the increased bandwidth helps with either iSCSI or NFS (I strongly doubt it)
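
For step (b), the commands should be roughly as follows - the IP addresses, export path, and datastore name are placeholders, and the sharenfs line assumes a ZFS target that honors that property (FreeNAS does this through its GUI instead):

Code:
  # on the storage side, if sharenfs is available:
  zfs set sharenfs=rw=@192.168.10.0/24,root=@192.168.10.0/24 tank/vmstore

  # on the ESXi host: mount the export as an NFSv3 datastore and confirm it shows up
  esxcli storage nfs add -H 192.168.10.5 -s /tank/vmstore -v nfs-vmstore
  esxcli storage nfs list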

While this might sound like a viable exercise on its own, I would also like to look into a few more things, such as:
- Enabling iSER (iSCSI over RDMA) or NFS over RDMA (both of which are supported by the Mellanox cards)

The problem that I have is one of limited capacity to test RDMA transfers properly (4 RAIDed HDDs can only push about 200 MBytes/sec, while 2 SATA-III SSDs can only push about 1 GB/sec). My guess is that I'll have to create a small RAM drive and see how fast I can get it to ingest/traverse data while I am playing with it - and of course, that assumes the iSCSI/NFS target can support RDMA at all. Surprisingly for something so popular, FreeNAS doesn't support iSER - so I might have to switch to Napp-it on OpenIndiana Hipster or OmniOS, since both have support for COMSTAR (which can do RDMA), while FreeBSD does not. Has anyone messed with RDMA in particular while playing with Napp-it?

I was thinking of doing the following to test RDMA:
- Set up a pair of t730 thin clients with Mellanox cards, both with M.2 SATA SSDs
- Set up one machine with Napp-it on OmniOS, configure a RAM disk, then configure RDMA via COMSTAR (rough sketch below)
- Set up the other machine with SCST so that the initiator will work with the Mellanox RDMA drivers to facilitate faster transfers.
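
Roughly what I expect the COMSTAR side of the RAM disk test to look like on OmniOS - a sketch only, with a made-up ramdisk name/size and a truncated GUID placeholder, and I still need to confirm exactly how COMSTAR negotiates iSER versus plain iSCSI on these cards:

Code:
  # carve out a 4 GB RAM disk so drive speed is out of the picture
  ramdiskadm -a rdtest 4g

  # wrap it in a COMSTAR logical unit and make it visible
  svcadm enable stmf
  stmfadm create-lu /dev/ramdisk/rdtest
  stmfadm add-view 600144F0...        # use the GUID printed by create-lu
  stmfadm list-lu -v

  # bring up the iSCSI target service and create a target
  svcadm enable -r svc:/network/iscsi/target:default
  itadm create-target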
 

gea

Well-Known Member
Dec 31, 2010
3,140
1,182
113
DE
I have not done this, but make sure that you use the same sync settings for iSCSI and NFS
(NFS sync=always is equivalent to iSCSI with write-back cache off). If you use a zvol as the base for a LUN, you can force sync for iSCSI (and NFS) via the underlying filesystem by switching the ZFS sync property from standard (the default) to disabled or always.
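
As an example - tank/vmstore and the GUID are placeholders for your own zvol/filesystem and COMSTAR logical unit:

Code:
  # force synchronous writes at the ZFS level (affects NFS and a zvol-backed LUN alike)
  zfs set sync=always tank/vmstore

  # or disable sync entirely for an apples-to-apples "unsafe" comparison
  zfs set sync=disabled tank/vmstore

  # COMSTAR equivalent: wcd=true disables the write-back cache on the logical unit
  stmfadm modify-lu -p wcd=true 600144F0...
  stmfadm list-lu -v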

If this is noncommercial, you can also try Solaris 11.4 - in most cases the fastest and most feature-rich ZFS.
 

WANg

Well-Known Member
Jun 10, 2018
1,302
967
113
46
New York, NY
Hm....yeah. I found it really difficult to understand why anyone would use Napp-it on anything but a Solaris clone. I mean, ZFS is practically bulletproof there, plus you get COMSTAR functionality.
 

groove

Member
Sep 21, 2011
90
31
18
Hi WANg,

Did you manage to make any progress on this? I am trying to set up something similar - going from ESXi 6.7 (using the built-in iSER initiator over the inbox ConnectX-3 driver) to a Solaris 11.4 target (again over ConnectX-3). It looks like Solaris does not support iSER over Ethernet - only InfiniBand - while ESXi only supports iSER over Ethernet, not over InfiniBand.

Wondering if you have managed to make this work.
 

WANg

Well-Known Member
Jun 10, 2018
1,302
967
113
46
New York, NY
Hi WANg,

Did you manage to make any progress on this? I am trying to set up something similar - going from ESXi 6.7 (using the built-in iSER initiator over the inbox ConnectX-3 driver) to a Solaris 11.4 target (again over ConnectX-3). It looks like Solaris does not support iSER over Ethernet - only InfiniBand - while ESXi only supports iSER over Ethernet, not over InfiniBand.

Wondering if you have managed to make this work.
Not yet.
One thing that I realized is how underutilized the iSCSI-mounted ESXi datastore is compared to the CIFS NAS sharing the zpool (300GB used in an 8TB extent, out of roughly 12TB available to the zpool, which leaves only 3.5TB for the NAS side) - frankly, some of the VMs are disposable, and others might benefit from having their VMDKs hosted on faster (but not expensive-fast) storage. I am projecting only about 2TB of datastore usage in total over the next 2-3 years.

Improving the NAS performance overall will be a multi-leg project for me, which looks something like this:

a) Copy the NAS data off the existing zpool (I harbor no illusion that this will be quick). Rsync some of the more "mission-critical" stuff to the mini-NAS attached to my wireless router so I can work on the rest without impacting regular tasks (rough rsync sketch below). Then I'll have to shut down the VMs and copy out the contents of the datastore via VMware Converter or scp - that part will be faster.
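
Something along these lines for the rsync leg - the dataset path and mini-NAS hostname/share are placeholders, not my actual layout:

Code:
  # dry run first to sanity-check what would be transferred
  rsync -aHvn /tank/critical/ backup@mini-nas:/volume1/critical/

  # then the real copy, preserving permissions and hard links, with progress
  rsync -aHv --progress /tank/critical/ backup@mini-nas:/volume1/critical/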

b) Shut down the hypervisor (an HP t730 thin client), add more local storage, and then migrate the boot volume to a USB3 thumb drive (right now the 32GB M.2 SATA SSD is dedicated solely to booting - I want a 180-256GB M.2 SATA SSD in there to house the VMs that need to survive the NAS going down).

c) If the migration is successful, clone the thumb drive and migrate that to VMware ESXi 6.7 (just in case VMware can do iSER on IB later, or iSER on Ethernet arrives in the near future). Install the Mellanox 40GbE IB/Ethernet adapters in both the 6.5 and 6.7 configurations. The storage network will need to be reconfigured in both cases regardless.

d) Add a SATA SSD to my N40L (not sure how OpenIndiana Hipster / OmniOS copes with using a USB thumb drive as the boot volume, which is how FreeNAS is configured now), install the OS, and see how/if the existing zpool is recognized (see the sketch after step e).

e) Reconfigure the SAN as a striped mirror (RAID10) instead of raidz1 (4x 4TB WD Blacks), and rebalance the NAS/ESXi datastore split (right now I allocate more to the datastore than to the NAS, which is probably not wise).
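
For steps (d) and (e), the ZFS side should look something like the following - the pool name and device names are placeholders, and this assumes the data has already been copied off, since going from raidz1 to mirrors means destroying and recreating the pool (feature-flag differences between FreeNAS and illumos may also get in the way of the import):

Code:
  # see whether the new OS recognizes the old FreeNAS pool at all
  zpool import                 # lists importable pools
  zpool import -f tank         # -f if it was last used on another host

  # once the data is safely copied off: rebuild as striped mirrors
  zpool destroy tank
  zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0
  zpool status tank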

Right now I am still on the procurement side of things (SSD for the NAS, SSD for the thin client) - I only have a $300/month discretionary budget at home for the lab, and I already spent it this month on other toys, so this will probably have to wait until after Black Friday.
 

dswartz

Active Member
Jul 14, 2011
610
79
28
Hi WANg,

Did you manage to make any progress on this? I am trying to set up something similar - going from ESXi 6.7 (using the built-in iSER initiator over the inbox ConnectX-3 driver) to a Solaris 11.4 target (again over ConnectX-3). It looks like Solaris does not support iSER over Ethernet - only InfiniBand - while ESXi only supports iSER over Ethernet, not over InfiniBand.

Wondering if you have managed to make this work.
I'm curious how you got this to work. I have a vSphere 6.7 box with a ConnectX-5 card connected back-to-back to another one in a CentOS 7.5 storage box (currently serving via NFS). I want to get iSER working, but when I followed the instructions on a VMware blog to create the iSER initiator, it never shows up. I get no errors, mind you - just no iSER storage adapter.
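
For reference, this is roughly the sequence I followed per the blog - the vmhba/vmk names are placeholders for whatever the host actually assigns, and it assumes the RoCE uplink is already bound to a vmkernel port:

Code:
  # confirm ESXi sees the card as an RDMA-capable device
  esxcli rdma device list

  # create the iSER storage adapter (it should then appear as a vmhba)
  esxcli rdma iser add
  esxcli iscsi adapter list

  # bind the new iSER vmhba to the vmkernel port carrying the storage VLAN
  esxcli iscsi networkportal add -A vmhba65 -n vmk1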
 

markpower28

Active Member
Apr 9, 2013
413
104
43
I'm curious how you got this to work. I have a vSphere 6.7 box with a ConnectX-5 card connected back-to-back to another one in a CentOS 7.5 storage box (currently serving via NFS). I want to get iSER working, but when I followed the instructions on a VMware blog to create the iSER initiator, it never shows up. I get no errors, mind you - just no iSER storage adapter.
Which procedure did you follow?
 

markpower28

Active Member
Apr 9, 2013
413
104
43
Have you enabled the iSCSI software adapter yet? Which firmware version do you have? And why does the link show 2.5 Gb?
 

dswartz

Active Member
Jul 14, 2011
610
79
28
Yes, the standard iSCSI software adapter does show up in the vSphere client. Firmware is 16.23.1020 (it didn't work before upgrading the firmware either). I have no idea why the speed shows 2.5Gb.
 

dswartz

Active Member
Jul 14, 2011
610
79
28
The displayed speed seems like a bug of some sort - I get well over 20Gb/s running iperf...
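
For what it's worth, this is roughly how I'm sanity-checking the link - the IP and interface names are placeholders for my actual ones:

Code:
  # on the ESXi host: reported NIC speeds and RDMA device state
  esxcli network nic list
  esxcli rdma device list

  # raw throughput check against the CentOS 7.5 box
  iperf3 -s                           # on the storage box
  iperf3 -c 192.168.10.5 -P 4 -t 30   # from the other end, 4 parallel streams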
 

dswartz

Active Member
Jul 14, 2011
610
79
28
No. I only needed to connect two hosts, so I couldn't justify the $$$ for a switch. Maybe there is some issue there, but I'd think that would prevent traffic from working at all, no? Everything works just fine as long as I am not trying to do iSER. Maybe vSphere is getting confused by some setting? If so, I have no idea which one. I would rather use NFS over RDMA, but apparently Mellanox no longer supports that (it supposedly works with inbox drivers, but I haven't tried that yet...)
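
If I do end up trying NFS over RDMA, the server side on the CentOS 7.5 box should be something like this (untested on my setup; 20049 is the conventional NFS/RDMA port, and the export path is a placeholder):

Code:
  # load the server-side RDMA transport for nfsd
  modprobe svcrdma

  # tell the running NFS server to also listen on the RDMA port
  echo 'rdma 20049' > /proc/fs/nfsd/portlist
  cat /proc/fs/nfsd/portlist

  # a Linux client would then mount it with
  mount -o rdma,port=20049 192.168.10.5:/export/vmstore /mnt/vmstore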
 

markpower28

Active Member
Apr 9, 2013
413
104
43
I don't have a ConnectX-5, but I do have a 3-node cluster with ConnectX-3 running iSER, with performance on par with SRP. It's hard to troubleshoot without a switch in between.
 

groove

Member
Sep 21, 2011
90
31
18
Hey guys,

I'm still working on this. For me the ESXi side came up just fine. I too have just CX3 cards on both the ESXi server and the storage side. Where I'm running into issues is on the storage side. I tried using Solaris 11.4, but it looks like Solaris only supports iSER over IB, not Ethernet. I have been running SRP and iSER on IB for a couple of years now, but since VMware only supports Ethernet, I would really like to get iSER over Ethernet (RoCE) going on the storage side as well.

I then tried to set up Ubuntu Server, but that did not work - the installation aborts with an error to the effect that the zpool command was not found.
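
I suspect I just need to pull in the ZFS userland after the base install - something like this, if I understand the Ubuntu packaging correctly (package name per recent Ubuntu releases):

Code:
  # install the ZFS userland tools and kernel module support
  sudo apt update
  sudo apt install zfsutils-linux

  # verify the module and the zpool/zfs tools are present
  modinfo zfs | head -3
  zpool status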

I am now trying FreeBSD - a recent talk at BSDCan seems to indicate that iSER has been built into the latest version of FreeBSD.

I'd much rather go with a Solaris variant if possible. My second preference would be FreeBSD (it gives me the option to move to FreeNAS in the future).

Would like to know the experience of others - gea, please chime in if you have any findings you can share on getting an iSER target set up using an Ethernet adapter on any of the Solaris variants.

Any FreeBSD gurus out there? I am working on setting up iSER. I was able to get the card set up and ping the initiator (the ESXi server), but FreeBSD is very new to me, so I'm working through the process of building a kernel with OFED (rough sketch below). Any helpful links on building the kernel would be really appreciated.
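
This is roughly the kernel config and build sequence I've pieced together from iser(4), mlx4en(4) and the handbook - the exact option/device names should be double-checked against the FreeBSD version in use:

Code:
  # /usr/src/sys/amd64/conf/RDMA - custom config based on GENERIC
  include GENERIC
  ident   RDMA
  options OFED          # InfiniBand/RDMA stack
  device  mlx4          # ConnectX-3 core driver
  device  mlx4ib        # InfiniBand personality
  device  mlx4en        # Ethernet personality (RoCE)
  device  iser          # iSER support

  # build and install the custom kernel, then reboot
  cd /usr/src
  make buildkernel KERNCONF=RDMA
  make installkernel KERNCONF=RDMA
  shutdown -r now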

Will be monkeying around with this over the next couple of weeks. If the above options don't pan out, I'll give ESOS a shot next.