RDMA on Napp-it/OmniOS?


Rand__

Well-Known Member
Mar 6, 2014
6,634
1,767
113
@groove - what did you end up with? Got it going with ESXi/Solaris or FreeBSD (or neither)?

I've been looking into a similar setup (trying to present ZFS to ESXi via RDMA), but from reading alone it looks like Linux is the only option for presenting ZFS to ESXi at the moment (ESXi >= 6.7, using iSER).
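From the docs, the ESXi side looks fairly simple on 6.7+; a minimal sketch, assuming an RDMA-capable vmnic is already configured (untested on my end):

Code:
# list the RDMA-capable devices ESXi has detected (vmrdma0, ...)
esxcli rdma device list

# create a software iSER adapter bound to the RDMA-capable uplink;
# a new vmhba then shows up under Storage Adapters
esxcli rdma iser add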

Although NFS over RDMA is also mentioned ("RDMA can be useful to accelerate many of the hypervisor services including; SMP-FT, NFS and iSCSI."), no explicit setup instructions are given.
So maybe it just works if you have RDMA (RoCE) capable adapters plus a switch; I have not tried yet. A capable switch is a prerequisite for RoCE/v2, which might explain why it never worked for @dswartz.
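Before blaming the switch, one can at least verify the adapter end. With the rdma-core/Mellanox OFED tools, something like this shows whether the cards come up as RoCE (link_layer Ethernet) and whether RDMA traffic flows back-to-back (IP is illustrative):

Code:
# show RDMA devices and their link layer (InfiniBand vs Ethernet/RoCE)
ibv_devinfo | grep -E 'hca_id|link_layer'

# basic RDMA bandwidth sanity test (perftest package)
ib_send_bw              # on one host
ib_send_bw 192.168.50.1 # on the other, pointing at the first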
On ZoL there seems to be an issue with NFS/RDMA though (Exporting a ZFS dataset over NFS over RDMA generates RDMA errors · Issue #6795 · zfsonlinux/zfs), so it looks like iSER it is for the time being.
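For completeness, the ZoL-side setup that issue is about is only a few commands, so it is easy to retest once the bug is fixed; a sketch, with dataset name and paths illustrative:

Code:
# export a dataset over NFS (ZoL manages the exports entry itself)
zfs set sharenfs=on tank/vmstore

# make the kernel NFS server listen on the RDMA transport as well
# (20049 is the IANA-assigned NFSoRDMA port)
modprobe svcrdma
echo 'rdma 20049' > /proc/fs/nfsd/portlist

# Linux client side
mount -t nfs -o proto=rdma,port=20049 server:/tank/vmstore /mnt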

iWARP would of course be an option for Chelsio adapters (running iSER or NVMe-oF from Linux boxes).
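For the NVMe-oF route, the Linux target side can be driven straight through configfs; a rough sketch, with NQN, backing zvol, and address all illustrative:

Code:
modprobe nvmet
modprobe nvmet-rdma
cd /sys/kernel/config/nvmet

# subsystem with one namespace backed by a zvol
mkdir subsystems/nqn.2019-05.lab:vmstore
echo 1 > subsystems/nqn.2019-05.lab:vmstore/attr_allow_any_host
mkdir subsystems/nqn.2019-05.lab:vmstore/namespaces/1
echo /dev/zvol/tank/vol1 > subsystems/nqn.2019-05.lab:vmstore/namespaces/1/device_path
echo 1 > subsystems/nqn.2019-05.lab:vmstore/namespaces/1/enable

# RDMA port and link the subsystem to it
mkdir ports/1
echo rdma          > ports/1/addr_trtype
echo ipv4          > ports/1/addr_adrfam
echo 192.168.50.10 > ports/1/addr_traddr
echo 4420          > ports/1/addr_trsvcid
ln -s /sys/kernel/config/nvmet/subsystems/nqn.2019-05.lab:vmstore ports/1/subsystems/nqn.2019-05.lab:vmstore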
 

groove

Member
Sep 21, 2011
90
31
18
I finally ended up going NFS over RDMA. My storage is Solaris 11.3, my transport is InfiniBand, and my hypervisor is Proxmox VE. It's been quite stable for about a year now.
Let me know if you need more details.
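The basic plumbing is short enough to paste here (dataset and host names changed; adjust the NFS version to taste):

Code:
# Solaris 11.3 side: share the dataset over NFS
zfs set share.nfs=on tank/vmstore

# Proxmox side: load the RPC-over-RDMA client transport and mount over IB
modprobe rpcrdma
mount -t nfs -o vers=3,proto=rdma,port=20049 storage-ib:/tank/vmstore /mnt/pve/vmstore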
 

Rand__

Well-Known Member
Mar 6, 2014
6,634
1,767
113
Thanks.
Not sure I want to switch to Proxmox since I run VMware Horizon (using PCoIP)... But at least it's another option :)
 

vangoose

Active Member
May 21, 2019
326
104
43
Canada
Rand__ said:
Thanks.
Not sure I want to switch to Proxmox since I run VMware Horizon (using PCoIP)... But at least it's another option :)
Spent the last 2 days building a small PoC for iSER.
Target is CentOS 8 with LIO; initiator is ESXi 6.7U3.

iSER is working fine with a direct connection. Next step is to go through the ICX 7250, which supports PFC.
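The LIO side boils down to a handful of targetcli calls; roughly this, with IQNs, IP, and backing zvol as placeholders:

Code:
# block backstore on the ZFS zvol
targetcli /backstores/block create name=vmstore dev=/dev/zvol/tank/vmstore

# iSCSI target, LUN, initiator ACL, and portal
targetcli /iscsi create iqn.2019-05.lab.example:vmstore
targetcli /iscsi/iqn.2019-05.lab.example:vmstore/tpg1/luns create /backstores/block/vmstore
targetcli /iscsi/iqn.2019-05.lab.example:vmstore/tpg1/acls create iqn.1998-01.com.vmware:esxi-host
# drop the default catch-all portal if targetcli auto-created one
targetcli /iscsi/iqn.2019-05.lab.example:vmstore/tpg1/portals delete ip_address=0.0.0.0 ip_port=3260
targetcli /iscsi/iqn.2019-05.lab.example:vmstore/tpg1/portals create 192.168.50.10

# flip the portal from plain iSCSI to iSER
targetcli /iscsi/iqn.2019-05.lab.example:vmstore/tpg1/portals/192.168.50.10:3260 enable_iser boolean=true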
 

fossxplorer

Active Member
Mar 17, 2016
556
98
28
Oslo, Norway
Are you using Mellanox cards? If so, which cards? I have some ConnectX-3s to test out, but I wonder about support for these in PVE 6.x. Otherwise I'm thinking of using oVirt, as I imagine support for these cards from Mellanox is better on RHEL/CentOS.

groove said:
I finally ended up going NFS over RDMA. My storage is Solaris 11.3, my transport is InfiniBand, and my hypervisor is Proxmox VE. It's been quite stable for about a year now.
Let me know if you need more details.
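First thing I'll do is check that the cards even enumerate under PVE with the inbox mlx4 driver; something like:

Code:
# is the HCA visible on the bus and bound to mlx4?
lspci -nn | grep -i mellanox
lsmod | grep mlx4

# load the RDMA side of the driver and inspect the device
modprobe mlx4_ib
ibv_devinfo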
 

vangoose

Active Member
May 21, 2019
326
104
43
Canada

Got SCST working as well. Same backing store; SCST iSER seems to have much better performance than LIO.
On average, twice the performance of plain iSCSI on throughput and random IO, with much lower latency (less than 0.2 ms on 4K and 8K IO).
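Numbers like these are easy to sanity-check from a Linux initiator or guest with fio; a run along these lines would do it (device path illustrative):

Code:
fio --name=randread --filename=/dev/sdX --ioengine=libaio --direct=1 \
    --rw=randread --bs=4k --iodepth=16 --numjobs=4 \
    --runtime=60 --time_based --group_reporting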

Need to figure out PFC on the ICX 7250 and move the connection to the switches next. Love Solaris ZFS, but Linux seems very interesting now.
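On the host side, PFC is the easier half (the ICX side is the open question); with Mellanox OFED and storage traffic mapped to priority 3, it is roughly a one-liner:

Code:
# enable PFC on priority 3 only, per RDMA-capable interface
mlnx_qos -i eth2 --pfc 0,0,0,1,0,0,0,0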

 

groove

Member
Sep 21, 2011
90
31
18
fossxplorer said:
Are you using Mellanox cards? If so, which cards? I have some ConnectX-3s to test out, but I wonder about support for these in PVE 6.x. Otherwise I'm thinking of using oVirt, as I imagine support for these cards from Mellanox is better on RHEL/CentOS.
Yes, I am using Mellanox ConnectX-3 cards. My InfiniBand switch is an IS5030 32-port switch. I initially ran the subnet manager on the switch but have since moved it over to my Proxmox host, so that I can run a newer version of the SM. I have not switched to Proxmox 6.x yet but am in the process of setting up a new box to start testing it out.
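On the Proxmox (Debian) side, the subnet manager part is trivial; roughly:

Code:
apt install opensm
systemctl enable --now opensm

# verify a subnet manager is active on the fabric (infiniband-diags)
sminfo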