FreeNAS/TrueNAS RDMA Support (FR for voting)


Rand__

Well-Known Member
Mar 6, 2014
6,660
1,783
113
Hm, I see; that indeed does not necessarily sound attractive.
So either I'd need to wait until (if ever) ESXi supports NFSoRDMA, or move to iSCSI after all.

I have done this with Ubuntu, SPDK, and ESXi with zvol backing. It's not an easy "snap your fingers" setup, and the gains vs. iSCSI/iSER are basically nonexistent, since the bottleneck seems to be the ZFS filesystem/zvol rather than the network protocol running over RDMA.
Does that imply that you're able to reach similar speeds remotely as locally when running, say, fio on a datastore/zvol? That would be a significant improvement in my eyes.
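A rough way to compare the two would be to run the same fio job once locally against the zvol and once from inside a VM whose disk sits on the exported datastore. This is only a sketch; the pool, zvol, and guest device names below are placeholders, not anything from an actual setup:

```
# Locally on the ZFS box, straight against the zvol (placeholder name):
fio --name=local-zvol --filename=/dev/zvol/tank/testvol \
    --ioengine=libaio --direct=1 --rw=randread --bs=4k \
    --iodepth=32 --numjobs=4 --runtime=60 --time_based --group_reporting

# Inside a VM whose virtual disk lives on the exported datastore (placeholder device):
fio --name=remote-datastore --filename=/dev/sdb \
    --ioengine=libaio --direct=1 --rw=randread --bs=4k \
    --iodepth=32 --numjobs=4 --runtime=60 --time_based --group_reporting
```

If the remote numbers land close to the local ones, the network path (RDMA or not) is no longer the limiting factor.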
 

Connorise

Member
Mar 2, 2017
75
17
8
33
US. Cambridge
iSER is a dead end. Even if the target is added, there are still issues with iSER initiators. Most vendors have simply switched to NVMe-oF, and as far as I can see, there are no plans to evolve iSER any further.
 
  • Like
Reactions: BoredSysadmin

tsteine

Active Member
May 15, 2019
181
90
28
Does that imply that you're able to reach similar speeds remotely as locally when running, say, fio on a datastore/zvol? That would be a significant improvement in my eyes.
I cannot say; I never tested fio against the zvol locally on the machine. I simply tested exposing a zvol over iSER and NVMe-oF without seeing any difference in peak 4k IOPS. Whether the bottleneck was on the ESXi server or on the zvol/ZFS server is actually not clear. (The ZFS server was a 10-core Intel Xeon with 256 GB of quad-channel ECC memory @ 2666 MHz CL19, running Ubuntu with OpenZFS 2.0.)

I did see pretty spectacular peak sequential transfers on a Windows VM, though. I should note I had disabled sync writes completely on the ZFS datasets, as I was testing exposing a ZFS zvol over the iSER protocol rather than the underlying storage devices.
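Disabling sync like that is a single property change per dataset/zvol. A minimal sketch with a placeholder name follows; note it trades write safety on power loss for speed, so it is only appropriate for benchmarking:

```
# Placeholder name; sync=disabled acknowledges writes before they reach stable storage.
zfs set sync=disabled tank/testvol

# Verify, and revert to the default behaviour when done benchmarking:
zfs get sync tank/testvol
zfs set sync=standard tank/testvol
```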

For reference:

[Attachment: zfs iser.JPG]
 
  • Like
Reactions: Rand__

Bjorn Smith

Well-Known Member
Sep 3, 2019
901
497
63
50
r00t.dk
So either I'd need to wait until (if ever) ESXi supports NFSoRDMA, or move to iSCSI after all.
I think you will be disappointed even if ESXi supports NFSoRDMA: ESXi mounts its NFS shares with sync writes, meaning every single write is flushed to the underlying ZFS dataset.

This is why NFS with ESXi, while easy, will never be as performant as iSCSI, and I fail to see how running NFS over RDMA would change this; ESXi would probably still request sync writes.

So the only real option you have if you want to performance-tune ESXi and NFS is to make your sync writes as fast as you can, which means an extremely fast SLOG, and even then it might never be fast.

Meaning, you will never get the same speed as if you were writing locally on the server.
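For concreteness, an "extremely fast SLOG" in ZFS terms is just a dedicated log vdev on low-latency, power-loss-protected devices. A sketch, with pool and device names as placeholders:

```
# Add a mirrored SLOG (separate ZFS intent log) on fast NVMe devices (placeholder names).
zpool add tank log mirror /dev/nvme0n1 /dev/nvme1n1

# Per-dataset sync behaviour: "standard" honours the client's sync requests (what ESXi/NFS hits),
# "always" forces sync on everything, "disabled" ignores it (fast but unsafe).
zfs get sync tank/vmstore
```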

I have always run ESXi via NFS and have been tuning it forever, and at some point I just accepted that it was "slow" but good enough. But I would just switch to iSCSI. I know it's nice to be able to simply copy a VM's configuration and disks, but in reality that is what you have VEEAM for: to make backups of your VMs. If you stop treating the ability to copy a VM's configuration and disks as a requirement, then iSCSI suddenly looks much more attractive, since you can have failover and it should perform much better.
 

Rand__

Well-Known Member
Mar 6, 2014
6,660
1,783
113
(In case you never read it: https://forums.servethehome.com/ind...-up-or-the-history-of-my-new-zfs-filer.28179/ ;))

Long story short, I run NVDIMMs on my filer, with 2 pairs of PM1725a's. I get 3.0 GB/s+ (aggregated) when moving multiple VMs to the box, so NFS is fine for me (v4, multipathed). Of course, non-aggregated performance is worse, and that's where I'd hope RoCE would help speed things up (by reducing an individual transaction's latency).
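For reference, the "v4, multipathed" part on the ESXi side is just an NFS 4.1 datastore mounted with multiple server addresses. A sketch; the IPs, export path, and datastore name are placeholders:

```
# Mount an NFS 4.1 datastore with two server addresses for session trunking/multipathing.
esxcli storage nfs41 add --hosts=10.0.0.1,10.0.0.2 --share=/mnt/tank/vmstore --volume-name=zfs-nfs41

# Confirm the datastore and its configured hosts:
esxcli storage nfs41 list
```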

I have no idea whether it would meet expectations, but given what I have seen with iWARP on earlier attempts, it very well might work.
If I had a 5th Chelsio T62100 (https://forums.servethehome.com/index.php?threads/us-eu-wtb-chelsio-t62100-lp-cr.36241/) I'd just move everything over to iWARP, since that's supposed to work on TNC...
 

CoryC

New Member
Jan 24, 2025
9
5
3
Hey all, anyone still pursuing this in 2025?

Rand and I have chimed in on the new TN forums thread, but they still seem dead set against making it available.
(Support plans related to RoCE in storage protocols (iSER, NVMe-oF, SMB Direct, NFSoRDMA) - TrueNAS General - TrueNAS Community Forums)

I have been testing it using SCALE and a Debian initiator, against a brd (RAM-backed block device) so I didn't have to touch real storage yet, but I don't like that the GUI/middleware kills the iSER and FC config out of SCST. I don't have a solution for that other than going CLI-only.
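For the Debian initiator side, the open-iscsi iSER transport setup is roughly the following. A sketch only: the target IP and IQN are placeholders, and it assumes the RDMA/iSER kernel modules are available on both ends.

```
# On the target box: a RAM-backed block device for testing (8 GiB; rd_size is in KiB).
modprobe brd rd_nr=1 rd_size=8388608

# On the Debian initiator: create an open-iscsi interface that uses the iSER transport.
iscsiadm -m iface -I iser0 -o new
iscsiadm -m iface -I iser0 -o update -n iface.transport_name -v iser

# Discover and log in over that interface (IP and IQN are placeholders).
iscsiadm -m discovery -t sendtargets -p 10.0.0.10
iscsiadm -m node -T iqn.2025-01.com.example:ramdisk -p 10.0.0.10 -I iser0 --login
```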

Any thoughts?
 

tsteine

Active Member
May 15, 2019
181
90
28
I ended up abandoning TrueNAS SCALE and just rolling Ubuntu Server with ZFS, then setting up everything I need through the CLI.
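The core of that kind of setup is only a handful of commands. A sketch, with disk, pool, and zvol names/sizes as placeholders:

```
# Ubuntu Server: install ZFS, build a pool, and carve out a sparse zvol for block export.
apt install zfsutils-linux
zpool create -o ashift=12 tank mirror /dev/disk/by-id/nvme-DISK-A /dev/disk/by-id/nvme-DISK-B
zfs create -s -V 200G -o volblocksize=16k tank/vm01
```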

My conclusion ended up being that since I never logged into TrueNAS unless something was wrong, I might as well just configure everything I need myself. Once it is set up properly, it pretty much just works.

I see why this would suck though for users who are not technical and do not know how to properly set this up using a terminal.
 

gea

Well-Known Member
Dec 31, 2010
3,431
1,335
113
DE
NVMe-oF should work, and ksmbd on Linux claims SMB Direct support, although I have not seen any success reports. The deep integration of services into the TN web GUI may be a problem: you would have to use the CLI, which can cause conflicts with the GUI.
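For reference, an NVMe-oF/RDMA target can be built entirely through the in-kernel nvmet configfs interface, with no GUI involved. A sketch; the NQN, zvol path, and address are placeholders:

```
# Export a zvol over NVMe-oF/RDMA using the in-kernel nvmet target via configfs.
modprobe nvmet
modprobe nvmet-rdma
cd /sys/kernel/config/nvmet

# Subsystem and namespace (placeholder NQN and device path).
mkdir subsystems/nqn.2025-01.com.example:zvol1
echo 1 > subsystems/nqn.2025-01.com.example:zvol1/attr_allow_any_host
mkdir subsystems/nqn.2025-01.com.example:zvol1/namespaces/1
echo /dev/zvol/tank/vm01 > subsystems/nqn.2025-01.com.example:zvol1/namespaces/1/device_path
echo 1 > subsystems/nqn.2025-01.com.example:zvol1/namespaces/1/enable

# RDMA port (placeholder address), then bind the subsystem to it.
mkdir ports/1
echo rdma      > ports/1/addr_trtype
echo ipv4      > ports/1/addr_adrfam
echo 10.0.0.10 > ports/1/addr_traddr
echo 4420      > ports/1/addr_trsvcid
ln -s /sys/kernel/config/nvmet/subsystems/nqn.2025-01.com.example:zvol1 ports/1/subsystems/
```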

This is why Proxmox is my favourite Linux: it works as a VM platform, universal server, and storage server thanks to its solid OpenZFS integration. For ZFS management you can add a 3rd-party web GUI such as Cockpit, or my napp-it cs, a port from Solaris/Illumos that runs on Windows and can remotely manage OpenZFS servers like FreeBSD, OSX, or Linux/Proxmox.

SMB Direct/RDMA is the real killer feature, as SMB is what makes a NAS usable; on Windows, .vhdx files over SMB are a fast and zero-config alternative to iSCSI, though this currently still requires a Windows Server edition for the SMB server part.
 
  • Like
Reactions: pimposh

Bjorn Smith

Well-Known Member
Sep 3, 2019
901
497
63
50
r00t.dk
I see why this would suck though for users who are not technical and do not know how to properly set this up using a terminal.
Users that are not technical most likely do not care about this feature :) They most likely just want something easy that works every time.
 

CoryC

New Member
Jan 24, 2025
9
5
3
I see why this would suck though for users who are not technical and do not know how to properly set this up using a terminal.
Agreed. I did it through the CLI since that was the only way.

The big problem is that the middleware/GUI eats your config file, so the changes get dropped. I don't know if that happens only during an iSCSI change or on every reboot. Neither is ideal, since startup scripts would be needed.
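One possible startup-script approach, assuming the SCST userland tools are available on SCALE, is to keep a hand-maintained config out of the middleware's reach and reapply it after boot. A sketch; the file path is a placeholder:

```
# Save the working hand-built config once, somewhere the middleware never touches:
scstadmin -write_config /root/scst.custom.conf

# Post-init script: reapply it after the middleware has (re)written its own config.
scstadmin -config /root/scst.custom.conf
```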

I didn't want to take the time to roll my own. I like not having to install all the supporting email stuff, etc. Guess I'm just gonna have to.

I keep looking through FreeBSD for iSER, but it seems to support only the initiator side. Anyone know otherwise? I can't tell from the man pages, but that's what I take from them. Plus Mellanox/NVIDIA didn't release OFED past FreeBSD 12, I think.
 

tsteine

Active Member
May 15, 2019
181
90
28
Users that are not technical most likely do not care about this feature :) They most likely just want something easy that works every time.
I agree, though I will say I have seen people who have no idea what they are doing hear about "high-performance features" and get irrationally hung up on having them.
But yeah, to me, the TrueNAS package really only provides convenience.
 

CoryC

New Member
Jan 24, 2025
9
5
3
Not me; I irrationally build enterprise systems in my basement and try to push the limits of no vendor support to the max. Instead of running all open source like a good little home-labber, I'm running Cisco, VMware, Microsoft, Brocade FC, Mellanox/NVIDIA, etc., all without support...

Next up, Mellanox SX6012 switch
 
  • Haha
Reactions: tsteine