FreeNAS/TrueNAS RDMA Support (FR for voting)


Rand__

Well-Known Member
Mar 6, 2014
Hm, I see; that indeed does not necessarily sound attractive.
So either I'd need to wait until (if ever) ESXi runs NFSoRDMA, or move to iSCSI after all.

tsteine said:
I have done this with Ubuntu, SPDK, and ESXi with zvol backing. It's not an easy "snap your fingers" setup, and the gains vs. iSCSI over iSER are basically nonexistent, since the bottleneck seems to be the ZFS file system/zvol rather than the network protocol running over RDMA.
Does that imply you're able to reach similar speeds remotely as locally when running, say, fio on a datastore/zvol? That would be a significant improvement in my eyes.
 

Connorise

Member
Mar 2, 2017
US. Cambridge
iSER is a dead end. Even if the target is added, there are still issues with iSER initiators. Most vendors have simply switched to NVMe-oF, and as far as I can see, there are no plans to evolve iSER at all.
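
For anyone curious what the NVMe-oF route actually looks like on the initiator side, here is a minimal sketch using Linux nvme-cli; the IP, port, and NQN below are placeholders, not anything from a real setup:

Code:
# load the RDMA transport for NVMe-oF
modprobe nvme-rdma
# discover subsystems exported by the target (address/port are examples)
nvme discover -t rdma -a 192.168.10.2 -s 4420
# connect by NQN; the namespace then appears as a local block device
nvme connect -t rdma -a 192.168.10.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
nvme list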
 

tsteine

Active Member
May 15, 2019
Rand__ said:
Does that imply you're able to reach similar speeds remotely as locally when running, say, fio on a datastore/zvol? That would be a significant improvement in my eyes.
I cannot say; I never tested fio against the zvol locally on the machine. I simply tested exposing a zvol over iSER and NVMe-oF without seeing any difference in peak 4K IOPS; whether the bottleneck was on the ESXi server or the zvol/ZFS server is actually not clear. (The ZFS server was a 10-core Intel Xeon with quad-channel DDR4-2666 CL19 ECC memory, 256 GB of RAM, running Ubuntu with OpenZFS 2.0.)
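
For anyone wanting to reproduce the NVMe-oF leg of that test: with SPDK, a zvol gets exposed roughly along these lines. This is only a sketch; the SPDK binary paths, pool/zvol names, NQN, and IP are placeholders:

Code:
# start the SPDK NVMe-oF target, then configure it over JSON-RPC
./build/bin/nvmf_tgt &
# wrap the zvol's block device in an AIO bdev (512 B logical blocks assumed)
./scripts/rpc.py bdev_aio_create /dev/zvol/tank/vmvol zvol0 512
# RDMA transport, a subsystem open to any host, then namespace and listener
./scripts/rpc.py nvmf_create_transport -t RDMA
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 zvol0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.10.2 -s 4420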

I did see pretty spectacular peak sequential transfers on a Windows VM, though. I should note I had disabled sync writes completely on the ZFS datasets, as I was testing exposing a ZFS zvol over the iSER protocol rather than the underlying storage devices.

For reference:

[attached screenshot: zfs iser.JPG]
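
If you want to run comparable numbers yourself, fio runs along these lines would exercise the same paths; the dataset and device names are placeholders, and sync=disabled mirrors the caveat above (do not do this with data you care about):

Code:
# disable sync semantics on the dataset under test
zfs set sync=disabled tank/vmvol
# peak 4K random IOPS against the zvol
fio --name=rand4k --filename=/dev/zvol/tank/vmvol --direct=1 --ioengine=libaio \
    --rw=randread --bs=4k --iodepth=32 --numjobs=4 --runtime=60 --time_based --group_reporting
# peak sequential throughput
fio --name=seq1m --filename=/dev/zvol/tank/vmvol --direct=1 --ioengine=libaio \
    --rw=read --bs=1M --iodepth=8 --runtime=60 --time_based --group_reporting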
 

Bjorn Smith

Well-Known Member
Sep 3, 2019
r00t.dk
Rand__ said:
So either I'd need to wait until (if ever) ESXi runs NFSoRDMA, or move to iSCSI after all.
I think you will be disappointed even if ESXi supports NFSoRDMA. ESXi mounts its NFS shares with sync writes, meaning every single write is flushed to the underlying ZFS dataset.

This is why NFS with ESXi, while easy, will never be as performant as iSCSI, and I fail to see how this would change even if NFS were running over RDMA: ESXi would probably still request sync writes.

So the only real option you have, if you want to performance-tune ESXi and NFS, is to make your sync writes as fast as you can. That means an extremely fast SLOG, and even then it might never be fast.

Meaning, you will never get the same speed as if you were writing locally on the server.
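
If you do go down that road, the SLOG itself is at least easy to add; a minimal sketch, assuming a pool named tank, a dataset named vmstore, and two spare low-latency NVMe devices:

Code:
# mirror two fast, power-loss-protected devices as a dedicated log vdev
zpool add tank log mirror /dev/nvme0n1 /dev/nvme1n1
# confirm the log vdev is attached
zpool status tank
# leave sync at standard; ESXi requests sync writes on its own
zfs get sync tank/vmstore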

I have always run ESXi via NFS and have been tuning it forever; at some point I just accepted that it was "slow" but good enough. Still, I would just switch to iSCSI. I know it's nice to be able to simply copy a VM's configuration and disks, but in reality that is what you have VEEAM for: making backups of your VMs. If you stop treating the ability to copy a VM's configuration and disks as a requirement, iSCSI suddenly looks much more attractive, since you can have failover and it should perform much better.
 

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
(In case you never read it: https://forums.servethehome.com/ind...-up-or-the-history-of-my-new-zfs-filer.28179/ ;))

Long story short, I run NVDIMMs on my filer, with two pairs of PM1725a's. I get 3.0 GB/s+ (aggregated) when moving multiple VMs to the box, so NFS is fine for me (v4, multipathed). Of course, non-aggregated performance is worse, and that's where I'd hope RoCE would help speed things up (by reducing an individual transaction's latency).
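
For reference, the multipathed NFS 4.1 mount on the ESXi side looks roughly like this; the server addresses, export path, and datastore name are placeholders:

Code:
# mount one NFS 4.1 datastore over two server addresses (session trunking)
esxcli storage nfs41 add -H 10.0.0.1,10.0.0.2 -s /mnt/tank/vmstore -v nfs-ds
# verify both addresses are attached
esxcli storage nfs41 list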

I have no idea whether it would meet expectations, but given what I have seen with iWARP in earlier attempts, it very well might.
If I had a fifth Chelsio T6200 (https://forums.servethehome.com/index.php?threads/us-eu-wtb-chelsio-t62100-lp-cr.36241/) I'd just move everything over to iWARP, since that's supposed to work on TNC...