FreeNAS/TrueNAS RDMA Support (FR for voting)


tsteine

Active Member
May 15, 2019
167
83
28
I think the crux of the issue here is that upstream FreeBSD does not support iSER target functionality (yet),
so they would have to invest time not only in adding support for it in their GUI and the FreeNAS/TrueNAS distribution, but also in implementing iSER in the FreeBSD iSCSI target software.

I think it would be far more likely for RDMA/iSER support to appear in TrueNAS if it were first implemented in the upstream FreeBSD iSCSI target, so it becomes a matter of simply exposing the feature in FreeNAS/TrueNAS instead of having to develop the functionality for the target software themselves.
 

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
Well, you are correct o/c, but they are looking for ways to improve FreeNAS/TrueNAS in the medium term - RDMA is not something I'd expect any time soon.

But IMHO it's *the* next step they should take to push usable speeds to the next level, and it would be great if they at least saw some requests for it :)
 

tsteine

Active Member
May 15, 2019
167
83
28
Don't get me wrong here, I would love nothing more than to see RDMA support and iSER in TrueNAS. Once you pass 10Gbit, RDMA starts becoming a necessity to keep throughput high and CPU overhead low.

The somewhat obscure point I was making was that it might be more fruitful to get RDMA and iSER support into FreeBSD through the open-source community, rather than getting iXsystems to invest time and money into doing it.
 

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
Well, if you look at how long it's been out there, then I think at this point there is no one in the community willing or able (for whatever reason - funds, time, motivation) to work on it - so I am afraid a plea won't do much good - o/c it couldn't hurt either ;)
 

Bjorn Smith

Well-Known Member
Sep 3, 2019
876
481
63
49
r00t.dk
I thought FreeBSD already had support for iSER? But it seems like it's only as an initiator - that sucks.

Although:

So it might be possible.
 

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
Unfortunately most of the RDMA protocols are only supported as initiator :(
NVMe-oF, NFS over RDMA - all only usable as client.
 

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
Got reminded to update here that the FR has been closed by iX due to too little reciprocation, i.e. not enough people showed interest.
 

Bjorn Smith

Well-Known Member
Sep 3, 2019
876
481
63
49
r00t.dk
Don't worry, when TrueNAS Scale is released you'll have Debian underneath, and I am sure RDMA is just a module to load. iXsystems really don't like to do stuff that others suggest.
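For what it's worth, on a stock Linux box the iSER target side really is mostly a module plus a bit of configuration. A rough sketch, assuming an RDMA-capable NIC, the rdma-core tools, and an already configured LIO iSCSI target (the IQN and portal address below are made up):

# check that the RDMA stack actually sees the NIC
ibv_devinfo
# load the LIO iSER target fabric module
modprobe ib_isert
# flip an existing iSCSI portal over to iSER (hypothetical IQN/portal)
targetcli /iscsi/iqn.2022-04.lab.example:tank/tpg1/portals/192.168.10.5:3260 enable_iser boolean=true

Whether iX would ever expose something like that in the UI is another question, of course.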
 

i386

Well-Known Member
Mar 18, 2016
4,217
1,540
113
34
Germany
Got reminded to update here that the FR has been closed by iX due to too little reciprocation, i.e. not enough people showed interest.
I'm not surprised. Implementing and supporting RDMA on FreeBSD requires commitment, and iX is moving TrueNAS to Linux as a base instead of FreeBSD...
 

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
True.

Just surprising that they don't see a need for that, given that it's a feature that would mostly benefit enterprise customers.
But maybe TN is not actually used in a role requiring this in enterprises (i.e. not as a SAN replacement).
 

tsteine

Active Member
May 15, 2019
167
83
28
True.

Just surprising that they don't see a need for that, given that it's a feature that would mostly benefit enterprise customers.
But maybe TN is not actually used in a role requiring this in enterprises (i.e. not as a SAN replacement).
Frankly, no matter what you are doing, as long as it involves storage, I/O, and >10Gbit networking, it's a good idea to start thinking about supporting RDMA. Supporting technologies like NFS over RDMA and being able to move massive amounts of data with very little CPU intervention is simply a good idea for any storage appliance.
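To illustrate how little is involved on the Linux side once the kernel pieces exist, a rough sketch (hypothetical hostname and export path; 20049 is the IANA-assigned NFS-over-RDMA port):

# server: load the NFS/RDMA transport and add an RDMA listener
modprobe rpcrdma
echo "rdma 20049" > /proc/fs/nfsd/portlist

# client: mount the export over RDMA instead of TCP
modprobe rpcrdma
mount -t nfs -o vers=4.2,proto=rdma,port=20049 storage01:/tank/vm /mnt/vm

All the data movement then bypasses the socket copy path, which is exactly where the CPU savings come from.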
 
  • Like
Reactions: sovking and Rand__

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113

[attached screenshot: iX's response to the feature request]

Now that explains a lot... they don't think it's useful...
 

tsteine

Active Member
May 15, 2019
167
83
28
That is a complete misunderstanding of what RDMA accomplishes. The point is not to access RAM directly; it is to eliminate the overhead of memory copies and the TCP processing stack, which brings huge CPU usage and latency benefits. This just screams not actually understanding what RDMA is or does. It's shocking, in fact.
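It's easy to see for yourself with the standard tools (perftest and iperf3); a quick sketch, assuming two RDMA-capable hosts, a Mellanox-style device name, and a made-up hostname:

# TCP baseline: push the link and watch per-core CPU in another terminal
iperf3 -s                              # on the server
iperf3 -c storage01 -P 4 -t 30         # on the client
mpstat -P ALL 1                        # note the %sys/%soft burned on copies and the stack

# RDMA: same link, one-sided writes bypass the kernel network stack
ib_write_bw -d mlx5_0 --report_gbits               # on the server
ib_write_bw -d mlx5_0 --report_gbits storage01     # on the client

The throughput numbers may look similar at 25GbE, but the CPU time spent getting there is worlds apart.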
 

BoredSysadmin

Not affiliated with Maxell
Mar 2, 2019
1,050
437
83
That is a complete misunderstanding of what RDMA accomplishes. The point is not to access RAM directly; it is to eliminate the overhead of memory copies and the TCP processing stack, which brings huge CPU usage and latency benefits. This just screams not actually understanding what RDMA is or does. It's shocking, in fact.
To the best of my knowledge, RDMA is pointless even with 12Gb/s SAS SSDs. One has to go to low-latency NVMe (or, even better, Optane) drives for RDMA to be useful for low-latency network access to storage. Maybe low-latency all-flash storage isn't one of iXsystems' current priorities.
Plus it's not "free" in other respects - a much more complex networking setup and very limited scale-out.
 
  • Like
Reactions: T_Minus

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
But how simple is an NVMe system nowadays? I mean, if you're not looking at 8+ TB enterprise NVMe, that stuff is basically dirt cheap...

FreeBSD presentation from 2018 - 4 disks only - ~5% loss over local connection.
Remote performance loss has always been *the* issue with TrueNAS - I really don't get it.
[attached benchmark chart from the 2018 FreeBSD presentation]
 
  • Like
Reactions: XeonSam

tsteine

Active Member
May 15, 2019
167
83
28
To the best of my knowledge, RDMA is pointless even with 12Gb/s SAS SSDs. One has to go to low-latency NVMe (or, even better, Optane) drives for RDMA to be useful for low-latency network access to storage. Maybe low-latency all-flash storage isn't one of iXsystems' current priorities.
Plus it's not "free" in other respects - a much more complex networking setup and very limited scale-out.
Certainly - my comment above was meant more broadly than just TrueNAS Scale and ZFS. The benefit, and what I've been missing from TrueNAS with regard to RDMA, is that once you move to 25/100GbE, 50/200GbE, etc., the cost of TCP stack processing and CPU interrupts becomes enormous.

To pose a purely hypothetical question, what does ignoring a technology that might enable your storage appliances to accomplish the same level of performance with an 8 core cpu, vs a 32 core cpu, for a given workload, say about a storage vendor?
 

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
  • Like
  • Haha
Reactions: efschu3 and tsteine

BoredSysadmin

Not affiliated with Maxell
Mar 2, 2019
1,050
437
83
Certainly - my comment above was meant more broadly than just TrueNAS Scale and ZFS. The benefit, and what I've been missing from TrueNAS with regard to RDMA, is that once you move to 25/100GbE, 50/200GbE, etc., the cost of TCP stack processing and CPU interrupts becomes enormous.

To pose a purely hypothetical question, what does ignoring a technology that might enable your storage appliances to accomplish the same level of performance with an 8 core cpu, vs a 32 core cpu, for a given workload, say about a storage vendor?
I can't speak for iXsystems, I could only speculate based on my own experiences. ZFS is IMHO is clearly better suited for disk storage (vs all-flash) and to run into a limitation of even 25gig, one must have a pretty significant Truenas system. As for CPU saving, in our humble 7 nodes Nutanix (dual 25gig nics, all-flash with SAS-SSDs and 1.5TB ram each node) RDMA was discussed with experts and shot down for reasons I mentioned above, mainly latency benefits wouldn't be noticeable - ie : SAS SSD latency is higher than latency of normal TCP stack and 25gig networking wasn't fast enough to justify RDMA for CPU cycles savings where modern NICs already offload many of TCP stack processing.