Gaming from RDMA storage?

solon

Member
Apr 1, 2021
41
2
8
Hey all,

I have been considering getting some EOL QSFP+ gear, as my current 1GbE setup annoys me when copying large files around my home network. In the course of my investigations I got to wondering whether anyone regularly games over NFS/RDMA in a satisfactory way?

As is customary for me, the idea is suffering a bit of scope creep: I found myself investigating PCIe cards with NVMe slots and 2TB drives, in an effort to come up with a solution that could utilize most of a 40Gb link. On that train of thought, I began to wonder whether I could avoid copying the large files altogether by moving my Steam libraries onto remote RDMA-backed drives.
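For what it's worth, the Linux client side of this is just an NFS mount with the RDMA transport option. A minimal sketch, assuming an NFSoRDMA-capable server; the hostname and paths here are made up:

```shell
# Load the client-side NFS RDMA transport module.
sudo modprobe rpcrdma

# Mount the Steam library export over RDMA; 20049 is the standard
# NFSoRDMA port. "server" and both paths are placeholders.
sudo mount -t nfs -o rdma,port=20049 server:/tank/steam /mnt/steam
```

Steam can then be pointed at /mnt/steam as an additional library folder.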

My home server that would accommodate the gear is a Supermicro X10DRH-iT with two Xeon E5-2650L v3s. It's currently running Ubuntu, but recent releases have been doing things I don't consider improvements, so I will probably go back to Debian soon. It currently has 32GB RAM, with more on the way, as I'm also considering getting a second internet connection and load-balancing the two with a pfSense VM. Working from home with lots of videoconferencing, for both me and my better half, is interfering with downloads a little too often to be acceptable.

If someone is gaming from a remote rdma drive, I'd love to hear about their experiences.
 

i386

Well-Known Member
Mar 18, 2016
2,631
759
113
32
Germany
I used to have my entire Steam library on my NAS and it worked fine without RDMA.
For Origin I had to use iSCSI, because Origin only works with "local" devices.
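For anyone wanting to replicate the iSCSI route on a Linux server, a minimal sketch with targetcli (the LIO target); the backing file, size, and IQNs are all illustrative:

```shell
# Create a file-backed LUN and export it over iSCSI (LIO/targetcli).
# All names, paths, and IQNs below are made up for this example.
sudo targetcli /backstores/fileio create origin_games /tank/origin.img 500G
sudo targetcli /iscsi create iqn.2021-04.lan.home:origin
sudo targetcli /iscsi/iqn.2021-04.lan.home:origin/tpg1/luns \
    create /backstores/fileio/origin_games
sudo targetcli /iscsi/iqn.2021-04.lan.home:origin/tpg1/acls \
    create iqn.2021-04.lan.home:gamingpc
sudo targetcli saveconfig
```

The gaming PC then connects with the Windows iSCSI Initiator, and the game sees an ordinary local disk.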
 

Rand__

Well-Known Member
Mar 6, 2014
5,573
1,216
113
The problem with non-RDMA traffic might be his CPU, which could limit single-threaded performance...
 

jerryxlol

Member
Nov 27, 2016
32
5
8
29
Like i386 says, I have been running my Steam library from a NAS, a Windows 10 share over RDMA, and it worked without a problem too.

Loading times from a 10-drive array of 3TB disks were a little slower than today's NVMe, but who would waste 2TB of NVMe on games anyway?
The NAS CPU was an i3-6100; while gaming there was no load on the NAS's CPU. The NICs were Mellanox ConnectX-3 10GbE.

I would recommend that.
 

solon

Member
Apr 1, 2021
41
2
8
Yeah, me too; I'd waste NVMe on games without a second thought. Yesterday I caught myself considering looking for bits to build a 6x2TB RAIDZ2 to run OSes off of. Managed to stop myself though, hah! So there.

It really isn't about economy for me; I'm not looking for the "minimum viable solution". I find 1Gbit annoying, and looking at 10GbE over RJ45 I came to the conclusion that 40Gbit to NVMe would be a lot more interesting to play with, would probably exceed 10GbE-over-RJ45 speeds, and would make RDMA storage possible, with the option of eventually running almost everything remotely. It just seems to have a lot more potential to keep me occupied with interesting things than a boring plug-and-play 10GbE-over-Cat7 solution or the like.

@Rand__ As far as CPU usage goes, I take it this would be a concern for the switching traffic? It's not likely to keep my IP-over-InfiniBand under 1Gbit, is it?

I'm obviously not an expert, but the RDMA stuff should put minimal load on the CPU, shouldn't it? That's the reason for RDMA's existence, according to what I've been reading. Also, it's not a disaster if I have to run 1Gbit RJ45 in parallel, though obviously I'd prefer not to.

What I'm gathering is that none of the responders have done exactly this, but that it's very likely to work fine, seeing as solutions that ought to cause more CPU load, and have less bandwidth, work too.
 

Rand__

Well-Known Member
Mar 6, 2014
5,573
1,216
113
I would assume it should work.
The CPU load comment was with regard to NVMe-based storage without RDMA, as (single-access) NVMe performance scales with single-threaded CPU performance.
 

kapone

Well-Known Member
May 23, 2015
1,045
615
113
I'm obviously not an expert, but the rdma stuff should put minimal load on the CPU shouldn't it?
RDMA by itself would add very little load on the CPU. But serving files at 40GbE... :) That is not trivial at all when it comes to the CPU.

I run a dual/replicated SAN (Starwind vSAN) for my business, with the SAN connectivity at 40GbE. The SAN is configured for RDMA as well (iSER via ESXi). Each SAN node has a Supermicro X9SRL-F motherboard, an E5-1620 v2, 128GB of RAM and the associated networking and storage bits. The NIC is a Mellanox CX3 Pro with dual 40GbE ports. The servers run Windows Server 2016 natively, with Starwind vSAN as the iSCSI layer.

Windows Server 2016 supports RDMA natively over SMB3, and Starwind vSAN supports it over iSCSI (iSER). The E5-1620 v2 is a 4C/8T CPU @ 3.7GHz that turbos to 3.9 (I think).

Forgetting Starwind for a sec: serving files/data at 40GbE just from Windows consumes ~75% of this CPU, and this is a fast CPU. The underlying storage is all flash, so storage is not the bottleneck. Filling a 40GbE pipe with data just takes a lot of CPU.

Something to think about.
 

solon

Member
Apr 1, 2021
41
2
8
I see your point. Thing is, it obviously isn't a disaster if I can't achieve the full speed; at this point, if I can match 10GbE it will have checked all the boxes that make the project worthwhile for me. Still, the E5-2650L v3 Xeons are 12C/24T. They only turbo to 2.5GHz, and obviously only as long as they stay under the 65W thermal limit. It's going to be very rare that even 2 users simultaneously make any substantial demand on the storage array.

Looking at some v2 vs v3 Xeon benchmarks, 75% of a 3.9GHz v2 should be within the realm of possibility for 100% of a 2.5GHz v3; even non-turbo'd at 1.8GHz, I'd say it shouldn't be too much of a performance hit.

The general impression I'm getting is that I may hit some limits, but it's exceedingly unlikely that it won't be able to exceed 10GbE performance, which is enough to make me happy for now. Time to find some Chinese comrades with the necessary bits and figure out how I'm going to get a massive fibre plug through the floor. Oh, and modify a switch to be mostly silent...
 

solon

Member
Apr 1, 2021
41
2
8
Do you mean you weren't able to match that on a 40Gbit RDMA link with IPoIB, or over a regular 10GbE RJ45 link?

Also, I'm not sure I understand; I take it this was while copying the game "Payday 2"? 10GbE while playing can't possibly be relevant, can it?
 

i386

Well-Known Member
Mar 18, 2016
2,631
759
113
32
Germany
Local storage :D

Steam patching/verifying the game and playing it. I used perfmon (not the Resource Monitor!) for monitoring IO.
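For anyone who wants to reproduce that kind of monitoring from the command line, typeperf can sample the same counters perfmon uses. A sketch, assuming the counter names from a default English-language Windows install:

```shell
:: Sample disk and SMB client throughput once a second, 30 samples.
:: Counter names assume a default English-language Windows install.
typeperf "\PhysicalDisk(_Total)\Disk Read Bytes/sec" ^
         "\SMB Client Shares(*)\Data Bytes/sec" -si 1 -sc 30
```

Watching "SMB Client Shares" next to "PhysicalDisk" makes it easy to see whether the game is actually pulling data over the wire or just hitting the local cache.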
 

solon

Member
Apr 1, 2021
41
2
8
You have to wonder where the bottleneck in that is though. More likely to be software than hardware, maybe.