Recommendations for ESXi 7 to TrueNAS InfiniBand RDMA?


Agrikk

Member
Sep 6, 2013
Oakland, CA
Now that I’ve built a pair of all-solid-state iSCSI targets using TrueNAS CORE, I’d like to upgrade my network to provide the lowest latency from these boxes to my 3-node ESXi 7.0.3 cluster, and I’m looking for recommendations.

PCIe slots are at a premium on my boxes, so I’m thinking a dual-port HBA per server and a managed switch. I’m budgeting roughly $50 per NIC and ~$300 for the switch, but that’s flexible.

I’d like to go RDMA over InfiniBand, but I’d consider RoCE if a good argument were made.



My current setup is a Force10 S4810 10G SFP+ switch and ConnectX-3 dual-port SFP+ NICs, with one port for VM traffic and one port for iSCSI traffic. Right now I’m toying with the idea of a standalone RDMA network for iSCSI traffic while preserving the 10G fiber network for user traffic.

Does anyone know of a parts list and/or guide to make this happen?
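
From what I’ve read, the ESXi side of this would be handled through the esxcli rdma and iser namespaces; a rough sketch of what I expect the initiator setup to look like once RDMA-capable NICs are in place (device and vmhba names will differ per host):

# Confirm ESXi sees the RDMA-capable devices (the ConnectX ports should be listed)
esxcli rdma device list

# Create the software iSER initiator adapter
esxcli rdma iser add

# Rescan so the new iSER vmhba appears, then bind targets/portals as with normal iSCSI
esxcli storage core adapter rescan --all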
 

TRACKER

Active Member
Jan 14, 2019
TrueNAS SCALE supports RDMA for iSCSI (iSER), but not over InfiniBand, just regular Ethernet-based RDMA :)
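
If you want to sanity-check that the ConnectX ports show up as RoCE-capable under SCALE's Debian userland, the standard Linux RDMA tools should work (assuming iproute2's rdma utility and libibverbs are available; this is a generic check, not a TrueNAS-specific procedure):

# List RDMA link devices; Ethernet-mode ConnectX ports should show a RoCE link layer
rdma link show

# More detail on the verbs devices, port state and link layer
ibv_devinfo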
 

efschu3

Active Member
Mar 11, 2019
Current TrueNAS SCALE is Debian-based, so everything you can do with Debian you can do with TNS.

But not the clicky web-frontend way. You need to do it via the CLI.
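
As a rough sketch of what the stock-Debian CLI route could look like for an iSER-enabled LIO target (hypothetical IQN and portal address; note that TrueNAS SCALE's middleware manages its own iSCSI configuration, so manual changes like this may conflict with or be overwritten by the web UI):

# Load the iSER target fabric module and confirm an RDMA device is present
modprobe ib_isert
rdma link show

# Enable iSER on an existing portal of the target's TPG via targetcli
targetcli /iscsi/iqn.2005-10.org.example:target1/tpg1/portals/192.168.50.10:3260 enable_iser true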
 