ESXi 6.7 & InfiniBand - state


Rand__

Well-Known Member
Has anyone recently dabbled with ESXi & InfiniBand?
I know it worked with the old 1.8.2 drivers in 5.5/6.0, and things have gone downhill since then, but I haven't been able to find much recent information on this...

Ideally I'd want ESXi to mount NFS/iSCSI shares via IB, potentially from a Solaris box or optionally Linux...
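For reference, the mount itself is the easy part once the network path exists; a rough, untested sketch like this (host IP, export path and datastore name are placeholders, nothing from a real setup) is all the ESXi side needs, whatever the traffic rides on underneath:

    import subprocess

    # Rough sketch, untested: add an NFSv3 datastore from the ESXi shell
    # (ESXi ships a Python interpreter). The host IP, export path and
    # datastore name below are placeholders.
    def add_nfs_datastore(host_ip, share, name):
        subprocess.run(
            ["esxcli", "storage", "nfs", "add",
             "--host", host_ip, "--share", share, "--volume-name", name],
            check=True,
        )

    if __name__ == "__main__":
        add_nfs_datastore("192.168.10.50", "/tank/vmstore", "ib-nfs")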
 

markpower28

Active Member
IB is history for ESXi; Ethernet has been the only option since 6.5.

iSER has an out-of-the-box driver in 6.7.
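If I understand the 6.7 flow right, you create the iSER vmhba on top of an RDMA-capable uplink and then bind a vmkernel port to it like any software iSCSI adapter; roughly like this (untested, and the vmhba/vmk names are placeholders for whatever your host reports):

    import subprocess

    def run(cmd):
        # Thin wrapper so the esxcli calls below stay readable.
        return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

    # Rough, untested sketch of the 6.7 iSER bring-up as I understand it:
    # create the iSER adapter (it shows up as a new vmhba), list the iSCSI
    # adapters to find its name, then bind a vmkernel port to it.
    # "vmhba65" and "vmk1" are placeholders.
    run(["esxcli", "rdma", "iser", "add"])
    print(run(["esxcli", "iscsi", "adapter", "list"]))
    run(["esxcli", "iscsi", "networkportal", "add", "-A", "vmhba65", "-n", "vmk1"])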
 

Rand__

Well-Known Member
That would be iSER based on RoCE then, I assume? Or iWARP? (I've primarily been using NFS, so I'm not too deep into iSCSI yet.)

And if I insist on IB (due to just having bought an IB switch ;)), then I'd probably best pass it through to a VM and handle it there.
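From memory, the passthrough route is just: mark the HCA for passthrough in the host client, reboot, add it to the VM as a PCI device; the VM then ends up with .vmx entries roughly like in this sketch (the PCI address and IDs are made up, and the key names are from memory, so treat it as illustrative only):

    # Rough sketch only: after marking the HCA for passthrough and rebooting,
    # adding it to a VM as a PCI device leaves .vmx entries roughly like
    # these. The PCI address and device ID are placeholders (0x15b3 is
    # Mellanox's vendor ID); the key names are from memory.
    VMX_SNIPPET = """\
    pciPassthru0.present  = "TRUE"
    pciPassthru0.id       = "0000:81:00.0"
    pciPassthru0.vendorId = "0x15b3"
    pciPassthru0.deviceId = "0x1013"
    """

    def append_passthru(vmx_path):
        # Append the placeholder entries to a powered-off VM's .vmx file.
        with open(vmx_path, "a") as f:
            f.write(VMX_SNIPPET)

    if __name__ == "__main__":
        append_passthru("/vmfs/volumes/datastore1/ib-vm/ib-vm.vmx")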
 

dswartz

Active Member
I'm pretty sure that as of 6.7, VMware only supports RoCE. iWARP is basically dead...
 

DASHIP

New Member
Have y'all seen this VMware article on using SR-IOV InfiniBand adapters with ESXi 7 or later?:

It's used for both cluster networking and storage, as well as for vNICs within the VMs for inter-VM networking, all at near-bare-metal speed. Pretty cool. With relatively cheap IB adapters and switches out there, this would make the hot setup for a lab cluster...
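If it's the SR-IOV route, the host-side prep on ConnectX-4/5 should just be telling the native Mellanox driver to expose virtual functions and rebooting; something like this sketch (the module name nmlx5_core and the max_vfs parameter are from memory, the VF count is arbitrary, untested here):

    import subprocess

    # Rough sketch, untested: ask the native ConnectX-4/5 driver for 4
    # virtual functions, then reboot so they appear. Module name and
    # parameter are from memory; check the driver docs for your card and
    # driver version.
    subprocess.run(
        ["esxcli", "system", "module", "parameters", "set",
         "-m", "nmlx5_core", "-p", "max_vfs=4"],
        check=True,
    )
    print("Reboot the host, then hand the VFs to VMs as SR-IOV adapters.")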

I also thought this article on setting up a PVRDMA cluster using 100G InfiniBand with near-bare-metal performance (not quite as good as the above method, but pretty good) was interesting:

Here's a VMware tech paper on the same:
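The PVRDMA side, as far as I can tell, mostly comes down to tagging a vmkernel NIC for PVRDMA traffic and opening the firewall rule on each host, plus giving the VMs a PVRDMA adapter; a rough sketch (vmk1 is a placeholder, and the advanced-option / ruleset names are from memory, so double-check them against the paper):

    import subprocess

    def esxcli(*args):
        # Helper for the per-host PVRDMA prep calls below.
        subprocess.run(["esxcli", *args], check=True)

    # Rough, untested sketch of the per-host PVRDMA prep as I understand it:
    # tag a vmkernel NIC for PVRDMA and open the pvrdma firewall ruleset.
    # "vmk1" is a placeholder; option and ruleset names are from memory.
    esxcli("system", "settings", "advanced", "set",
           "-o", "/Net/PVRDMAVmknic", "-s", "vmk1")
    esxcli("network", "firewall", "ruleset", "set",
           "-r", "pvrdma", "-e", "true")
    # After that the VM gets a "PVRDMA" adapter on the distributed switch
    # and the guest loads the vmw_pvrdma driver.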
 

Rand__

Well-Known Member
It's used for both cluster networking and storage, as well as for vNICs within the VMs for inter-VM networking, all at near-bare-metal speed. Pretty cool. With relatively cheap IB adapters and switches out there, this would make the hot setup for a lab cluster...
It's *not* used for VMware cluster/storage networking in that example; everything is within the VMs...

It would be funny if they brought back IB networking after removing it with "nobody wants this" and "too complicated" just a few years ago :p

Of course they enabled RoCE nowadays, so that's a step forward :)
 

DASHIP

New Member
It's *not* used for VMware cluster/storage networking in that example; everything is within the VMs...

It would be funny if they brought back IB networking after removing it with "nobody wants this" and "too complicated" just a few years ago :p

Of course they enabled RoCE nowadays, so that's a step forward :)
I was referring to this comment in the paper:
[attached screenshot of the relevant passage from the paper]
 

Rand__

Well-Known Member
Yeah, they run the CX4 in Ethernet mode for base connectivity and pass the CX5 through for IB use in the VMs.
 

DASHIP

New Member
Yeah, they run the CX4 in Ethernet mode for base connectivity and pass the CX5 through for IB use in the VMs.
Ahhhh, I missed that and assumed the CX4 was in IB mode... It would be nice to run vSAN across IB, especially considering the bandwidth and latency available with all-NVMe SSD storage. I wonder if anyone has configured that before?
 

Rand__

Well-Known Member
There are no IB drivers in ESXi. As mentioned, you can run Ethernet/RoCE now, for vSAN at least :)
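For anyone wanting to poke at it: vSAN over RDMA (RoCE v2) only showed up around 7.0 U2 as far as I know, and checking whether a host is even a candidate is just a couple of read-only esxcli calls, e.g.:

    import subprocess

    def esxcli(*args):
        # Read-only queries; nothing here changes host config.
        return subprocess.run(["esxcli", *args],
                              capture_output=True, text=True, check=True).stdout

    # Quick check of whether the host has RDMA-capable uplinks and what the
    # vSAN network config currently looks like. (Actually flipping vSAN to
    # RDMA is done per cluster in vCenter rather than via esxcli, as far as
    # I know.)
    print(esxcli("rdma", "device", "list"))
    print(esxcli("vsan", "network", "list"))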