Intel Omni-Path in the homelab for direct connect


DieBlub

Member
Dec 11, 2017
Hi guys,
I've seen some relatively cheap Intel 100Gbps Omni-Path adapters (100HFA016LS) on eBay recently and was wondering if anyone has dealt with them in a homelab setting, as I haven't been able to find much info on them apart from HPC/datacenter material. I'm mostly interested in whether they can be used as a cheap, high-speed Ethernet alternative and as a direct connect between two storage servers - so basically something like IPoIB? Are there any gotchas I should be aware of? Am I missing something that might turn out to be highly expensive in the end, or am I good to go for playing around with these as a relatively noobish person? Eager to hear your thoughts. Cheers!
 

LodeRunner

Active Member
Apr 27, 2019
I don't know if they'll do point-to-point, and I'm pretty sure you'll need Omni-Path switches if they don't. I also can't recall whether the OPA adapters will work in any server, or whether they require OPA-equipped Xeons in Intel-only boards.
 

DieBlub

Member
Dec 11, 2017
Hey @LodeRunner, thanks for chiming in. I've only briefly looked at the Omni-Path documentation, but it looks like direct connect works as long as one host acts as the fabric manager. There is also OPA VNIC, which is supposed to allow Ethernet over the Omni-Path fabric and bridging to other Ethernet networks? To be fair, I'm not sure I can make sense of it all given my very limited networking knowledge - that's why I asked if anyone here has experience playing with these cards.

Also, afaik OPA is kind of a non-open-standard clone of IB, and since IPoIB does work but is massively bottlenecked by the CPU, I'm not sure if the same applies here. So I'm wondering whether these cards aren't just nice paperweights outside of an actually massive OPA fabric with switches - usable but ultimately pointless.

Regarding the Xeons - Intel did make some with OPA fabric built in (Knights Landing and the Scalable F series), but other than that, I think Intel lists Xeon E5 v3 and later as able to use these add-in cards. I know the tech is basically dead, as Intel seems to have mothballed OPA v2 since Mellanox pushed IB to 200Gb/s+. Nevertheless, it's always interesting to find a second life outside of the typical use case for "old" enterprise hardware.
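For anyone finding this later, here's a rough sketch of what a back-to-back setup would presumably look like on Linux with the opa-fm / IFS software stack. I haven't actually run this myself, so treat the service name, the ib0 interface name and the addresses as assumptions that may vary with your distro and driver version:

```
# Load the OPA HFI driver and the IPoIB module (hfi1 ships in mainline kernels)
modprobe hfi1
modprobe ib_ipoib

# On ONE of the two hosts only: start the fabric manager so the
# back-to-back link gets brought up (service name from the opa-fm package)
systemctl enable --now opafm

# Check that the port reports LinkUp/Active once the FM has swept the fabric
opainfo

# The IPoFabric interface usually shows up as ib0; give each host an address
# (example addresses - use .2 on the other host)
ip addr add 192.168.100.1/24 dev ib0
ip link set ib0 up

# Optional: connected mode with a large MTU tends to help IPoIB-style throughput,
# assuming the driver supports it on OPA
echo connected > /sys/class/net/ib0/mode
ip link set ib0 mtu 65520

# Quick sanity check from the other host
ping 192.168.100.1
```

The key point being that the fabric manager only needs to run on one of the two hosts, since it configures both ends of the link.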