Mellanox Infiniband (20 Gbps) HBA for 90 bucks


cactus

Moderator
Jan 25, 2011
CA
So, I’m still trying to wrap my head around Infiniband and, to that end, have been doing more research but am getting nowhere. From what I’ve read, Infiniband can talk to Infiniband targets. There have been experiments/prototypes that have used Infiniband targets to connect to a large volume of disks, but I have yet to find an enterprise solution. From my understanding, Infiniband advertises itself to the operating system as a virtual NIC and a virtual storage adapter. I guess I expected to find a solution similar to Fibre Channel storage, allowing one to simply plug an external storage chassis into the adapter on the server instead of requiring a second computer running Solaris or the like acting as an iSCSI target.
Am I getting anything wrong?
Infiniband by itself gets you nothing except a packetized remote direct memory access interconnect known as RDMA. You need a protocol running on top of it to get storage functionality.

  • iSER - iSCSI using IP for control signals and RDMA for transfers
  • SRP - SCSI RDMA Protocol uses SCSI commands over RDMA
  • IPoIB - IP over Infiniband and then using SMB or NFS over IP
  • SMB Direct - Windows 8/2012 implementation of RDMA SMB
  • NFSoIB - I think this is added to NFSv4, I don't recall exactly
  • SDP - Sockets Direct Protocol, where a TCP socket is opened over RDMA; mostly used to accelerate non-storage client-server applications without redesigning them for native RDMA

I think that is most of them.

Out of all of those, using NFSv3 over IPoIB or NFSv4 on Linux/Unix, or SMB3 on Windows 8/2012, is going to be the simplest and most flexible option.
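
For anyone who wants to see what the NFS-over-IPoIB option looks like in practice, here is a minimal sketch. Everything in it is a placeholder example: it assumes an IPoIB interface is already up on both machines, with the server at 192.168.20.1, the client on the same subnet, and an export directory of /tank/share.

  # server side: export a directory to the IPoIB subnet
  # (the line below goes in /etc/exports)
  /tank/share  192.168.20.0/24(rw,async,no_subtree_check)
  exportfs -ra

  # client side: mount it over the IPoIB address like any other NFS share
  mount -t nfs -o vers=3 192.168.20.1:/tank/share /mnt/share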
 

dba

Moderator
Feb 20, 2012
San Francisco Bay Area, California, USA
So, I’m still trying to wrap my head around Infiniband and, to that end, have been doing more research but am getting nowhere. From what I’ve read, Infiniband can talk to Infiniband targets. There have been experiments/prototypes that have used Infiniband targets to connect to a large volume of disks, but I have yet to find an enterprise solution. From my understanding, Infiniband advertises itself to the operating system as a virtual NIC and a virtual storage adapter. I guess I expected to find a solution similar to Fibre Channel storage, allowing one to simply plug an external storage chassis into the adapter on the server instead of requiring a second computer running Solaris or the like acting as an iSCSI target.
Am I getting anything wrong?
For most STH readers, the ports on an Infiniband card, when used with the appropriate IPoIB driver and a bit of software called a subnet manager, look and work just like Ethernet ports, except that they are very fast and you plug them into an Infiniband switch instead of an Ethernet switch. The new ports appear in your list of network devices and you configure and use them exactly as you would Ethernet ports. The IPoIB driver acts as a translator so you can ignore the details: it "talks" IP networking to the operating system while actually running Infiniband under the covers.
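
To make that concrete, here is a minimal sketch of bringing up an IPoIB port and a subnet manager on a Linux host. The interface name ib0 and the 192.168.20.1/24 address are placeholder examples, and the service names follow the CentOS/RHEL 6-era OFED convention (opensm, plus ibstat from infiniband-diags); your distribution may differ.

  # load the IPoIB driver so the IB port shows up as a network interface
  modprobe ib_ipoib

  # give the new ib0 interface an address, just as you would an Ethernet port
  ip addr add 192.168.20.1/24 dev ib0
  ip link set ib0 up

  # exactly one subnet manager must run somewhere on the fabric
  # (on a host via opensm, or on a managed IB switch)
  service opensm start
  chkconfig opensm on

  # sanity check: the port state should show Active once the subnet manager sees it
  ibstat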

With Infiniband pretending to be Ethernet via the IPoIB driver, most people just use standard IP-based protocols to talk from a client machine to a storage server: NFS, SMB, and so on. These are familiar, easy to use, and very fast if done properly. Of course if you have a very special need for features not available via this simple setup or are a serious speed freak and have a great tolerance for complexity, you can then think about using some of the faster and more exotic protocols instead of IPoIB - cactus listed most of them.

That is pretty much all that you need to know in order to get started. Once you are up and running, it might make sense to learn more, but then again maybe not.
 

alan

New Member
Oct 24, 2013
Infiniband by itself gets you nothing except a packetized remote direct memory access interconnect known as RDMA. You need a protocol running on top of it to get storage functionality.

  • iSER - iSCSI using IP for control signals and RDMA for transfers
  • SRP - SCSI RDMA Protocol uses SCSI commands over RDMA
  • IPoIB - IP over Infiniband and then using SMB or NFS over IP
  • SMB Direct - Windows 8/2012 implementation of RDMA SMB
  • NFSoIB - I think this is added to NFSv4, I don't recall exactly
  • SDP - Sockets Direct Protocol, where a TCP socket is opened over RDMA; mostly used to accelerate non-storage client-server applications without redesigning them for native RDMA

I think that is most of them.

Out of all of those, using NFSv3 over IPoIB or NFSv4 on Linux/Unix, or SMB3 on Windows 8/2012, is going to be the simplest and most flexible option.
How do you set up SDP on CentOS 6? I want to use it for my PHP/MySQL connection.
 

dba

Moderator
Feb 20, 2012
San Francisco Bay Area, California, USA
How do you set up SDP on CentOS 6? I want to use it for my PHP/MySQL connection.
That's a pretty general question, so you'll want to start here:

https://www.openfabrics.org/images/docs/LinkedDocs/Installing_OFED_on_Linux_R1.pdf

Although I don't think that CentOS 6 is yet officially supported, RHEL6 is, so I'd just follow those instructions.

Remember that SDP uses IPoIB for addressing, so that must also be configured. You'll also need to confirm that you have a subnet manager running somewhere.
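
In case it helps, here is a rough sketch of the usual LD_PRELOAD approach once OFED is installed, not a tested recipe. The addresses, file paths, and the idea of pointing PHP at the server's IPoIB address are assumptions on my part; the ib_sdp module, the libsdp.so preload, and the /etc/libsdp.conf rules file come from the OFED documentation linked above.

  # load the SDP kernel module from the OFED stack
  modprobe ib_sdp

  # run unmodified binaries with libsdp preloaded so their TCP sockets
  # can be redirected to SDP; per-program/per-port rules go in
  # /etc/libsdp.conf (OFED installs a commented sample file)
  LD_PRELOAD=libsdp.so mysqld_safe --bind-address=192.168.20.1 &
  LD_PRELOAD=libsdp.so php /var/www/test.php

  # the PHP side must connect to the server's IPoIB address
  # (192.168.20.1 here), not a local unix socket, or there is
  # nothing for SDP to accelerate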
 