InfiniBand gets you nothing except a packetized direct-memory-access interconnect known as RDMA. You need a protocol running on top of it to get storage functionality.

So, I'm still trying to wrap my head around InfiniBand and, to that end, have been doing more research, but am getting nowhere. From what I've read, InfiniBand can talk to InfiniBand targets, and there have been experiments/prototypes that used InfiniBand targets to connect to a large volume of disks, but I have yet to find an enterprise solution. From my understanding, an InfiniBand adapter advertises itself to the operating system as a virtual NIC and a virtual storage adapter. I guess I expected to find a solution similar to Fibre Channel storage, allowing one to simply plug an external storage chassis into the adapter on the server, instead of requiring a second computer running Solaris or the like acting as an iSCSI target.
Am I getting anything wrong?
- iSER - iSCSI Extensions for RDMA: iSCSI using IP for control signals and RDMA for data transfers
- SRP - SCSI RDMA Protocol; carries SCSI commands directly over RDMA
- IPoIB - IP over InfiniBand, then running SMB or NFS on top of that IP layer
- SMB Direct - Windows 8/2012 implementation of RDMA SMB
- NFSoIB - I think this is added to NFSv4, I don't recall exactly
- SDP - Sockets Direct Protocol, where you can have a TCP socket opened over RDMA; more used to accelerate non-storage client-server applications without redesigning them for native RDMA
I think that is most of them.
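To make the iSER entry concrete: on a Linux initiator with open-iscsi, switching an iSCSI session onto the iSER transport looks roughly like this. The portal address and IQN below are made-up placeholders, not values from any particular setup:

```shell
# Load the iSER initiator module (part of the kernel RDMA stack)
modprobe ib_iser

# Discover targets over plain IP first (192.168.1.10 is a placeholder portal)
iscsiadm -m discovery -t sendtargets -p 192.168.1.10

# Switch the node record to the iSER transport, then log in
# (iqn.2012-01.example:storage is a hypothetical target IQN)
iscsiadm -m node -T iqn.2012-01.example:storage -o update \
         -n iface.transport_name -v iser
iscsiadm -m node -T iqn.2012-01.example:storage --login
```

After login the LUN shows up as an ordinary SCSI block device; only the data path changes from TCP to RDMA.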
Out of all of those, NFS (v3 or v4) over IPoIB on Linux/Unix, or SMB3 on Windows 8/2012, are going to be the simplest and most flexible options.
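As a rough sketch of that simplest path on a Linux client (the addresses and `filer:/export` server path are assumptions; `ib0` is the usual name of the first IPoIB interface):

```shell
# Bring up the IPoIB interface (kernel module ib_ipoib)
modprobe ib_ipoib
ip addr add 192.168.100.2/24 dev ib0
ip link set ib0 up

# Mount an NFSv3 export across the IPoIB network
# (192.168.100.1:/export is a hypothetical server and path)
mount -t nfs -o vers=3 192.168.100.1:/export /mnt/ib-storage

# If both ends support NFS over RDMA, the TCP layer can be bypassed:
# mount -t nfs -o rdma,port=20049 192.168.100.1:/export /mnt/ib-storage
```

The nice part is that nothing above the mount point cares whether the transport underneath is Ethernet, IPoIB, or RDMA.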