Not sure whether you were reacting to my project, and this is a bit of a late reply, but I'm happy with the little InfiniBand network I'm running. I'm only using it for access to my network drives at the moment, but I am running games off the M.2 drives in my server, which deliver data at around 2 GB/s; that's more than adequate and roughly 20x what 1GbE can manage. I have an iSCSI drive that looks local to the Windows VM I use for gaming.
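If you want to sanity-check that kind of throughput yourself, something like this quick sketch run inside the VM does the job. The path is made up, and you'd want a file bigger than your RAM so the page cache doesn't flatter the numbers:

```python
# Rough sequential-read test against a file on the iSCSI-backed drive.
# The path below is only an example; point it at any large file on that volume.
import time

PATH = r"D:\games\big_file.bin"   # hypothetical path on the iSCSI disk
CHUNK = 8 * 1024 * 1024           # read in 8 MiB chunks

def read_speed(path: str) -> float:
    """Return sequential read throughput in MB/s."""
    total = 0
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:  # unbuffered raw reads
        while True:
            block = f.read(CHUNK)
            if not block:
                break
            total += len(block)
    return total / (time.perf_counter() - start) / 1e6

print(f"{read_speed(PATH):.0f} MB/s")
```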
The main upside of InfiniBand is the RDMA, I think. CPU load on the iSCSI target is negligible, which is really nice when you want to move storage away from the client machine.
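You can see that pretty clearly if you watch CPU on the target while a client is pulling data. This is the sort of throwaway sampler I'd leave running on the server (assuming it's a Linux box, since it just reads /proc/stat):

```python
# Crude CPU-busy sampler for the iSCSI target box: leave it running while the
# client hammers the disk, and compare against a plain-TCP (non-RDMA) run.
# Linux only; parses the aggregate "cpu" line in /proc/stat.
import time

def cpu_times():
    with open("/proc/stat") as f:
        values = list(map(int, f.readline().split()[1:]))
    idle = values[3] + values[4]      # idle + iowait
    return idle, sum(values)

def busy_percent(interval: float = 1.0) -> float:
    idle1, total1 = cpu_times()
    time.sleep(interval)
    idle2, total2 = cpu_times()
    dt = total2 - total1
    return 100.0 * (dt - (idle2 - idle1)) / dt if dt else 0.0

while True:
    print(f"cpu busy: {busy_percent():5.1f}%")
```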
I'm vaguely considering moving a whole Windows install to an iSCSI setup, but that would only really get interesting if I wanted to boot the same VM on multiple machines. It's a solution looking for a problem at the moment, other than scratching my "it can be done, so it must be done" itch. I'll probably do it that way when I move to new hardware, though. Right now the limiting factor is that there aren't enough PCIe lanes in my server for all the M.2 storage I need if I want full bandwidth for each drive; the rough lane math is sketched below.
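Back-of-the-envelope version of that lane budget, with assumed numbers rather than my actual board:

```python
# PCIe lane budget sketch. All numbers are assumptions: ~24 usable CPU lanes
# on a typical consumer platform, x4 per NVMe drive for full bandwidth, and
# roughly 0.985 GB/s per PCIe 3.0 lane. In practice the GPU and the
# InfiniBand/40GbE NIC want lanes too, which shrinks the budget further.
LANES_AVAILABLE = 24
LANES_PER_DRIVE = 4
GB_PER_LANE = 0.985

drives = LANES_AVAILABLE // LANES_PER_DRIVE
print(f"{drives} drives at full x4 speed, "
      f"~{LANES_PER_DRIVE * GB_PER_LANE:.1f} GB/s each")
```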
I'm finally getting around to setting up my 40GbE as well. Not terribly necessary or anything, but it keeps me off the street.