As a single user, is InfiniBand worth the money?

Dajinn

Active Member
Jun 2, 2015
512
78
28
32
I'm just now treading into the waters of network attached storage and all of these new connectors and networking protocols I didn't know existed now...exist...to me, lol.

Anyway, I was attracted to the prospect of a multi-node VM host with separate SM847s for storage; however, I'll likely be using them as network attached storage.

Really, as far as heavy usage is concerned, the most active reading from storage I'll be doing (outside of testing things in VMs) will be Plex streaming for me and a few family members. That's it.

That being said, I don't want to run into a bottleneck if I decide I want to do some more heavy lifting.

Also, if I go with a 4-node Supermicro server, I'm going to be limited to one low-profile PCIe device per node. I want to avoid having to buy all these different network adapters when I want to upgrade. I was considering a few options:

- Quad gigabit LP NIC, teaming them for 4Gbps over Ethernet
- 10GbE NIC
- DDR InfiniBand adapters (DDR switches are relatively cheap compared to their QDR counterparts)
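For the teaming option above, here's a minimal Linux bonding sketch (interface names, the bond name, and the address are assumptions; the switch ports must also be configured for LACP). One caveat: teaming raises aggregate bandwidth, not per-flow bandwidth, so a single transfer still tops out at ~1Gbps.

```shell
# Sketch: aggregate four 1GbE ports with LACP (802.3ad) via iproute2.
# eth0..eth3 and the 192.168.10.2/24 address are placeholders.
ip link add bond0 type bond mode 802.3ad
for i in 0 1 2 3; do
    ip link set "eth$i" down            # slaves must be down before enslaving
    ip link set "eth$i" master bond0
done
ip link set bond0 up
ip addr add 192.168.10.2/24 dev bond0
```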

Thoughts?
 
Reactions: T_Minus

TeeJayHoward

Active Member
Feb 12, 2013
376
111
43
I played with both InfiniBand and fiber. In the end, I like fiber. InfiniBand is foreign to me; it doesn't play by the rules I grew up with, so my InfiniBand network is just 10Gb IPoIB these days. Also, IB really shines with RDMA, so if you do decide to do InfiniBand, make sure your entire setup works with RDMA. For me, this meant no ESXi, and only server OSes for the Windows systems.

tl;dr: If I could do it all over again, I'd take the money I spent on my 40Gb IB network and instead spend it on a 10GbE network.
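For anyone going the IPoIB route mentioned above, the Linux bring-up is roughly this (interface name and address are assumptions; a subnet manager must be running somewhere on the fabric, either on a managed switch or via opensm on one host):

```shell
# Sketch: minimal IPoIB bring-up on Linux.
modprobe ib_ipoib                      # load the IPoIB driver; ib0 appears after this
ip addr add 10.0.0.2/24 dev ib0        # placeholder address
ip link set ib0 up
# Connected mode allows a much larger MTU (up to 65520) and usually helps
# IPoIB throughput; datagram mode is limited to a 2044-byte MTU.
echo connected > /sys/class/net/ib0/mode
ip link set ib0 mtu 65520
```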
 

markpower28

Active Member
Apr 9, 2013
415
103
43
TeeJayHoward said: "If you do decide to do Infiniband, make sure your entire setup works with RDMA. For me, this meant no ESXi."
ESXi does support two types of RDMA: SRP and iSER.
 
Reactions: ehorn

dswartz

Active Member
Jul 14, 2011
592
75
28
Depends on what IB adapters you have. I got a couple of ConnectX-3 EN adapters. They are 10GbE/40GbE out of the box, and ESXi supports them as such. ESXi even presents iSER storage adapters for me. I can't take advantage of that yet, because my storage backend (SCST-based) doesn't support iSER yet.
 
Reactions: T_Minus

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,394
1,831
113
CA
What's the cheapest 40Gb switch with like 4 or 8 ports you can buy? Are they still big $$ ??
 

dswartz

Active Member
Jul 14, 2011
592
75
28
I haven't seen anything with fewer than 8 ports. Leastways not on eBay, which is the only source of affordable HW :)
 
Reactions: T_Minus

Entz

Active Member
Apr 25, 2013
269
62
28
Canada Eh?
If you are looking at needing more than 4 ports, it is hard to beat InfiniBand. Switches (DDR) are quite a bit cheaper.

RDMA is amazing; IPoIB is a pig IME, in both CPU usage and overhead. CX4-to-QSFP cables can also be extremely expensive.

I started with IB and moved to pure Ethernet, but I have fewer nodes than you are planning. 10GbE is just cleaner/easier, especially when integrating with your existing network (you would need to bridge from Ethernet to IPoIB, which is slow and doesn't always work).

Do you have any adapters now? You may find that round-robin iSCSI is more than enough for what you are doing.
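The round-robin iSCSI suggestion looks roughly like this on Linux with open-iscsi and multipathd (the portal IPs and the WWID are placeholders; the idea is two GbE sessions to the same target with I/O spread across both):

```shell
# Sketch: round-robin iSCSI over two paths.
iscsiadm -m discovery -t sendtargets -p 192.168.10.10   # portal on path 1
iscsiadm -m discovery -t sendtargets -p 192.168.20.10   # portal on path 2
iscsiadm -m node -l                                     # log in on both portals

# /etc/multipath.conf fragment: treat both sessions as one group and
# round-robin I/O across them. WWID is a placeholder for the real LUN ID.
cat >> /etc/multipath.conf <<'EOF'
multipaths {
    multipath {
        wwid                 360000000000000000e00000000010001
        path_grouping_policy multibus
        path_selector        "round-robin 0"
    }
}
EOF
systemctl restart multipathd
```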
 

Dajinn

Active Member
Jun 2, 2015
512
78
28
32
Nah, I don't have any adapters now. I actually don't really have anything now; I'm just trying to save myself future pains. I think I might just get the barebones SS6026TT-HDTRF from Mr Rackables, because each node can support two full-size PCIe cards, and I want that expandability if I choose to use either IB or a quad NIC or something. I also wanted to play around with a Dell J23, which will require an external HBA card, so I need more expandability than what a 2U 4-node server can offer me.

Don't DDR IB ports support link aggregation? At least if I went with a cheaper IB option, I could squeeze 20Gbps out of 2 ports per node, and then just attach those to a DDR switch that my storage servers would also be attached to using 2 DDR ports.
 

dswartz

Active Member
Jul 14, 2011
592
75
28
I'm running 10gbe but using connectx-3 EN cards in point to point mode. Haven't needed a switch yet, thankfully...
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,394
1,831
113
CA
I'm running 10GbE too, and have a switch, but was hoping to do 40Gb between ESXi hosts and datastores.
 

Entz

Active Member
Apr 25, 2013
269
62
28
Canada Eh?
Dajinn said: "Don't DDR IB ports support link aggregation? At least if I went with a cheaper IB option I could squeeze 20 Gbps out of 2 ports per node."
No. DDR is already 20Gbps (RDMA), or 16Gbps max over IPoIB (and usually lower in practice). You may be able to bond at a higher level, but anything Ethernet-related needs eIPoIB, which is not widely supported; this is also what makes bridging a PITA. Generally you will just use a single link (or 2 for redundancy). You can do things like round-robin.
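The two-ports-for-redundancy setup above can be sketched with an active-backup bond over the IPoIB interfaces (names and address are assumptions; LACP-style modes don't apply here, so active-backup failover is the usual choice, and it adds no bandwidth):

```shell
# Sketch: active-backup bond over two IPoIB ports for failover only.
ip link add bond0 type bond mode active-backup
ip link set ib0 down; ip link set ib0 master bond0
ip link set ib1 down; ip link set ib1 master bond0
ip link set bond0 up
ip addr add 10.0.0.2/24 dev bond0      # placeholder address
```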