10GbE IPoIB (InfiniBand) bridge


luckylinux

New Member
Hello all,

For my living room, where I have quite a few servers running (silently!), I was thinking of doing a custom 10GbE IPoIB setup. In my lab room I could use the Voltaire 4036 (non-E version, unfortunately), which I still need to find a way to silence. It's way too noisy even for the server room :(.

Anyway ... back to the topic ... I was thinking of using a few "Mellanox ConnectX-2 VPI Dual Port Networking Card MHRH2A-XSR" to interconnect these servers. Because the servers also need to be connected to the lab room, I was thinking of using the following setup:

Router/Switch:
- Proxmox VE (Debian-based) virtualization platform doing the switching/routing
- Intel/Dell X540-T2 uplink (because unfortunately the cabling is already in place, so I really need RJ45)
- Some other Mellanox dual QSFP+ cards (I have a bunch of Mellanox HCA-30024) connected to the clients using QSFP+ to 4x SFP+ adapters

Other servers/clients (about 5-10 of those):
- Proxmox VE (Debian-based) virtualization platform
- 1 x MHRH2A-XSR card (found them on eBay at $10 each :rolleyes:)

Could this setup run? Every server/client will need to use IPoIB. On top of that, the master router/switch will also have to do either routing or bridging (which, according to some posts here on ServeTheHome, may be difficult/impossible to do).

Could this setup work relatively easily?
I had considered other alternatives, but for one reason or another none of them was really convenient in the end. All of these, of course, without a switch ;)
- Intel/Dell X540-T1 or T2 -> expensive (~$140 off eBay with shipping + duties + customs) but cheap cables
- Mellanox 10GbE EN ConnectX-2 SFP+ -> expensive dual-port cards (~$80 second hand) + expensive cables

The biggest problem with this setup is that I'm also limited in the number of PCIe slots available on each motherboard. A maximum of two x8 PCIe 3.0 slots is usable on each platform. Therefore "just" using a bunch of cheap single-port SFP+ cards is not really an option once you factor in the cost of PCIe risers, extenders, splitters, ...

The IB solution would only require a QSFP+ card in the master switch. A dual Mellanox 700ex2-q (Voltaire HCA-30024) would allow me to directly connect up to 8 servers using two QSFP+ to 4x SFP+ adapters. I would say this is hard to beat for the price :cool:

What kind of performance may I expect on an Intel Xeon E3-1231 v3 platform (16 GB / 32 GB RAM)?
For me the biggest open point is if/how IPoIB can be bridged :(. Or, if routed, how much of a performance penalty should I expect? I don't plan to achieve full 10GbE networking; even, say, 3-4 Gbps would be good ;).
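
For what it's worth, the reason bridging usually gets ruled out is that the Linux bridge only accepts Ethernet-type ports, while IPoIB interfaces report a different link type, so they get refused and the traffic has to be routed instead. A minimal sketch of that check (interface names are just examples, assuming the usual /sys/class/net layout):

```python
#!/usr/bin/env python3
"""Why IPoIB ports can't join a Linux bridge (sketch).

The in-kernel bridge only accepts Ethernet-type ports (ARPHRD_ETHER = 1).
IPoIB interfaces report ARPHRD_INFINIBAND (32), so adding ib0 to vmbr0
fails and the traffic has to be routed instead. Interface names here are
examples only.
"""
from pathlib import Path

ARPHRD_ETHER = 1        # regular Ethernet NICs (e.g. the X540 ports)
ARPHRD_INFINIBAND = 32  # IPoIB interfaces (ib0, ib1, ...)


def link_type(ifname: str) -> int:
    """Return the ARPHRD link type the kernel reports for an interface."""
    return int(Path(f"/sys/class/net/{ifname}/type").read_text())


for ifname in ("eth0", "ib0"):  # example names, adjust to your machines
    try:
        t = link_type(ifname)
    except FileNotFoundError:
        print(f"{ifname}: not present on this host")
        continue
    verdict = "bridgeable" if t == ARPHRD_ETHER else "must be routed (bridge will refuse it)"
    print(f"{ifname}: link type {t} -> {verdict}")
```

So on the Proxmox box that would mean routing between vmbr0 and the ib interfaces rather than bridging them, if I understand it right.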

Thank you guys ;)
 

fohdeesha

Kaini Industries
Mellanox PCIe cards do not support QSFP to 4x SFP breakout; that's only a feature on switches. You can use a QSFP-to-SFP adapter to convert a port to a single SFP+ port, but that's it.

With how cheap 10GbE/40GbE Ethernet has gotten, you won't find many people still dicking with InfiniBand here. ConnectX-2 10GbE cards are $15 and 40GbE Mellanox cards are $35 to $40. I would just spend $90 on an Ethernet switch with 4x 10GbE ports and use Ethernet, or $200 on a switch with 16x 10GbE ports if you need more.
 

Rand__

Well-Known Member
The IB solution would only require a QSFP+ card in the master switch. A dual Mellanox 700ex2-q (Voltaire HCA-30024) would allow me to directly connect up to 8 servers using two QSFP+ to 4x SFP+ adapters. I would say this is hard to beat for the price :cool:
I don't think that an HCA can use a 4:1 split cable.
 

luckylinux

New Member
With how cheap 10GbE/40GbE Ethernet has gotten, you won't find many people still dicking with InfiniBand here. ConnectX-2 10GbE cards are $15 and 40GbE Mellanox cards are $35 to $40. I would just spend $90 on an Ethernet switch with 4x 10GbE ports and use Ethernet, or $200 on a switch with 16x 10GbE ports if you need more.
Mmm, OK. I wonder which 10GbE Ethernet switch exists for that cheap, though.

The cheapest I know of is the Ubiquiti US-16-XG, which is ~$600 and gives you 12 SFP+ ports and 4 10GbE RJ45 ports.
Which *silent* switch could you find, even second hand, for just $90-$200?

And then I wonder what to do with the InfiniBand cards and switch that I have in the other room. I still haven't started setting that up :(
 

fohdeesha

Kaini Industries
Brocade ICX6450 - $100 on eBay, near silent, draws 25 watts, 4x 10GbE SFP+ and 24/48 1GbE copper ports. The SFP+ ports are also compatible with this - Mikrotik 6-Speed RJ-45 Module Up To 10Gbps Speeds - if you want to utilize an existing 10GbE over copper/RJ45 connection.

Brocade ICX6610 - $200, 16x 10GbE ports, 2x 40GbE ports, 24/48 copper 1GbE ports. A little louder, draws around 100W, also compatible with the RJ45 SFP+ adapter.

If you have the InfiniBand cards I'm thinking of, they are VPI, meaning they can run in Ethernet mode too, so you can just reuse them.
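
If you want to script that port flip, here's a rough sketch, assuming the inbox mlx4_core driver (which exposes a writable mlx4_portN file under the card's PCI device when the card/firmware allows port-type changes). The PCI address below is a placeholder, and the change needs root:

```python
#!/usr/bin/env python3
"""Sketch: flip a ConnectX-2 VPI port from InfiniBand to Ethernet mode.

Assumes the inbox mlx4_core driver, which exposes a writable mlx4_portN
file under the card's PCI device when the card/firmware allows port-type
changes. The PCI address below is a placeholder - find yours with
`lspci | grep Mellanox`. Needs root, and the setting does not survive a
driver reload unless made persistent (e.g. via module options).
"""
from pathlib import Path

PCI_ADDR = "0000:03:00.0"  # placeholder, replace with your card's address
PORT = 1                   # 1 or 2 on a dual-port VPI card
MODE = "eth"               # "ib", "eth" or "auto"

port_file = Path(f"/sys/bus/pci/devices/{PCI_ADDR}/mlx4_port{PORT}")

if not port_file.exists():
    raise SystemExit(f"{port_file} not found - wrong PCI address, or this "
                     "card/driver does not support port-type changes")

print("current mode:", port_file.read_text().strip())
port_file.write_text(MODE + "\n")
print("new mode:    ", port_file.read_text().strip())
```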

signed, your local STH Brocade shill
 

Rand__

Well-Known Member
$200 might be a little on the low side, but you can get used Mellanox switches relatively cheap if you keep an eye out; I saw a 6036 recently for €500 ...

Brocade ICX6610 - $200, 16x 10GbE ports, 2x 40GbE ports, 24/48 copper 1GbE ports. A little louder, draws around 100W, also compatible with the RJ45 SFP+ adapter.
Nice :)
 

luckylinux

New Member
Brocade ICX6450 - $100 on eBay, near silent, draws 25 watts, 4x 10GbE SFP+ and 24/48 1GbE copper ports. The SFP+ ports are also compatible with this - Mikrotik 6-Speed RJ-45 Module Up To 10Gbps Speeds

Brocade ICX6610 - $200, 16x 10GbE ports, 2x 40GbE ports, 24/48 copper 1GbE ports. A little louder, draws around 100W, also compatible with the RJ45 SFP+ adapter.
Wow, that's cheap. Although add like 50% more for shipping and customs to Switzerland :(. It may be worth buying like 5 of these in one go, though :).

Too bad I already bought 3 x Intel X540 for that very purpose :(. The cheapest RJ45-to-SFP+ module I found was like $200, I think from FS ...

Still, the InfiniBand switch is not going to be very useful at this point :(.

EDIT: How noisy is the 6610?
 

fohdeesha

Kaini Industries
Wow, that's cheap. Although add like 50% more for shipping and customs to Switzerland :(. It may be worth buying like 5 of these in one go, though :).

Too bad I already bought 3 x Intel X540 for that very purpose :(. The cheapest RJ45-to-SFP+ module I found was like $200, I think from FS ...

Still, the InfiniBand switch is not going to be very useful at this point :(.

EDIT: How noisy is the 6610?
The SFP+ to RJ45 module I linked is only about $50 and works great; many users here use it. It's just a matter of whether your particular switch supports it. So far most seem to; I know the Brocades do, and most Dells and Netgears.

It's hard to quantify the noise (I don't have a meter); maybe a little less than a Dell R710. If it's the only piece of equipment in a room you'll hear it, but in a rack with a server or two I don't think you'd be able to tell it's there. It definitely makes fan noise though, so if you're looking to build a totally silent setup it probably wouldn't be the best choice.

EDIT: ah yes, being in Switzerland certainly changes the pricing :/
 

luckylinux

New Member
It's hard to quantify the noise (I don't have a meter); maybe a little less than a Dell R710. If it's the only piece of equipment in a room you'll hear it, but in a rack with a server or two I don't think you'd be able to tell it's there. It definitely makes fan noise though, so if you're looking to build a totally silent setup it probably wouldn't be the best choice.
That's what I thought :(. This setup is for the living room and for the office. Noise has to be pretty low.
 

luckylinux

New Member
The other alternative at this point, instead of daisy-chaining SFP+ cards, would be an ATX SFP+ Linux router/switch. I have some FX-8150 or FX-8350 ATX boards where I could put 4 or 5 PCIe cards in. The trouble then most likely becomes power consumption :(. I don't have other ATX hardware that could accommodate that many PCIe cards, unfortunately ...
 

Rand__

Well-Known Member
OK, so back to the drawing board - how many boxes do you need to connect, and what's the budget (if you sell/don't need the X540s)?
 

luckylinux

New Member
Well, I'm currently expanding and buying lots of X10 boards and E3 v3 CPUs second hand. Quite cheap and, IMHO, a good price/performance/power-consumption ratio.

Budget: I honestly don't know. The US-16-XG didn't look bad, but I thought it didn't make much sense because I would need to purchase one for each room (I have 3 rooms: the living room where the WAN router is, the office room and the home lab room). So 3 x $600 would be far too much ... considering in the office I only have 3 PCs (+ the Linux switch using the X540 that I will be installing). And given the compatibility issues the US-16-XG faces, I'm not sure I want to go that route (DAC cable issues, max length 3 m, ...). The InfiniBand solution looked good for the office, but I guess that's just going to be a plaything now :(.

I need to connect at 10GbE speed (or maybe not as much, say about 3Gbps :D):
- Living room: 6-12 servers
- Office room: 3 PCs
- Home lab room: 6-15 servers

The X540s will be used to connect the different rooms together, as the cabling is already in place and there is a patch panel at the entrance of the building. The issue is mainly with the living room because of the need for low noise.

The solution proposed above doesn't seem too bad, assuming I put some low-noise adapters on my 80mm fans. Power consumption is about on par with second-hand gear, I guess; I will probably need to undervolt some. Performance using Linux bridging should be no higher than 3-4Gbps, but I can live with that. The main thing is noise. Additionally, with such a solution, if part of the hardware fails it's a limited cost to replace (well, I also have some spares at hand :)). If a $400 switch fails, it's another story :(. But even at $60 for a dual-port 10GbE card (and I'm lucky if I get it at that price), I'd need about 4 of those to get an 8-port switch = $240. I already have the rest of the hardware :).

The main goal is not to achieve true 10GbE, just quite a bit more than 1GbE. 3 or 4Gbps would be plenty in my case.
 

Rand__

Well-Known Member
Do you need 3-4Gbps point to point or in total? How about looking for a cheaper 2.5/5GbE switch? (Not sure what client-side requirements exist there, though; I've never looked into that.)
 

luckylinux

New Member
Are there 2.5/5GbE switches :eek:? I thought it was kind of a "proprietary" protocol found in the latest "cheap" (and crappy, since they seem to fail quite early) 2.5/5/10GbE adapters from ASUS.

I would say I need 3-4Gbps from every client to the master, but then of course the master will become the bottleneck, since the Linux kernel cannot really bridge more than 3-4Gbps. So I guess the answer to your question is, unfortunately, "in total".
 

luckylinux

New Member
:eek:

May I ask what you are doing with that many servers? And in the living room??
For the moment I'm just building them (I only have 4 running).
It's all part of a distcc cluster to compile Gentoo for each architecture I have (especially embedded, ARM, ...) :cool:

As for the "why in the living room": it's because I'm renting an apartment and cannot knock down walls to do as I please :D. So for now it's split about equally between the server room and the living room. I want the bedroom free of servers ;)
 

Rand__

Well-Known Member
The question is whether you might be able to get away with LAG/LACP to combine multiple 1GbE links into 3 or 4Gbps, depending on your specific needs ...
 

luckylinux

New Member
The question is whether you might be able to get away with LAG/LACP to combine multiple 1GbE links into 3 or 4Gbps, depending on your specific needs ...
I don't think so, Rand__. I already thought about LACP and bought cards for it (several Intel-based quad-port ones), but it doesn't really scale well when only one client needs more than 1Gbps (exception: maybe Samba, see below).

As far as I know:
- Samba doesn't support multi-channel well on Linux (although this may have changed; the last news I heard was about "experimental support")
- ZFS send doesn't support multiple streams/threads, so no more than 1Gbps per client
- NFS also won't give more than 1Gbps per client

And Samba is rarely the thing that needs 10GbE here. 1Gbps is OK, although the problem is more the "uplink" (the connection between the different rooms is 1Gbps at the moment and goes through the patch panel). This means that if 10 servers were backing up simultaneously right now, performance would be something like 100Mbps each :S. This is why I planned on using the X540 to increase the "uplink" speed to something like 4Gbps ...
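
To illustrate why LACP doesn't help a single backup stream: the bonding driver hashes each flow onto exactly one slave link, so one ZFS-send or NFS transfer stays at 1Gbps no matter how many 1GbE ports are in the LAG, while many clients at once can spread out. A toy model of a layer3+4-style hash (simplified, not the exact kernel formula; the IPs/ports are made-up examples):

```python
#!/usr/bin/env python3
"""Toy model of why LACP doesn't speed up a single transfer.

This mimics, in simplified form (not the exact kernel formula), the
layer3+4 transmit-hash policy of the Linux bonding driver: every flow
(src IP, dst IP, src port, dst port) hashes onto exactly one slave link,
so a single backup stream never exceeds one link's 1Gbps, while many
clients can spread across the links. All IPs/ports are made-up examples.
"""
import ipaddress

SLAVES = 4  # e.g. one quad-port Intel 1GbE card in a single bond


def pick_slave(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> int:
    """Map a flow onto one of the bonded links (layer3+4 style, simplified)."""
    h = int(ipaddress.ip_address(src_ip)) ^ int(ipaddress.ip_address(dst_ip))
    h ^= src_port ^ dst_port
    h ^= (h >> 16) ^ (h >> 8)
    return h % SLAVES


# One ZFS-send / NFS stream: every packet takes the same slave -> capped at 1Gbps.
print("single stream -> slave", pick_slave("10.0.0.11", "10.0.0.1", 51515, 2049))

# Several clients backing up at once: flows spread out, so the aggregate scales.
for i, client in enumerate(("10.0.0.11", "10.0.0.12", "10.0.0.13", "10.0.0.14")):
    print(f"client {client} -> slave {pick_slave(client, '10.0.0.1', 40000 + i, 2049)}")
```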

That being said, the Mikrotik router I spoke about doesn't look too bad: 380 CHF for the router + 75 CHF for the SFP+ adapter (not sure I need that, since I already have the X540 which can handle the uplink task). Surely not the same features as a US-16-XG, but fewer compatibility issues and much cheaper (380 CHF vs 600 CHF) for what I need :)
 

luckylinux

New Member
Does anybody have a better idea than buying 3 Mikrotik units, which would total 1140 CHF (3 switches) + 228 CHF (3 S+RJ10 adapters)? That's still about 450 CHF less than going the US-16-XG route. Or do you believe the US-16-XG is overall the better unit?

I have about 10 devices from Ubiquiti and they all perform well. What worries me about the US-16-XG is the poor SFP+ compatibility :(.