Are these Voltaire infiniband switches good?


Dajinn

Active Member
Jun 2, 2015
512
78
28
33
519571 B21 HP Voltaire Infiniband 4036 36PORT 4XQDR Managed Switch VLT 30111 0884420783428 | eBay

Looking to start branching out into InfiniBand for my lab. The seller countered with $300, which seems somewhat reasonable to me. Question is, are these switches any good? Do they have any specific needs, like proprietary hardware requirements and the like?

If this isn't any good can you guys suggest another IB switch? I wouldn't mind dropping down to DDR from QDR if it was a significant cost savings.
 
  • Like
Reactions: aij

Chuckleb

Moderator
Mar 5, 2013
1,017
331
83
Minnesota
I have many, many Mellanox and Voltaire switches; they work well for InfiniBand. I only use Mellanox cards and they are fine. Mellanox is the largest IB manufacturer, so chances are high that you will use Mellanox silicon.
 
  • Like
Reactions: Patriot and Dajinn

Patriot

Moderator
Apr 18, 2011
1,451
792
113
Chuckleb said:
I have many, many Mellanox and Voltaire switches; they work well for InfiniBand. I only use Mellanox cards and they are fine. Mellanox is the largest IB manufacturer, so chances are high that you will use Mellanox silicon.
Power usage stats?
And noise levels :) ...
I haven't even finished migrating to 10gb...
but $300 for a QDR switch is very tempting.
 

Chuckleb

Moderator
Mar 5, 2013
1,017
331
83
Minnesota
Noise-wise they are very loud; you are dealing with 40Gb, no way around that. Honestly, there are certain use cases for IPoIB or IB, most of them around HPC or where latency really matters. We migrated off of IB for some systems since bridging 10GbE to regular GbE is so easy, as are the drivers. If 10GbE cards weren't so cheap, we would have stuck with IB.

If you are going to switch, these are nice if you don't mind the noise. Not sure on power since they are top-of-rack for our HPC. IB meshing is really easy, even easier than Ethernet: no LAG or other odd settings to worry about, basically plug and play. The hard part is the tuning and whatnot. A subnet manager in the switch isn't a big deal, since a software-based manager is easy anyway. Basically a yum install opensm, chkconfig opensm on, service opensm start. Wham. Done.
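In practice that opensm setup is just a few commands on a RHEL/CentOS 6-era box (assuming the distro's stock opensm and infiniband-diags packages):

# install the software subnet manager, enable it at boot, start it
yum install opensm
chkconfig opensm on
service opensm start

# sanity check: query the fabric for an active subnet manager
sminfo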

Honestly, 10GbE is fast enough for most home applications, you'd need to push hard to get to 40GbE. Mind you if you get the right QDR cards, they can run at either 10GbE or 40GbE, so you are dual purposed. Most 40GbE cards are basically IB cards using QSFP+. That's what we are using to uplink to our Gnodal 10/40GbE switches.
 

Dajinn

Active Member
Jun 2, 2015
512
78
28
33
10GbE might be fast enough, but 300 bucks is a drop in the bucket for 40Gb bandwidth. Will I ever saturate it? Probably not for at least 10-20 years. But it's nice to have the technology on hand for those 20 years, as opposed to still dealing with 1GbE and LACP :D:p
 

Aluminum

Active Member
Sep 7, 2012
431
46
28
I jumped on a cheap 12-port managed FDR 56Gb switch on fleabay; there's another unmanaged 18-port up for like $600ish. Crazy deals, as mine was barely a year old and a current-gen product. You can quiet this kind of gear down if you are willing to open it up and do some light modding: a 120/140/200mm fan (or two) at sane ~1k RPM blowing down on the components with the cover removed will do the job nicely for home use and your sanity. You can still rack it if you leave enough room between gear for this method.

They are meant to run 24/7 at full load, racked with many other similar 1U devices inside a datacenter, so like most performance server gear the typical cooling design of multiple 40mm industrial fans running at 6k RPM works against our home-use purposes.
 

Dajinn

Active Member
Jun 2, 2015
512
78
28
33
Does anyone know if the switch in the original post of this thread requires some kind of special serial cable? I googled this model and found a thread on this forum where someone was talking about "good luck finding a special serial cable for it". The product documentation just states it's a DB9 RS232 cable, which I have and which worked fine when connecting to the Dell switch I have that only has a serial console...
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,641
2,058
113
Will these work with the Mellanox ConnectX-2 VPI and/or ConnectX-3?

If so which cables are the cheapest / most reliable?

My plan would be to connect NVMe storage to ESXi hosts -- any considerations / issues doing so?
(Right now I run NVMe locally for optimal performance, but this seems affordable for perf/$)
 
Last edited:

Chuckleb

Moderator
Mar 5, 2013
1,017
331
83
Minnesota
Yes, they should work fine with any VPI card. Mellanox was originally an IB-only shop, adding the Ethernet stuff later. Thus, all their gear is basically dual-mode (except for the EN cards). The copper QSFP cables on eBay all work; most that you find will be the Mellanox-branded cables. HP and Gore work. Some of the stuff from China works well. Contact the vendors and negotiate for bulk pricing; that's what I usually do if I buy more than 5 or so. For the ESX stuff, get as new a card as you can. Some of the ConnectX-3 cards come with either 10GbE or 40GbE as the other half, so you just have to be careful when buying.

Latency of IB is really low...
 
  • Like
Reactions: T_Minus

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,641
2,058
113
Thanks @Chuckleb. I got Mellanox ConnectX-2 VPI Dual-Port 40Gb Adapter Card HCA-30024 700Ex2-Q x4 during that deal a few months ago for <$100 total, just waiting for a switch to come up to utilize the 40Gb without requiring big $$/fiber...

This looks like it will work well, as long as it plays nice with VMWARE :)


Looks like my ConnectX-3 cards are EN, and I have 2 other ConnectX EN; I will unload the ConnectX and keep the X3s for SFP+ fiber.

Just put in some bids for cables, and the switch :) even if I don't use it a lot at first this could be fun :D

Now to figure out how to bridge IB and Ethernet -- or whether I really need to if I'm just using the IB for my file server(s)... :D
 
Last edited:

Chuckleb

Moderator
Mar 5, 2013
1,017
331
83
Minnesota
Bridging can kind of suck. I think the 4036E switches can bridge, but that may be using EoIB instead of regular Ethernet as well. This is the other reason that we switched away from IB for one of our larger systems: we didn't need the performance of the 40Gb and would rather have the Ethernet access instead. They make devices for it, but this allowed direct access to each individual machine.

Of course if you really had gumption, the dual-port cards that are VPI can do both at once, so you can do 10GbE on one port and 40Gb IB on the other port.
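For anyone wondering how that split is actually set, on a Linux host with the mlx4 driver the port protocol is exposed through sysfs. Rough sketch only; the PCI address below is just an example, and MLNX_OFED's connectx_port_config script does the same thing interactively:

# find the card and check the current protocol of port 1
lspci | grep Mellanox
cat /sys/bus/pci/devices/0000:03:00.0/mlx4_port1

# example: port 1 as 40Gb IB, port 2 as 10GbE
echo ib > /sys/bus/pci/devices/0000:03:00.0/mlx4_port1
echo eth > /sys/bus/pci/devices/0000:03:00.0/mlx4_port2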
 
  • Like
Reactions: T_Minus

PnoT

Active Member
Mar 1, 2015
650
162
43
Texas
Last edited:

Dajinn

Active Member
Jun 2, 2015
512
78
28
33
Thanks for that PnoT.

I wonder if there are any potential issues or conflicts to be aware of when using HP firmware instead of Mellanox firmware? Or can Mellanox firmware be loaded with no problem?

As an aside, 10GbE would be a painless way to increase bandwidth without having to mess with IPoIB or other end-of-life issues. But is there a reason 10GbE switches are so expensive?
 
Last edited:

Chuckleb

Moderator
Mar 5, 2013
1,017
331
83
Minnesota
You shouldn't have any issues using an HP-firmware IB card connected to a Voltaire/Mellanox switch. Also, as far as I can tell, you can always flash the HP or other OEM cards to the official Mellanox firmware. You just need to identify which card you have and grab the right ROM.
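For reference, cross-flashing is normally done with Mellanox's MFT tools, roughly like this (the mst device name and firmware image are placeholders for whatever mst status reports and whichever ROM matches your card):

# start the firmware tools and list the device
mst start
mst status

# check current firmware version and PSID (identifies the OEM variant)
flint -d /dev/mst/mt26428_pci_cr0 query

# burn the stock Mellanox image; -allow_psid_change is needed when
# moving from an OEM (e.g. HP) PSID to the Mellanox one
flint -d /dev/mst/mt26428_pci_cr0 -i fw-ConnectX2.bin -allow_psid_change burn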

10GbE is still "relatively" new in the market and there isn't as much of it to go around, I think. You're just starting to see the next generation of 40GbE and 100GbE come out, so the market isn't as saturated. On the other hand, IB has been around and heavily used for 10+ years. You had IB onboard nodes and IB for the big HPC clusters due to the low latency, etc. These were used everywhere, and the evolution of the IB world was really fast: they went 10->20->40->56 at a rate of every 1-2 years or so. This is why 2 yrs ago the ConnectX (10/20Gbps) cards were cheap and now the QDR stuff is cheap. In another year or two, you'll see a flood of FDR hit the market.

That's my theory at least.
 
  • Like
Reactions: Dajinn

DASHIP

New Member
May 4, 2016
15
0
1
54
I think he may have meant using HP firmware on the Voltaire switch (not the HCA). From what I can tell, HP has dropped the links to their Voltaire firmware, and you have to get it from Mellanox now, by opening a support account.

I would recommend trying to get a 4036E if possible, as it has two 10G/1G Ethernet ports that can be LAG'd together, which will give you up to 20G out of your IB fabric into your Ethernet network. I was recently able to obtain one for $575 after some negotiation.

On the HCA topic, look for the ConnectX-3 cards with dual ports and VPI, as described above. Those will let you run IB on one port and Ethernet on the other. Pay close attention to the part number, as there are a variety of capabilities. For example, I was able to obtain the MCX354A-FCBT for about $250/ea. These cards support IB up to FDR speed and Ethernet up to 40G speeds.

However, note that the physical interface is QSFP+, which is not connectable to SFP+. This makes it difficult to connect to most 10G Ethernet switches, because they generally have SFP+ ports. (The reason for the incompatibility is that QSFP ports have four data lanes, while SFP+ has a single lane.) There are breakout cables with one QSFP male to four SFP+ male connectors, but I am unsure whether the ConnectX-3 adapters are compatible and could use one of the four to connect to a 10G Ethernet switch port. Has anyone tried? Here is an example of such a cable: Mellanox QSFP to SFP+ Cable Adapter, Part ID: MAM1Q00A-QSA - Colfax Direct

Note that some Ethernet switches (typically older ones) use the CX4 connections for 10G Ethernet. CX4 has four data lanes, and there are direct connect CX4 <-> QSFP cables.
Mellanox Certified Refurbished MCC4N26C-001 Passive Copper Cable 4X CX4 To QSFP 20Gb/S 26 AWG 1M
 
Last edited:

epicurean

Active Member
Sep 29, 2014
785
80
28
Hi Chuck,
Could you give some exact pointers on how to enable 10GbE and 40Gb on each of the dual-port Mellanox ConnectX-2 VPI cards that I have?
Is it possible to set up the 40Gb ports of, say, 3 machines as IB without a switch, and the 10GbE ports on a regular 1/10GbE network? With all 3 machines being ESXi servers?

much thanks
 

DASHIP

New Member
May 4, 2016
15
0
1
54
Here is a Mellanox document on setting the port modes of a VPI HCA:
http://www.mellanox.com/related-doc...dware _for_VPI_Operation_Application_Note.pdf

It is possible to set up three hosts to talk to each other using IB without a switch, in a "triangle" network. However, it uses two ports on each host: Host A has one port connected to Host B and the other connected to Host C; Host B has one port to Host A and the other to Host C; Host C is connected to Hosts B and A in the same fashion. Now all you need is to run a subnet manager on the network. There are subnet managers that run on Linux (OpenSM running on a VM in the VMware cluster), and there is one that runs as a VIB on ESXi, but only tested up to ESXi 5.5, I think. The web site is in French, so you have to translate: Infiniband@home : votre homelab à 20Gbps - Hypervisor.fr
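One wrinkle with the switchless triangle: each back-to-back cable is its own little IB subnet and needs a subnet manager that can see it, so the A-B, A-C and B-C links each need an opensm instance bound to one of their ports. A rough sketch on a Linux box running OpenSM, with placeholder GUIDs taken from ibstat output:

# list the port GUIDs on the host running the subnet manager
ibstat | grep "Port GUID"

# start one opensm daemon per directly-cabled port (example GUIDs)
opensm -B -g 0x0002c903000a1b2c
opensm -B -g 0x0002c903000a1b2d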

Although these days, I have seen Voltaire 4036 40G switches with built-in subnet managers for as low as $250. However, they are loud... I have two and wouldn't mind selling one.
 

epicurean

Active Member
Sep 29, 2014
785
80
28
A little off topic, but when I connect my cables to an Oracle InfiniBand 36 switch, there is no link light on the server or the switch side. ...ESXi 6 and XPEnology on the PC sides.

what could be the problem?

anyone?
 
Last edited:

aij

Active Member
May 7, 2017
101
43
28
DASHIP said:
Although these days, I have seen Voltaire 4036 40G switches with built-in subnet managers for as low as $250. However, they are loud... I have two and wouldn't mind selling one.
How loud are these things?

I found a spec sheet, which lists "typical" power usage at 152W, though I expect that is with typical HPC usage rather than home use. No mention of noise though, nor in the user manual.

I have a chance to get one for $77 shipped, but if it's as loud as the LB4M (60 dB), that would put it outside my WAF tolerances.