40Gb/s networking for a 4-node cluster


sovking

Member
Jun 2, 2011
84
9
8
Hi guys
I need some advice about 40Gbps networking.
I have a small cluster composed of three EPYC 2 ESXi servers acting as compute nodes, each with some NVMe disks on board, and one storage node with hybrid storage (the CPU there is an E5-2680 v2).

The VMs on the compute nodes frequently need to exchange a lot of data with low latency, and sometimes I also need to dynamically move VMs from one node to another.

At the beginning I was thinking about connecting them directly with dual-port 25Gbps Ethernet NICs, but that technology is quite new, so it is not cheap. From what I can see, I can probably buy used 40Gbps NICs for less money. But here it seems there are multiple choices.

The first choice is whether to 1) connect the three compute servers directly at 40Gbps, since the two ports per NIC allow this, and add another 10Gbps NIC for the storage node (or simply use the existing 1Gbps link on the motherboard), or 2) use a 40Gbps switch to connect all four servers and maybe more (of course this will cost more).
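To make option 1) concrete, here is a rough sketch of the full-mesh wiring I have in mind (the hostnames and subnets are only hypothetical examples):

```python
from itertools import combinations

# Three compute nodes, each with a dual-port NIC: every pair gets its own
# direct 40GbE link, so a full mesh needs 3 links in total.
nodes = ["esxi1", "esxi2", "esxi3"]

# Example addressing: one small point-to-point subnet per direct link.
for i, (a, b) in enumerate(combinations(nodes, 2)):
    print(f"link {i}: {a} <-> {b}  ->  10.40.{i}.0/30")
```

Each node would end up with two point-to-point links, one to each of the other compute nodes, and the storage node would stay on its own slower link.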

The second choice is about which NIC to use. My personal preference is for Mellanox cards because of their broad compatibility with ESXi, FreeBSD and Linux. But there are several models and several OEM versions (like the HP ones) available. In your opinion, which models should I look at?

Thanks!
 

Fallen Kell

Member
Mar 10, 2020
57
23
8
I have had lots of good luck with the HP-branded ConnectX-3 VPI cards. Please note that most cards cannot run both 40GbE links at full bandwidth at the same time. The dual ports are meant for link failover/redundancy, not additional bandwidth (simple math will show that the PCIe bus cannot push that much data to the card given the number of lanes and the PCIe generation).
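As a rough illustration of that math (a back-of-the-envelope sketch, assuming the card sits in a PCIe 3.0 x8 slot, which is what a ConnectX-3 uses, and ignoring protocol overhead):

```python
# PCIe 3.0: 8 GT/s per lane with 128b/130b line encoding.
per_lane_gbps = 8.0 * (128 / 130)
lanes = 8

slot_gbps = per_lane_gbps * lanes        # ~63 Gb/s raw, before TLP overhead
print(f"PCIe 3.0 x8 slot:  {slot_gbps:.1f} Gb/s")
print(f"One 40GbE port:    40 Gb/s -> fits")
print(f"Two 40GbE ports:   80 Gb/s -> more than the slot can carry")
```

So a single 40GbE port can be driven at line rate, but two at once cannot, even before protocol overhead is counted.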

As such, you will really need to look at a small switch with QSFP+ ports. There are lots of options, but I am not an expert. Personally I have a Brocade ICX6610, but it only has 2 QSFP+ ports that can be used as 40GbE links (the other 2 ports are limited to 4x10GbE breakout connections). I think the bigger ICX6650 has 4 such ports and won't break the bank (it is end of life, but that is fine in most lab situations).
 

sovking

Member
Jun 2, 2011
84
9
8
I'm looking at the MCX354A-FCBT cards and the HP version 649281-B21, with 1m DAC cables like the MC2207130-001 (these should be FDR cables to get 40/56Gbps Ethernet). I don't know if there are other OEM versions available.
Some questions:
  • Looking on eBay I see several listings: are all of them genuine, or is there a chance of getting a fake card?
  • I see different hardware revisions: A2, A5, A7... are there any significant differences?
  • If the card I get is the HP version, can I flash Mellanox firmware onto it?
 

itronin

Well-Known Member
Nov 24, 2018
1,234
793
113
Denver, Colorado
I'm looking at the MCX354A-FCBT cards and the HP version 649281-B21, with 1m DAC cables like the MC2207130-001 (these should be FDR cables to get 40/56Gbps Ethernet). I don't know if there are other OEM versions available.
Some questions:
  • Looking on eBay I see several listings: are all of them genuine, or is there a chance of getting a fake card?
  • I see different hardware revisions: A2, A5, A7... are there any significant differences?
  • If the card I get is the HP version, can I flash Mellanox firmware onto it?
This thread probably has most of your answers. I've linked into the flashing part, but you should read the whole thread regarding card revisions and flashing...
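Not a substitute for reading that thread, but as a rough sketch of what the cross-flash usually looks like (this assumes the Mellanox Firmware Tools / mstflint package is installed; the device name and firmware image below are only placeholders, and burning the wrong image can brick a card):

```python
import subprocess

# Sketch only: wraps the "mst" and "flint" tools from Mellanox Firmware Tools (MFT).
DEVICE = "/dev/mst/mt4099_pci_cr0"   # example MST device; check `mst status` for yours
IMAGE = "fw-ConnectX3-example.bin"   # placeholder -- use the image matching your card

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["mst", "start"])                  # load the MST access modules
run(["flint", "-d", DEVICE, "query"])  # shows current firmware version and PSID (HP vs Mellanox)

# Cross-flashing an OEM (e.g. HP) card to stock Mellanox firmware changes the PSID,
# so flint must be told explicitly to allow it:
run(["flint", "-d", DEVICE, "-i", IMAGE, "-allow_psid_change", "burn"])
```

The `flint query` step is also a quick way to sanity-check a used card: the PSID and firmware version tell you which OEM variant you actually received.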
 

sovking

Member
Jun 2, 2011
84
9
8
Thanks, that's very interesting. By the way, at the moment I can't find cheap deals on the QCBT model in Europe... its price is quite similar to the FCBT model.

Any suggestions for a switch that would let me get the most out of them?