Needed: 10GbE mezzanine NICs for Dell C6220 II


danb35

Member
Nov 25, 2017
The tl;dr is pretty much in the thread title: I'm trying to find three or four two-port NICs that will connect to the mezzanine slot on the nodes of my C6220 II, communicate via Ethernet at not less than 10 Gbit/sec with my existing SFP+ infrastructure, be supported by Debian 11 (specifically, Proxmox), and include (or have available) the appropriate mounting bracket.

Some background: I'm using three nodes of a Dell PowerEdge C6220 II as a Proxmox cluster, complete with a Ceph pool on SATA SSDs on each node. Each node has a Chelsio T420-CR card with one port for the main network and the second for a dedicated Ceph network. The SATA SSDs are becoming performance-limiting, so I want to replace them with NVMe SSDs. That means installing a PCIe card to connect the NVMe SSDs, and since the Chelsio currently occupies the node's only PCIe slot, it means I need to do something else for the NICs. The obvious solution, IMO, is to use a mezzanine card for this.

Dell listed an Intel dual-port 10 GbE SFP+ NIC for this server, and there are plenty to be had on eBay at reasonable prices. I bought a couple that came with brackets, only to realize yesterday morning, when I was trying to install one, that it was the wrong bracket. And now that I have a better idea of what the right bracket would look like, I don't see any (on eBay or otherwise) that have it.

So then some poking around on eBay led me to these:

...and I'm interested--I don't need 40 Gbit/sec speeds, but they're available, they're cheap, they have the right bracket, and I can buy QSFP+ optics from fs.com (not as inexpensively as SFP+ optics, but they're still available). So: three NICs, six optics, a handful of fiber patch cables (which I already have), a suitable 40G switch (the wallet is wincing at this point), and I can patch that in to my existing 10G infrastructure somehow, I think. A suitable switch (8+ QSFP+ ports, plus at least one SFP+) sounds like an expensive proposition, though, even used.

So a little more research leads me to the idea of QSFP+-to-SFP+ adapters--plug one into the NIC, plug an SFP+ optic into it, and Bob's your proverbial uncle. But then I came across a question as to whether these NICs would support such an adapter. I also see some discussion suggesting that some Mellanox cards don't support Ethernet, and I'm not able to find any definitive specs on these particular NICs to tell whether they're among them.
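
Side note for anyone following along: once one of these cards is in a Linux box, sysfs will tell you whether each port is actually running Ethernet (as opposed to InfiniBand) and at what speed. A rough Python sketch along those lines (the sysfs paths are standard, but treat this as an untested illustration; device and interface names will differ per system):

Code:
#!/usr/bin/env python3
# Rough, untested sketch: list link state/speed for Ethernet interfaces and
# the link layer reported for any Mellanox (mlx4/mlx5) InfiniBand devices.
from pathlib import Path

def read(path):
    try:
        return Path(path).read_text().strip()
    except OSError:
        return "n/a"   # e.g. speed is unreadable while the link is down

# Ethernet view: /sys/class/net/<iface>/speed is reported in Mb/s.
for iface in sorted(Path("/sys/class/net").iterdir()):
    if iface.name == "lo":
        continue
    print(f"{iface.name}: state={read(iface / 'operstate')}, "
          f"speed={read(iface / 'speed')} Mb/s")

# InfiniBand view: each port of an mlx4/mlx5 device reports its link layer,
# which should read "Ethernet" if the port is in Ethernet mode.
ib_root = Path("/sys/class/infiniband")
if ib_root.exists():
    for dev in sorted(ib_root.iterdir()):
        for port in sorted((dev / "ports").iterdir()):
            print(f"{dev.name} port {port.name}: "
                  f"link_layer={read(port / 'link_layer')}")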

So I see three ways to get where I need to go, but each of them has problems (or at least unknowns that I don't know how to address). Any help narrowing them down would be appreciated.
 

danb35

Member
Nov 25, 2017
I ordered one of the Mellanox cards, and it arrived today. It has a model number that wasn't visible in the seller's photos: MCQH29-XFR. Searching for that led me to a manual:

That manual indicates that the card will do Ethernet at 10 Gbit/sec, which is a good sign. Windows 10 sees the card:
[screenshot: the card showing up in Windows 10]
I have a QSFP+-to-SFP+ adapter on the way which should be here Monday. More to follow.
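
For reference, once the card is in a Proxmox node, the negotiated link can be checked with ethtool; a quick Python wrapper along these lines (a rough sketch; the interface name is just a placeholder) would show whether the port comes up at 10 Gb/s through the adapter:

Code:
#!/usr/bin/env python3
# Rough sketch: print the negotiated speed/port/link status reported by
# `ethtool` for a given interface. The default name below is a placeholder.
import subprocess
import sys

iface = sys.argv[1] if len(sys.argv) > 1 else "enp130s0"  # placeholder
out = subprocess.run(["ethtool", iface], capture_output=True,
                     text=True, check=True).stdout
for line in out.splitlines():
    line = line.strip()
    if line.startswith(("Speed:", "Duplex:", "Port:", "Link detected:")):
        print(line)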
 

jabuzzard

Member
Mar 22, 2021
danb35 said:
The tl;dr is pretty much in the thread title: I'm trying to find three or four two-port NICs that will connect to the mezzanine slot on the nodes of my C6220 II, communicate via Ethernet at not less than 10 Gbit/sec with my existing SFP+ infrastructure, be supported by Debian 11 (specifically, Proxmox), and include (or have available) the appropriate mounting bracket.
It's probably a bit late, but having gone through the process at the start of the year of upgrading an entire rack full of C6220s with mezzanine-slot Intel X520-DA2 based cards, I can provide some information. This was an upgrade of our teaching HPC cluster to switch it from using TrueScale InfiniBand for storage to 10Gbps Ethernet.

Long story, but it was originally part of a cluster that used Lustre for storage, provided over InfiniBand. When the main cluster switched to GPFS we moved the storage to 10Gbps Ethernet and bodged the teaching cluster to keep using InfiniBand. However, since the TrueScale InfiniBand cards are not supported in RHEL 8, it was either ditch the teaching cluster or upgrade it to 10Gbps Ethernet.

It was done with mezzanine cards to keep the teaching cluster homogeneous, as the first 16 nodes are connected to C410x PCIe expansion chassis with HIC cards that take up the PCIe expansion slot, and those nodes have also been upgraded with some cheap refurbished NVIDIA P100s so the undergraduates can do machine learning projects. Linux these days gives an X520-DA2 a different device name as a mezzanine card than as a PCIe slot card, which makes deployment messy; hence the desire to do it all with mezzanine cards.
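
To illustrate what I mean about the naming (a rough Python sketch, not our actual deployment tooling): the name comes from where the card sits on the PCI bus, which you can see by mapping each interface back to its PCI address and driver:

Code:
#!/usr/bin/env python3
# Rough sketch: map each network interface to its PCI address and driver via
# sysfs. An X520-DA2 in the mezzanine slot sits at a different PCI address
# than one in the PCIe slot, so systemd's predictable naming gives it a
# different interface name.
import os
from pathlib import Path

for iface in sorted(Path("/sys/class/net").iterdir()):
    dev = iface / "device"
    if not dev.exists():        # skip lo and other virtual interfaces
        continue
    pci_addr = os.path.basename(os.path.realpath(dev))
    driver = os.path.basename(os.path.realpath(dev / "driver"))
    print(f"{iface.name:12s} pci={pci_addr} driver={driver}")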

Anyway, these are the Dell part numbers. I would note that the brackets are different between the C6100 and C6220, but the mezzanine cards and risers work in either.

X53DF is the dual-port Intel X520-DA2 10Gbps mezzanine card.
TCK99 is another part number for a dual-port Intel X520 mezzanine card; I think it's an older one from a C6100, but it would work.
C2G78 is the correct bracket to hold an X53DF/TCK99, with cutouts for the SFP+ cages.
HH4P1 is the mezzanine riser.

I have a bunch of spare HH4P1 risers: I bought a batch of X53DF cards with C2G78 brackets that were listed as not including the risers, except when they arrived they had them, so the separately purchased risers are now surplus to requirements. Note that there is a small metal standoff that goes at the back of the card that you may or may not get. Frankly, I don't think it is essential, as the front of the card is held firmly by the main bracket, the riser card supports the length of the card, and nothing is working loose IMHO. The TCK99 part is, I think, from a C6100 and has a small amount of onboard flash; however, I have never seen one in real life.

There are Mellanox ConnectX-3 mezzanine cards that could potentially be persuaded to do 40Gbps Ethernet. I have no idea if they actually can:

9FVFH is the ConnectX-3 single-port mezzanine card.
3CYRK is the ConnectX-3 dual-port mezzanine card.

Note that the bracket from the ConnectX-3 cards is identical to that used with the QLogic/Intel TrueScale cards. At a pinch, it will also work with the X53DF cards, as the QSFP+ holes are the same height, only a bit wider; I had a couple configured like that while I was sourcing extra brackets. There is also a dual-port ConnectX-2 card, part number XXM5F, which I think is from a C6100; however, I don't think these can be persuaded to do Ethernet. I have spare brackets for the InfiniBand cards, and some of the TrueScale InfiniBand cards too. I should have some ConnectX-3s somewhere too.

Anyway, this is the only source of the cards and brackets left that I am aware of anywhere in the world:


The web page says it does not include the JKM5M mezzanine riser card, which is what fooled me, as the cards then arrived with HH4P1 risers anyway. I think the JKM5M is from the C6320; I have a bunch of those too.

The final point to note, and something I have not been able to resolve yet, is updating the firmware on the cards. The Intel bootutil64e program tells me that I have a wide range of firmware versions on the cards I sourced, but no update from the Dell website for the C6100, C6220, C6220 II, or C6320 works; they all say the update is incompatible with my system. Everything works as-is, but my OCD side would like to bring them all to the same version. Apart from anything else, we like to keep firmware levels the same across all nodes of the same type in our cluster, so we can be sure any issues are not down to differing firmware levels.
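
For comparing versions across nodes, a small script along these lines works well enough (a rough sketch, not our actual tooling; it assumes ethtool is installed and picks out the ixgbe ports):

Code:
#!/usr/bin/env python3
# Rough sketch: collect `ethtool -i` output for every interface and print the
# firmware version and PCI bus address of the ixgbe (X520) ports, so the
# spread of firmware versions is easy to see (run per node, e.g. over
# ssh/pdsh).
import subprocess
from pathlib import Path

def drvinfo(iface):
    out = subprocess.run(["ethtool", "-i", iface],
                         capture_output=True, text=True)
    info = {}
    for line in out.stdout.splitlines():
        key, sep, val = line.partition(":")
        if sep:
            info[key.strip()] = val.strip()
    return info

for iface in sorted(p.name for p in Path("/sys/class/net").iterdir()):
    info = drvinfo(iface)
    if info.get("driver") == "ixgbe":        # only the X520 ports
        print(f"{iface}: firmware={info.get('firmware-version')} "
              f"bus={info.get('bus-info')}")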
 