Supermicro X10DRT-PIBF InfiniBand to Ethernet


tinfoil3d

Recently got these nodes and wanted to share a simple hint on converting them to 40GbE.
Because this system really only has 2x 1GbE and a single half-height x16 slot available for expansion, you'd probably want to use the built-in QSFP+ port as Ethernet. Totally makes sense.
And because there's no lspci listing for this motherboard anywhere, especially for its Mellanox chip, here it is:
02:00.0 0200: 15b3:1003
This is the standard ConnectX-3 chip, the MT27500, aka
Ethernet controller: Mellanox Technologies MT27500 Family [ConnectX-3]
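
For reference, that listing can be reproduced with lspci itself; a quick sketch, filtering by Mellanox's vendor ID (15b3) and by the 02:00.0 address from above:

Code:
# list all Mellanox devices, with numeric vendor:device IDs
lspci -nn -d 15b3:
# or inspect just this one device
lspci -nn -s 02:00.0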

Because of that, you can use the open-source mstflint toolkit to switch the port from VPI to ETH with one command:
mstconfig -d 02:00.0 set LINK_TYPE_P1=ETH
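
If you want to double-check the setting before and after, you can also query it (same mstflint tooling; 1 = IB, 2 = ETH, 3 = VPI/auto-sense):

Code:
# show the current link type of port 1
mstconfig -d 02:00.0 query | grep LINK_TYPE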

That's it. Power-cycle the node, and this frees up the PCIe slot for other things like M.2 or whatever else you want there.
 

sko

On FreeBSD they work OOTB:

Code:
# pciconf -lv | grep mlx -A3
mlx4_core0@pci0:2:0:0:  class=0x028000 rev=0x00 hdr=0x00 vendor=0x15b3 device=0x1003 subvendor=0x15b3 subdevice=0x0149
    vendor     = 'Mellanox Technologies'
    device     = 'MT27500 Family [ConnectX-3]'
    class      = network
Just load the mlx4en driver (via kldload, or kld_list in rc.conf) and the mlxen interface shows up. For IB, just load the mlx4ib driver. There's no need to ever fumble with the firmware...
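
For anyone following along, the load step looks like this (a sketch, per FreeBSD's mlx4en(4); interface naming may differ on your system):

Code:
# load the Ethernet driver right now
kldload mlx4en
# make it persistent across reboots
sysrc kld_list+="mlx4en"
# the interface then shows up as mlxen0
ifconfig mlxen0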

We are currently running 6 of those nodes, all using the CX3 NIC. BTW, they also play well with 40G/10G converters (e.g. https://www.fs.com/products/178060.html?attribute=77750&id=2469868) if you only have 10G links available.
 

tinfoil3d

Interesting, I've never tried it. I guess I've grown too old to experiment with these things!
I used to build kernels back in the day.
 

tinfoil3d

Oh yeah, and the same chip should also support 56GbE with a supported Mellanox cable. I don't have one, so I was never able to verify that.
 

sko

You *could* also build the driver into the kernel (like any other modular driver), but why bother when it's that easy to load it dynamically...

The only thing that doesn't seem to work with those 40G CX3 NICs is SR-IOV - at least they don't show up in /dev/iov (only the ix NICs do), but I haven't looked into that yet. It might also be a limitation of those specific onboard NICs or their firmware...
With 'standard' 10G PCIe CX3 NICs (CX3111A, CX3121A), SR-IOV also works OOTB if enabled in the BIOS; they show up in /dev/iov and you can create/manage VFs via iovctl.
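
For reference, VF creation goes through an iovctl config file per iovctl.conf(5); a minimal sketch, where the PF device name (mlxen0) and VF count are hypothetical:

Code:
# /etc/iov/mlxen0.conf
PF {
        device : "mlxen0";   # hypothetical PF device name
        num_vfs : 4;
}
DEFAULT {
        passthrough : true;  # pass VFs through to guests
}

Then create the VFs with:

Code:
iovctl -C -f /etc/iov/mlxen0.conf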
 

Fallen Kell

I believe SR-IOV on ConnectX-3 cards needs to be enabled via mlxconfig, if your card supports it. I believe there were issues with the single-port cards (not sure if yours is single-port or not), as their firmware did not expose SR-IOV as an option. For those, you would need to flash a custom firmware to enable it. There is a thread about this issue:


Unfortunately I don't have any experience with performing the fix or the flash, as all of my ConnectX-3 cards are dual-port ConnectX-3 Pro VPI cards (and chips).
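
If a card's firmware does expose the option, the toggle would look something like this (a sketch using the same mstflint tooling as the first post; SRIOV_EN and NUM_OF_VFS are the usual ConnectX-3 parameter names):

Code:
# check whether the firmware exposes SR-IOV at all
mstconfig -d 02:00.0 query | grep -E 'SRIOV|VFS'
# enable it and set the number of VFs, then power-cycle
mstconfig -d 02:00.0 set SRIOV_EN=1 NUM_OF_VFS=4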
 