Which is the better 100G NIC? Mellanox ConnectX-4 or Intel E810


Stephan

Well-Known Member
Apr 21, 2017
1,085
845
113
Germany
But the switch is InfiniBand, right? So what do you mean, no InfiniBand?
My main point was just your use case; I'm just curious about use cases for InfiniBand instead of regular Ethernet.
The switch can do InfiniBand, Ethernet, and VPI (auto), same with the card, but I only use Ethernet. The cable I mentioned is FDR 56G, aka fourteen data rate, which I suppose was originally designed for InfiniBand. But Ethernet can also be run on the port at that speed with auto-negotiation off: 14 Gbps x 4 lanes = 56 Gbps, although I have trouble pushing a CX3 looped back to itself over 45 Gbps using four threads.
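(For reference, a loopback test like that can be run with iperf3 and parallel streams. This is only a sketch, the addresses are assumptions, and on a single Linux host the kernel will short-circuit the traffic in software unless you isolate the two ports, e.g. with network namespaces.)

Code:
# server bound to the first CX3 port (address assumed)
iperf3 -s -B 192.168.56.1

# client bound to the second port, four parallel streams, 30 second run
iperf3 -c 192.168.56.1 -B 192.168.56.2 -P 4 -t 30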
 

uberguru

Active Member
Jun 7, 2013
467
32
28
@MountainBofh @Maddox @Stephan


OK, now I have installed the Mellanox CX-5 card in the Dell R730xd and the card is not showing up.
I connected the 40G DAC cable and I can see the light blinking, so it sees the link, but the Dell server is not showing this card at all.

The only card showing up is the onboard NIC shown in the screenshot below; the PCIe Mellanox card does not appear.

What do I do to fix this?


[screenshot: the server's NIC list showing only the onboard NIC]
 

gea

Well-Known Member
Dec 31, 2010
3,489
1,371
113
DE
I cannot help with the Dell, but that leads to a question for those who have tested the Intel E810:

Mellanox supports IB and Ethernet. If you only need Ethernet and RDMA, e.g. for SMB Direct, wouldn't the Intel E810 be the easier choice to set up? I ask because the quad-port E810 (4 x 25G) with 4 DAC connections would be a nice choice for a small high-performance SMB workgroup with a ksmbd or Windows SMB Direct server and 4 clients.
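(For whoever tries it: on the Windows side it is at least easy to verify that SMB Direct/RDMA is really in use. A rough sketch, assuming a current Windows Server or Pro for Workstations build and a driver that exposes RDMA:)

Code:
# elevated PowerShell, on client and server
Get-NetAdapterRdma                # is RDMA enabled on the adapter?
Get-SmbClientNetworkInterface     # "RDMA Capable" should be True on the client
Get-SmbServerNetworkInterface     # same check on the server side
Get-SmbConnection                 # dialect should be 3.x during a transfer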
 

hmw

Well-Known Member
Apr 29, 2019
650
271
63
The dual-port CX-5 SKUs can be cross-flashed with the CX-5 Ex firmware, which enables PCIe 4.0 support :D
Any links for this? So one just flashes the -EDAT firmware onto an -ECAT SKU? Is there zero difference between the two versions?
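(If it works like the CX3 procedure later in this thread, the cross-flash would presumably be a PSID-override burn with flint. Sketch only, Linux MFT shown; the mst device name and firmware file are placeholders, and I have not tried this on a CX-5 myself:)

Code:
mst start
mst status                          # find the device, e.g. /dev/mst/mt4119_pciconf0
flint -d <mst-device> query         # note the current PSID and firmware version first
flint -d <mst-device> -i <cx5-ex-EDAT-firmware>.bin -allow_psid_change burn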
 

MountainBofh

Beating my users into submission
Mar 9, 2024
395
289
63
I've seen the thread but do you have actual successful examples of CX5 to CX5-ex flashing? @i386 said it does work but would still like to read more about CX5 specifically before risking a $175 NIC ;)
I don't have any examples of CX5s being flashed myself.
 

blunden

Well-Known Member
Nov 29, 2019
975
314
63
I've seen the thread but do you have actual successful examples of CX5 to CX5-ex flashing? @i386 said it does work but would still like to read more about CX5 specifically before risking a $175 NIC ;)
I'm pretty sure I've seen examples of that on here. I don't know in what thread though. You can probably find it if you search the forum. :)

EDIT: Took 10 seconds to find. :D

 
  • Like
Reactions: hmw

Drew442_Drew

New Member
Jan 2, 2021
10
1
3
Some notes:

- E810
clearly marketed as basic connectivity; don't expect high performance
RDMA is disabled with EPCT port configs, i.e. 4x25G mode (a quick way to check this is sketched after these notes)
PCIe slot bifurcation is required in some configs
no RoCE v1 compatibility, which is fine if you have no existing v1 deployed
no mention of SMB Direct in the datasheets, but plenty of performance complaints online, so it apparently works

- CX4
no mention of RoCE v2 on the datasheets; it does support ETS and PFC, and RoCE v2 reportedly works
nothing like Intel's EPCT, but apparently the inverse is possible, i.e. a 100G card into 4x25G switch ports, possibly only on Mellanox switches

I try not to use Intel (any of them); they're not meant for high performance applications...
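(The RDMA point above can be checked quickly on Linux once the card is in the desired EPCT port config; a sketch, assuming the ice and irdma drivers plus rdma-core are installed, with the device name as a placeholder:)

Code:
# one RDMA link per working E810 port should show up here
rdma link show
ibv_devices

# transport details (RoCE v2 GIDs etc.) for one device
ibv_devinfo -d <rdma-device> -v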
 

gea

Well-Known Member
Dec 31, 2010
3,489
1,371
113
DE
Some notes:

- E810
RDMA is disabled with EPCT port configs, i.e. 4x25G mode
Are you sure?
This is critical, as the 4 x 25G quad-port E810 is the interesting one for a small high-performance workgroup, e.g. with SMB Direct/RDMA in a switchless setup with DAC cables, e.g. for 4/8K video editing with 4 Windows workstations.

I looked at the Intel E810 specs and found the following regarding port counts:

Code:
Note: RDMA is not supported when the E810 is configured for >4-port operation.
LAN and RDMA traffic can be handled only if the EMP code runs.

High Port Count RDMA Control: Enable/Disable alternative configuration request for 5 ports or higher.
Low port count RDMA Control:  Enable/Disable alternative configuration request for 4 ports or lower.

There are two sets of firmware loaded for the E810. The first is for the Embedded Management
Processor (EMP). This firmware is first to load and is required for all E810 deployments. It is optionally
followed by loading Protocol Engine firmware required for RDMA operation.
This indicates that 4 x 25G is OK for RDMA, while 8 x 10G is not.

see also Mellanox vs Intel
 
  • Like
Reactions: nexox

Drew442_Drew

New Member
Jan 2, 2021
10
1
3
Are you sure?
This is critical, as the 4 x 25G quad-port E810 is the interesting one for a small high-performance workgroup, e.g. with SMB Direct/RDMA in a switchless setup with DAC cables, e.g. for 4/8K video editing with 4 Windows workstations.

I looked at the Intel E810 specs and found the following regarding port counts:

Code:
Note: RDMA is not supported when the E810 is configured for >4-port operation.
LAN and RDMA traffic can be handled only if the EMP code runs.

High Port Count RDMA Control: Enable/Disable alternative configuration request for 5 ports or higher.
Low port count RDMA Control:  Enable/Disable alternative configuration request for 4 ports or lower.

There are two sets of firmware loaded for the E810. The first is for the Embedded Management
Processor (EMP). This firmware is first to load and is required for all E810 deployments. It is optionally
followed by loading Protocol Engine firmware required for RDMA operation.
This indicates that 4 x 25G is OK for RDMA, while 8 x 10G is not.

see also Mellanox vs Intel
Am I sure? No, I'm not. I haven't used the 2x100G cards; my only E810 cards are the 4x25G SFP28 variants. I'm going off my recent research for a new storage cluster build.

For 4x 25G it seems you may be correct; I don't recall seeing what you found. I was looking at new hardware, and the CX6 Lx, even though it was more expensive, seemed like the better proposition to me.

Given the use case, I'd take a look at the host chaining feature of the CX cards. I think the higher single-channel throughput could benefit the use case, but you'll add some availability caveats. The physical location of the machines, user restart/shutdown permissions, and cluster-controlled updates would mostly solve that, though. There's a lot to be said for the simplicity of the 4x25G setup, but there are probably caveats in that topology we haven't thought of.

I just realised that the old token ring topology we left behind decades ago is essentially back as host chaining. lol

P.S. I'm currently running a 3-host Ceph cluster with CX3 40G dual-port cards, which is somewhat of a cross between the 4x25G topology and the host chaining topology (hosts route traffic to other hosts, and each host can connect to every other host at L2).
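(For anyone curious about host chaining: on ConnectX-5/6 it is exposed as an mlxconfig option, roughly as below. Sketch only; the parameter name and the mst device path should be checked against the firmware release notes for your card:)

Code:
mst start
mlxconfig -d /dev/mst/mt4119_pciconf0 query | grep -i CHAIN    # current setting
mlxconfig -d /dev/mst/mt4119_pciconf0 set HOST_CHAINING_MODE=1
# reboot / power-cycle so the new firmware configuration takes effect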
 

lavalake

New Member
May 4, 2024
7
2
3
I have a lot of CX454A-ECAT. Solid card.
But mostly I run the CX354A. I've bought tons of them for $10. They work well with random el cheapo DACs on fleabay.

If you don't know how to turn any *dual-port* CX354A into 40GbE, here's what I derived from reading a lot of posts here:

Essential points:
1. Recommend using Windows 10 to do the flashing. Windows 11 may work, but not guaranteed.
2. Whether your CX354A is a QCBT or FCBT dual-port variant, you can use this firmware:
2A. Do not use it on single-port variants.
2B. Requires WinMFT 4.22.1.406: https://www.mellanox.com/downloads/MFT/WinMFT_x64_4_22_1_406.exe
2BA. Later versions of WinMFT will return the error "no results found" or similar with the command: "mst status"
2BB. During install, install WinMFT, OEM Package, MTUSB, and SDK (all options) to hard drive.
2C. Recommend WinOF driver 5.50.53000 for Windows 10: https://www.mellanox.com/downloads/WinOF/MLNX_VPI_WinOF-5_50_53000_All_Win2019_x64.exe
2CA. Later WinOF drivers may work now, but were not tested.
2D. Copy this firmware bin (attached) to the "Program Files/Mellanox/WinMFT" folder:
2DA. It starts out as a zip file. But you'll need to unzip it to get the CX3-2_42_5032.bin file out.
2E. Open an admin cmd in Program Files/Mellanox/WinMFT and run: mst status
2EA. Confirm the existence of "mt4099_pci_cr0".
2EB. If it's not there, confirm in devmgmt.msc that the Mellanox device is not still on the Microsoft-included driver. The driver provider should be "Mellanox Technologies Ltd" on the Driver tab.
2EC. Try shutting down, cold booting, and re-attempting after you've installed the driver if you don't see the mt4099_pci_cr0.
3. Run the following command to flash to 40 gigabit.
3A. flint -d mt4099_pci_cr0 -i .\CX3-2_42_5032.bin -allow_psid_change burn
3B. When it asks, "Do you want to continue?" press y and then press Enter.
3C. Message will appear that FS2 FW image is being flashed without signatures. This takes a couple of minutes.
3D. Two "OK" messages should be seen.
3E. Shut down, and cold-boot.
4. Open an admin cmd in Program Files/Mellanox/WinMFT.
4A. Run the following command to switch both ports to Ethernet mode rather than InfiniBand mode:
4AA. mlxconfig -d mt4099_pci_cr0 set LINK_TYPE_P1=ETH LINK_TYPE_P2=ETH
4AB. When asked, "Apply new configuration?" press y then press Enter.
4AC. Shut down, and cold boot.
5. You should now have two 40 gigabit ethernet ports. Ensure your switch port is set to 40 gigabit speed, and connect your QSFP28 DAC / SMF CWDM4 / etc.
6. After you're finished, you can update to the latest driver: https://www.mellanox.com/downloads/WinOF/MLNX_VPI_WinOF-5_50_54000_All_win2019_x64.exe
6A. You'll be informed a VPI upgrade will be performed. Click OK.
6B. Afterward, shut down and cold boot. (A quick verification sketch follows these steps.)
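(Optional sanity check, not part of the original steps: after the final cold boot, flint can confirm that the new image took. From the same WinMFT folder:)

Code:
mst status
flint -d mt4099_pci_cr0 query
REM Look for "FW Version" 2.42.5032 and the PSID of the flashed image in the output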

My apologies it's not perfect.

I really like the dual ports on the CX354A. I use a combination of an 80-gig LACP channel and non-bonded 80 gig. The reason is that you want servers to have 4 plain interfaces or ports to get the maximum SMB Multichannel speed for a single user and transfer; LAG algorithms and LACP don't perform as well there. But LACP bonds are better for remote VM traffic like Sunshine/Moonlight/VNC/RDP.
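(To see whether SMB Multichannel is actually spreading a transfer across the non-bonded ports, Windows exposes this in PowerShell; run on the client while a large copy is in progress:)

Code:
Get-SmbMultichannelConnection     # one row per interface pair actually carrying the transfer
Get-SmbMultichannelConstraint     # empty unless SMB has been pinned to specific interfaces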
 


  • Like
Reactions: richardm

bugacha

Active Member
Sep 21, 2024
461
137
43
The E810 is better when you need to run a router, as modern BSD fully supports Intel DDP, which allows parallel packet processing pipelines.

By the looks of it, the E810 now supports RoCE v2, so I assume it can do RDMA over Ethernet in the same way as the ConnectX-4 Lx.

The ConnectX-4 Lx is cheap though, $40-50 for a dual-port 25G card.
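(If I remember right, on FreeBSD the DDP package ships as the ice_ddp module and can be loaded from /boot/loader.conf; a sketch only, please check the ice(4) man page for your release:)

Code:
# /boot/loader.conf
if_ice_load="YES"
ice_ddp_load="YES"

# after boot, dmesg shows whether the DDP package loaded or the driver fell back to safe mode
dmesg | grep -i ddp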
 
  • Like
Reactions: pimposh

NablaSquaredG

Bringing 100G switches to homelabs
Aug 17, 2020
1,846
1,222
113
The E810 is an okay card. It has its strengths, but also its weaknesses.
E810 is not backwards compatible with 40GBase, for example.

Generally, I would say that the E810 is better than the ConnectX-4 - simply because it’s newer and has some nice features.

However, for general purpose applications, I would rank
ConnectX-6 > ConnectX-5 > E810 > ConnectX-4
 
  • Like
Reactions: pimposh