100Gb for the home... ConnectX-4 vs. -5 vs. Omni-Path?


McKajVah

Member
Nov 14, 2011
Norway
Yes I know it's totally overkill, but sometimes...

I'm looking at buying a MikroTik 100Gb switch and some single-port 100Gb PCIe cards, together with those ultra-cheap $5 Intel QSFP28 100Gb transceivers. I'm planning on using them in Ethernet mode.

Mellanox ConnectX-4: $89
Mellanox ConnectX-5: $150
Intel Omni-Path 100Gb: $39

Does that Intel card really work? Not much information on them... Or should I just go for the Mellanox cards?
Any real benefits of going for ConnectX-5 vs. ConnectX-4?

Thanks.
Kaj
 

bitbckt

will google compiler errors for scotch
Feb 22, 2022
Don't use omni-path. That way lies homelab madness.

For a single-port NIC, ConnectX-4 (max PCIe 3.0 x16) is fine.

ConnectX-5 introduced the PCIe 4.0 x16 cards that are necessary to saturate dual 100Gb ports.
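A quick back-of-the-envelope check of that claim, using nominal PCIe lane rates and 128b/130b encoding (figures assumed, not from the thread):

```python
# Why PCIe 3.0 x16 cannot feed two 100GbE ports, but PCIe 4.0 x16 can.
# Nominal figures: gen3 = 8 GT/s/lane, gen4 = 16 GT/s/lane, 128b/130b encoding.

def pcie_usable_gbps(lanes: int, gt_per_s: float, encoding: float) -> float:
    """Raw lane rate times encoding efficiency, summed over all lanes."""
    return lanes * gt_per_s * encoding

gen3 = pcie_usable_gbps(16, 8.0, 128 / 130)   # ~126 Gb/s usable
gen4 = pcie_usable_gbps(16, 16.0, 128 / 130)  # ~252 Gb/s usable

print(round(gen3), round(gen4))  # 126 252
assert gen3 < 200 < gen4  # one 100G port fits in gen3 x16; dual ports need gen4
```

So a gen3 x16 slot comfortably carries one 100G port, but falls well short of the ~200 Gb/s that a dual-port card can push.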
 

nexox

Well-Known Member
May 3, 2023
I, too, have been scoping out an upgrade to 100G for when I can get my hands on a CRS510-8XS-2XQ. After much scrolling through eBay I have also ended up at the $89 ConnectX-4, though my machines are all stuck at PCIe 3.0; maybe the situation looks different if you have 4.0.
 

i386

Well-Known Member
Mar 18, 2016
Germany
Stay away from Omni-Path: it's a proprietary version of InfiniBand from Intel without support for Ethernet. It's also a dead end, because Intel stopped all development on it.

The CX-4s are great and the single-port OEM versions are cheapish. Nvidia removed support for them in their OFED distros since v3.10.
 

ano

Well-Known Member
Nov 7, 2022
Where did you find CX4 for $89? Or CX5 that cheap?

You need CX6 to get PCIe 4.0, btw.

Forget Omni-Path.
 

bitbckt

will google compiler errors for scotch
Feb 22, 2022
Yup. CX-5 EN has PCIe 4.0 SKUs. I use the MCX516A-CDAT, specifically.
 

DavidWJohnston

Active Member
Sep 30, 2020
I run 100G with a Celestica Seastone DX010 and a mix of CX4s and HP-branded QLogic Fastlinq 45000s, and a bunch of 10G stuff.

Both have worked well, CX4s are way more common so that's what I'd recommend. Sometimes they arrive from eBay in InfiniBand mode, and there is a simple command-line tool to change the protocol to Ethernet. They work with those cheap Intel transceivers.
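The command-line tool in question is NVIDIA's `mlxconfig` (part of the MFT package). A typical sequence looks like the following; the device path is an example (check yours with `mst status`), and a reboot is needed for the new link type to take effect:

```shell
# Start the Mellanox software tools service and find the device path
mst start
mst status

# Set both ports to Ethernet (1 = InfiniBand, 2 = Ethernet)
mlxconfig -d /dev/mst/mt4115_pciconf0 set LINK_TYPE_P1=2 LINK_TYPE_P2=2
```

On a single-port card like the MCX455A, only `LINK_TYPE_P1` applies.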

The CX5 can handle a higher packet rate, can saturate the dual-port cards, and has more offload capabilities like NVMe. None of those are likely useful for a homelab unless you've got a specific reason. There is this comparison chart on STH: https://www.servethehome.com/mellan...5-and-connectx-6-ethernet-comparison-chart-1/

100G may be overkill for most, but with modern SSD storage and FTTH it's easy to saturate 10G, and the price difference between 25/40/50/100G is usually small.
 

Docop

Member
Jul 19, 2016
Wasn't it a bit problematic at some point? The NIC itself heats up quite a lot, and the Intel QSFP even more. I'm looking to upgrade, as I'm currently on 25G and it's quite anemic, and quite the bottleneck.
 

DavidWJohnston

Active Member
Sep 30, 2020
The CWDM4s do heat up and the cards absolutely require airflow. I run desktop cases for my servers, and I put a 140mm fan above my PCIe card bank set at a low (silent) RPM and that works perfectly. So you need air movement, but not that much. Larger fans running slow will always be quieter.

I created a post about this a little while ago when my PCIe fan failed and killed a transceiver: https://forums.servethehome.com/ind...r-kaiam-100g-cwdm4-sm-2xlc.39747/#post-373428

I also run MM SR4 transceivers (Arista) and those are cooler, which gives more thermal headroom for handling non-ideal conditions.

The faster link speeds do make more heat. I've noticed this when running 10G SFPs at 1G; the reduction in heat output is really noticeable in a cage without fans.
 

TRACKER

Active Member
Jan 14, 2019
I use mine in Z820s, so airflow is very good, but the CWDM4s definitely get quite hot during operation (50-55 °C).
 

McKajVah

Member
Nov 14, 2011
Norway
Thanks for all your great answers and advice. I went for the MCX455A-ECAT single QSFP cards. Should be plenty for my needs.
 

kathampy

New Member
Oct 25, 2017
The ConnectX-4 cards are no longer supported and the latest Windows driver won't even recognize them - you have to use an older driver. They also idle at 85 C and shut down without airflow. I replaced them with Intel E810-CQDA2 cards as I found a good deal. I would get at least ConnectX-5 cards going forward.
 

klui

Well-Known Member
Feb 3, 2019
kathampy said:
The ConnectX-4 cards are no longer supported and the latest Windows driver won't even recognize them - you have to use an older driver. They also idle at 85 C and shut down without airflow. I replaced them with Intel E810-CQDA2 cards as I found a good deal. I would get at least ConnectX-5 cards going forward.
Per @i386, but not completely correct: WinOF-2 still supports the PCIe x8 CX4 Lx cards, though only at 10, 25, 40, and 50G; no 1, 56, or 100G support.

Scroll to the bottom there; that page shows the supported adapters.