List of NICs and their equivalent OEM parts


Anaerin

New Member
Dec 22, 2019
I just grabbed a bunch of what were called "X540-T2" dual 10G cards. The PCB is branded "Inspur". pciconf on FreeBSD reports them as:
Code:
ix0@pci0:21:0:0:        class=0x020000 rev=0x01 hdr=0x00 vendor=0x8086 device=0x1528 subvendor=0x8086 subdevice=0x0000
    vendor     = 'Intel Corporation'
    device     = 'Ethernet Controller 10-Gigabit X540-AT2'
    class      = network
    subclass   = ethernet
    cap 01[40] = powerspec 3  supports D0 D3  current D0
    cap 05[50] = MSI supports 1 message, 64 bit, vector masks
    cap 11[70] = MSI-X supports 64 messages, enabled
                 Table in map 0x20[0x0], PBA in map 0x20[0x2000]
    cap 10[a0] = PCI-Express 2 endpoint max data 128(512) FLR RO NS
                 max read 4096
                 link x8(x8) speed 2.5(5.0) ASPM L1(L0s/L1)
    ecap 0001[100] = AER 2 0 fatal 0 non-fatal 1 corrected
    ecap 000e[150] = ARI 1
    ecap 0010[160] = SR-IOV 1 IOV disabled, Memory Space disabled, ARI disabled
                     0 VFs configured out of 64 supported
                     First VF RID Offset 0x0180, VF RID Stride 0x0002
                     VF Device ID 0x1515
                     Page Sizes: 4096 (enabled), 8192, 65536, 262144, 1048576, 4194304
    ecap 000d[1d0] = ACS 1
ix1@pci0:21:0:1:        class=0x020000 rev=0x01 hdr=0x00 vendor=0x8086 device=0x1528 subvendor=0x8086 subdevice=0x0000
    vendor     = 'Intel Corporation'
    device     = 'Ethernet Controller 10-Gigabit X540-AT2'
    class      = network
    subclass   = ethernet
    cap 01[40] = powerspec 3  supports D0 D3  current D0
    cap 05[50] = MSI supports 1 message, 64 bit, vector masks
    cap 11[70] = MSI-X supports 64 messages, enabled
                 Table in map 0x20[0x0], PBA in map 0x20[0x2000]
    cap 10[a0] = PCI-Express 2 endpoint max data 128(512) FLR RO NS
                 max read 4096
                 link x8(x8) speed 2.5(5.0) ASPM L1(L0s/L1)
    ecap 0001[100] = AER 2 0 fatal 0 non-fatal 1 corrected
    ecap 000e[150] = ARI 1
    ecap 0010[160] = SR-IOV 1 IOV disabled, Memory Space disabled, ARI disabled
                     0 VFs configured out of 64 supported
                     First VF RID Offset 0x0180, VF RID Stride 0x0002
                     VF Device ID 0x1515
                     Page Sizes: 4096 (enabled), 8192, 65536, 262144, 1048576, 4194304
    ecap 000d[1d0] = ACS 1
They seem to be working just fine: they connect at 10G easily and push data at the expected rates. So if these are "fakes", they're very good ones.
Also, one of the 3 cards I got arrived without the pushpins and springs holding the (passive) heatsink on, which had unsurprisingly come off the card in transit and was rattling around inside the plastic clamshell.
IMG_20250308_111306 (Large).jpg
IMG_20250308_111327 (Large).jpg
 

tooldevops

New Member
Dec 17, 2025
Is there a consistent way to calculate how many packets per second (PPS) a single CPU thread can handle at the default MTU of 1500? Are there any public benchmarks for this? Or is my assumption wrong, and the reason I'm only seeing 10-12 Gbps on a 25 Gbps link, even with multiple threads, is not actually the CPU? Interestingly, the issue disappears when using an MTU of 9000.
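For a rough frame of reference (plain Ethernet framing math, not a measured or NIC-specific figure): a 1500-byte MTU frame occupies about 1538 bytes on the wire once the 14-byte header, 4-byte FCS, 8-byte preamble and 12-byte inter-frame gap are added, so a fully loaded 25 Gbps link works out to roughly 2 Mpps, and 10-12 Gbps to roughly 0.8-1 Mpps. A small sketch of that arithmetic (the overhead constants are the assumed standard Ethernet values):
Code:
// Rough theoretical maximum packets per second for a given link speed and MTU.
// Assumes standard Ethernet overhead: 14 B header + 4 B FCS + 8 B preamble + 12 B inter-frame gap.
public class PpsEstimate {
    static double maxPps(double linkGbps, int mtu) {
        double wireBytes = mtu + 14 + 4 + 8 + 12;   // bytes each frame occupies on the wire
        return (linkGbps * 1e9) / (wireBytes * 8);  // frames per second at full line rate
    }

    public static void main(String[] args) {
        System.out.printf("25 Gbps @ MTU 1500: %.2f Mpps%n", maxPps(25, 1500) / 1e6); // ~2.03
        System.out.printf("25 Gbps @ MTU 9000: %.2f Mpps%n", maxPps(25, 9000) / 1e6); // ~0.35
        System.out.printf("12 Gbps @ MTU 1500: %.2f Mpps%n", maxPps(12, 1500) / 1e6); // ~0.98
    }
}
By that math, jumbo frames cut the packet count (and the per-packet interrupt/softirq work) by roughly a factor of six at the same throughput, which would be consistent with the bottleneck disappearing at MTU 9000.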
 

tooldevops

New Member
Dec 17, 2025
Iperf3 in Windows?
Yes, I'm using iperf3 in Ubuntu VMs on Hyper-V for now, but I'm planning to run my own TCP application written in Java.

I'm waiting for the delivery of the 'Linux servers,' which will be an Intel Core i5-12600H and an Intel Core i9-12900H (I placed orders for both). My 'workstation' is running an AMD 7600, and I'm using ConnectX-4 Lx cards across the board.
I currently have two Ubuntu Server instances running on Hyper-V. To test the ports, I used separate external switches for each to ensure isolation. At 10 Gbps, the switch confirmed that the port isolation works perfectly. However, since I don't have an SFP28 switch yet, I connected the two ports of the card directly using a DAC cable. This allowed me to verify performance without a second machine. I'm planning to use the 25 GbE bandwidth for internal network traffic.

(Since the Linux server hasn't arrived yet, I've been wondering if this limitation is a Windows-only thing, or if the CPU is bottlenecking the packet flow, or if there's something else going on.)

Unfortunately, I can't find any official documentation specifying the maximum limit for send/receive buffers on ConnectX-4 Lx cards, or whether increasing them has any positive impact on performance.
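On the application side at least, a Java TCP program can request larger socket buffers itself; what the OS actually grants is capped by the kernel limits (net.core.rmem_max / net.core.wmem_max on Linux), and these socket buffers are separate from the NIC's hardware ring buffers. A minimal sketch, with the 4 MB size and the address/port being placeholder values:
Code:
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class BigBufferClient {
    public static void main(String[] args) throws IOException {
        Socket s = new Socket();
        // Request 4 MB buffers (example value). Setting the receive buffer before connect()
        // lets a TCP window larger than 64 KB be advertised during the handshake.
        s.setReceiveBufferSize(4 * 1024 * 1024);
        s.setSendBufferSize(4 * 1024 * 1024);
        s.connect(new InetSocketAddress("192.168.1.10", 5201)); // placeholder target
        System.out.println("granted rcvbuf=" + s.getReceiveBufferSize()
                + " sndbuf=" + s.getSendBufferSize());
        s.close();
    }
}
Checking the values the OS actually granted (as the last print does) tends to be more telling than the requested size, since the kernel silently clamps requests above its limits.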

Képernyőkép 2025-12-21 124257.png
 

tooldevops

New Member
Dec 17, 2025
Has anyone compared the ConnectX-4 Lx with the ConnectX-6 Lx yet? Is there any practical difference that makes the upgrade worthwhile?
 

i386

Well-Known Member
Mar 18, 2016
Has anyone compared the ConnectX-4 Lx with the ConnectX-6 Lx yet? Is there any practical difference that makes the upgrade worthwhile?
CX-4 is PCIe 3.0, CX-6 is PCIe 4.0; the CX-6 could handle one 100GbE port via an x8 PCIe slot (I don't know if the CX-6 NICs can be crossflashed to other SKUs/firmware, because they are pretty expensive in the EU/Germany).
The CX-6 is also advertised with RoCE advanced / Zero Touch RoCE (Mellanox/NVIDIA "magic" to make RDMA deployments easier), root of trust + secure boot support, and newer drivers + firmware.

I would say for "dumb" Ethernet it's not worth it; get whatever is cheaper for you.
If you want to use RoCE and not bother with switch configuration, then yes, the CX-6 is better than the CX-4.
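The x8 point falls straight out of the link math: PCIe 3.0 runs 8 GT/s per lane with 128b/130b encoding, so an x8 link carries roughly 63 Gbps before protocol overhead, while PCIe 4.0 at 16 GT/s doubles that to roughly 126 Gbps, which is why a CX-6 can feed a single 100GbE port from an x8 slot and a PCIe 3.0 card cannot. A quick check of the numbers (standard PCIe encoding figures, ignoring TLP overhead):
Code:
// Raw PCIe x8 link bandwidth vs. what a single 100GbE port needs.
// Uses the standard 128b/130b encoding of PCIe 3.0/4.0 and ignores TLP/packet overhead.
public class PcieBudget {
    static double x8Gbps(double gtPerSecPerLane) {
        return gtPerSecPerLane * (128.0 / 130.0) * 8;  // usable Gbps across 8 lanes
    }

    public static void main(String[] args) {
        System.out.printf("PCIe 3.0 x8: %.1f Gbps%n", x8Gbps(8));  // ~63.0, short of 100GbE
        System.out.printf("PCIe 4.0 x8: %.1f Gbps%n", x8Gbps(16)); // ~126.0, enough for 100GbE
    }
}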
 