Intel E810: Available breakout modes


NablaSquaredG

Layer 1 Magician
Aug 17, 2020
As this information is rather difficult to find, I've compiled it here.

Intel E810 NICs support port breakout mode (the XL710 does as well); these are the only NICs I know of that support this. To configure the ports, you need the Intel EPCT (Ethernet Port Configuration Tool).

The following breakout modes are available on the E810-CQDA2 (proprietary driver v1.13.7 + firmware version 4.40 + DDP 1.3.35.0 + kernel 6.5.11-7-pve, the newest versions as of January 17th, 2024):

Code:
root@pve1:~# ./EPCT/Linux_x64/epct64e -devices
Ethernet Port Configuration Tool
EPCT version: v1.40.05.05
Copyright 2019 - 2023 Intel Corporation.

NIC Seg:Bus:Fun   Ven-Dev   Connector Ports Speed    Quads  Lanes per PF
=== ============= ========= ========= ===== ======== ====== ============
 1) 000:007:00-01 8086-1592 QSFP      2     100 Gbps Dual   4

All actions succeeded.
root@pve1:~# ./EPCT/Linux_x64/epct64e -nic 1 -get
Ethernet Port Configuration Tool
EPCT version: v1.40.05.05
Copyright 2019 - 2023 Intel Corporation.

Available Port Options:
==========================================================================
        Port                             Quad 0           Quad 1         
Option  Option (Gbps)                    L0  L1  L2  L3   L4  L5  L6  L7 
======= =============================    ================ ================
Active  2x1x100                       -> 100   -   -   -  100   -   -   -
        2x50                          ->  50   -  50   -    -   -   -   -
        4x25                          ->  25  25  25  25    -   -   -   -
        2x2x25                        ->  25  25   -   -   25  25   -   -
        8x10                          ->  10  10  10  10   10  10  10  10
        100                           -> 100   -   -   -    -   -   -   -

All actions succeeded.
The 2x25+2x10-2x10 and 1x10+1x25+1x10+1x25-2x10 options that Intel mentions in one document don't seem to be available.
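
If you want to actually change the configuration, the workflow looks roughly like the sketch below. The interface name is just an example, and the exact -set syntax is my assumption based on the -devices/-get syntax above, so double-check it against the readme that ships with EPCT before running anything.

Code:
# Check what driver / NVM firmware / DDP package the ice driver reports
# (ens1f0np0 is just an example interface name - use your own)
ethtool -i ens1f0np0

# List NICs and show the currently active port option (as above)
./EPCT/Linux_x64/epct64e -devices
./EPCT/Linux_x64/epct64e -nic 1 -get

# Apply a different port option, e.g. 4x25.
# NOTE: "-set 4x25" is my assumption of the syntax - check the EPCT readme.
./EPCT/Linux_x64/epct64e -nic 1 -set 4x25

# A reboot is typically needed before the new port layout shows up.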


TODO: Test XL710
 

ericloewe

Active Member
Apr 24, 2017
It can only do up to four 25 Gb/s links but it can do eight 10 Gb/s links? That's bizarre.
 

NablaSquaredG

Layer 1 Magician
Aug 17, 2020
It can only do up to four 25 Gb/s links but it can do eight 10 Gb/s links? That's bizarre.
Yep. The limit is the total available bandwidth, which is 1x100G (except in the special 2x100G mode).

The currently unavailable "2x25+2x10-2x10" & "1x10+1x25+1x10+1x25-2x10" options both sum to 90 Gbps.

If you read the datasheet, you will realise that the normal E810-CQDA2 (which has a single E810-CAM2 chip) is limited to 100 Gbps throughput in total. That's why the E810-2CQDA2 or the Silicom P488CG2I81L exist, which have two chips (E810-CAM1) on them for 200 Gbps total. They require an x8x8 bifurcated slot.
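
To make the arithmetic explicit, here's a quick sanity check of the lane sums against that ~100G budget (lane speeds taken from the EPCT table in the first post; 2x1x100 is the exception, since it links 2x100G even though the chip still only processes ~100G):

Code:
# Sum of lane speeds per port option vs. the ~100G budget
for mode in "8x10:8*10" "4x25:4*25" "2x50:2*50" "2x2x25:4*25" \
            "2x25+2x10-2x10:2*25+4*10" "1x10+1x25+1x10+1x25-2x10:2*25+4*10"; do
    printf '%-26s %3d Gbps\n' "${mode%%:*}" "$(( ${mode##*:} ))"
done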
 
  • Like
Reactions: ericloewe

NaCl

Member
Dec 15, 2018
Yep. The limit is the total available bandwidth, which is 1x100G (except in the special 2x100G mode).

The currently unavailable "2x25+2x10-2x10" & "1x10+1x25+1x10+1x25-2x10" options both sum to 90 Gbps.

If you read the datasheet, you will realise that the normal E810-CQDA2 (which has a single E810-CAM2 chip) is limited to 100 Gbps throughput in total. That's why the E810-2CQDA2 or the Silicom P488CG2I81L exist, which have two chips (E810-CAM1) on them for 200 Gbps total. They require an x8x8 bifurcated slot.
For the -CQDA2 variant, is that total bandwidth dynamic in all modes, or capped at each port regardless of the presence/link state of the other cage's module? For instance, if the config is set to 2x1x100 and there's only one 100G module (the other cage is empty), is the maximum throughput going to be 50Gbit?

And in a similar vein, if only half the PCIe lanes are available (8 instead of 16), I assume 50Gbit will be the upper bound regardless of my first question. I guess the nuance would be: does it halve the available bandwidth? E.g. in an x8 slot with 50Gb max bandwidth in 2x1x100 mode, does each port get pinned at 25Gb even with 100Gb optics? Or is it dynamic, with either port just sharing the total (in the x8 case, 50Gb) of available bandwidth as connections demand?
 

NablaSquaredG

Layer 1 Magician
Aug 17, 2020
For instance, if the config is set to 2x1x100 and there's only one 100G module (the other cage is empty), is the maximum throughput going to be 50Gbit?
Nope, you'll be able to get 100G (see the report here: https://fast.dpdk.org/doc/perf/DPDK_23_03_Intel_NIC_performance_report.pdf - a dual-port NIC with only one port connected, but clearly >50G throughput is possible).

You need to differentiate between two things:

Maximum Total Available Bandwidth for the ports (in breakout mode): This is the sum of the linked port speeds (NOT processing speed), which is always 100 Gbit or less, except in 2x100G mode.

Actual performance of the NIC: That's a difficult topic. Intel clearly states in the datasheet that the E810 will only do 100 Gbps.

Sadly, Intel does not publish DPDK performance reports for 2x100 mode (as Broadcom and Mellanox do).

As an example, the Broadcom P2100G (a 2x 100G card) will NOT do line rate (200G) with 2x100G linked at packet sizes below 512 bytes (2x100G, 64 byte: 35.8% max; 128 byte: 63.32% max; 256 byte: 91.33% max).
In single-port 1x100G mode, the Broadcom card will do 100% line rate at packet sizes above 64 bytes.

In single-port operation (2x100G mode but only 1x100G connected), the Intel E810 also does line rate at packet sizes above 64 bytes (78.2% for 64 bytes).


And in a similar vein, if only half the PCIe lanes are available 8 instead of 16, I assume 50Gbit will be the upper bounds regardless of my first question. I guess the nuance would be does it halve available bandwidth? E.g. x8 slot w/50Gb max bw in 2x1x100 does each port get pinned at 25Gb even w/100Gb optics? Or dynamic and either port just shares the total, in the x8 case, 50Gb of available bw as connections demand?
Nope.
The PCIe connection and the ports are fully decoupled; the NIC will just push as much through the PCIe link as it can.
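
If you want to see what the card actually negotiated on the PCIe side, lspci shows the link capability vs. status (07:00.0 should correspond to the bus address from my EPCT output above; substitute your own):

Code:
# Compare the card's PCIe capability (LnkCap) with what was negotiated (LnkSta)
# An x8 link just means less host bandwidth; the ports still link at full speed.
lspci -s 07:00.0 -vv | grep -E 'LnkCap:|LnkSta:'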


On another note:
I strongly recommend using Mellanox ConnectX-6 Dx instead of Intel E810 or Broadcom P2100G. They are just technologically superior in all aspects if you don't absolutely need the special breakout stuff.
 

NaCl

Member
Dec 15, 2018
Nope, you'll be able to get 100G (see the report here: https://fast.dpdk.org/doc/perf/DPDK_23_03_Intel_NIC_performance_report.pdf - a dual-port NIC with only one port connected, but clearly >50G throughput is possible).

You need to differentiate between two things:

Maximum Total Available Bandwidth for the ports (in breakout mode): This is the sum of the linked port speeds (NOT processing speed), which is always 100 Gbit or less, except in 2x100G mode.

Actual performance of the NIC: That's a difficult topic. Intel clearly states in the datasheet that the E810 will only do 100 Gbps.

Sadly, Intel does not publish DPDK performance reports for 2x100 mode (as Broadcom and Mellanox do).

As an example, the Broadcom P2100G (a 2x 100G card) will NOT do line rate (200G) with 2x100G linked and packet sizes below 512 bytes (2x100G, 64 byte: 35.8% max; 128 byte: 63.32% max; 256 byte: 91.33% max).
In single-port 1x100G mode, the Broadcom card will do 100% line rate at packet sizes above 64 bytes.

In single-port operation (2x100G mode but only 1x100G connected), the Intel E810 also does line rate at packet sizes above 64 bytes (78.2% for 64 bytes).



Nope.
The PCIe connection and the ports are fully decoupled; the NIC will just push as much through the PCIe link as it can.


On another note:
I strongly recommend using Mellanox ConnectX-6 Dx instead of Intel E810 or Broadcom P2100G. They are just technologically superior in all aspects if you don't absolutely need the special breakout stuff.
Thanks for the clarification! Also, wrt the ConnectX-6, how's the heat? I've tended to stay away from the Mellanox boards due to posts indicating they run blazing hot even with decent airflow. Has this improved, or does it continue to be an issue?

The larger issue, for me, would be module lock-in/lockout. It would seem _super_ out of character for NVIDIA not to try to sell the customer's kidneys if they could get an extra $0.10 USD out of them. Hard to imagine them having any largesse wrt any modules but their own being recognized.

The benefit of the E810 cards is that they can be had for "mere mortals" pricing of about $350 USD-ish, are "open optic", and run cooler than Intel's X550 line. The ConnectX-6 cards are about $1K each (off eBay), and then there are the module costs, or QSFP56 to QSFP28 adapters at the very least.

Perhaps I'm just short-sighted, but for my home infrastructure (LC OM5 and Cat 7) I don't have deep enough pockets to be big baller enough for a ConnectX-6 retrofit. :)
 

NablaSquaredG

Layer 1 Magician
Aug 17, 2020
Thanks for the clarification! Also, wrt the ConnectX-6, how's the heat? I've tended to stay away from the Mellanox boards due to posts indicating they run blazing hot even with decent airflow. Has this improved, or does it continue to be an issue?
Power consumption of the ConnectX-6 and E810-CQDA2 is roughly the same (the ConnectX-6 needs 2.5W more).

The larger issue, for me, would be module lock-in/lockout. It would seem _super_ out of character for NVIDIA not to try to sell the customer's kidneys if they could get an extra $0.10 USD out of them. Hard to imagine them having any largesse wrt any modules but their own being recognized.
No module lockout on Ethernet (it's a bit different for InfiniBand), just some oddities (like the NIC refusing to run 100G transceivers at 40G if they don't have the Mellanox OUI coded).

The benefit of the E810 cards is that they can be had for "mere mortals" pricing of about $350 USD-ish, are "open optic", and run cooler than Intel's X550 line.
The CX623106A is about $400 on eBay.

then there are the module costs, or QSFP56 to QSFP28 adapters at the very least.
No, you don't need any QSFP56 to QSFP28 adapters (they don't even exist, actually). The form factors are mechanically and electrically compatible (like QSFP28 and QSFP+).
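
If you want to check what the NIC actually sees in the cage (identifier, vendor name/OUI, part number), ethtool can dump the module EEPROM; the interface name is again just an example:

Code:
# Read the transceiver EEPROM - shows the identifier (e.g. QSFP28),
# vendor name, vendor OUI and vendor part number
ethtool -m ens1f0np0 | grep -iE 'identifier|vendor'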
 

NaCl

Member
Dec 15, 2018
Power consumption of the ConnectX-6 and E810-CQDA2 is roughly the same (the ConnectX-6 needs 2.5W more).

No module lockout on Ethernet (it's a bit different for InfiniBand), just some oddities (like the NIC refusing to run 100G transceivers at 40G if they don't have the Mellanox OUI coded).
Very nice.

The CX623106A is about $400 on eBay.
Interesting! I was looking for MCX623106AN-CDAT.

No, you don't need any QSFP56 to QSFP28 adapters (they don't even exist, actually). The form factors are mechanically and electrically compatible (like QSFP28 and QSFP+).
I could have sworn I saw an adapter on fs.com during a sanity check before posting that. Given that I can't reproduce it, it must be an eyesight issue on my part. Apologies.

I guess I'll have to look into getting a couple.
 
  • Like
Reactions: NablaSquaredG