Intel XL710 Network Cards?


abq

Active Member
May 23, 2015
Hello All,

Why isn't there more interest in the Intel XL710 network cards? ...Patrick wrote a promising evaluation in 2014, and there was some interest for a soft switch, but nothing compared to the amount of Mellanox chatter and applications. ...They look like a very low-power 10GbE & 40GbE option. Is there no interest because the XL710 doesn't do InfiniBand? Or are Ethernet switches still too expensive?

Cheers,
ABQ
 

pyro_

Active Member
Oct 4, 2013
I am going to guess cost, as the Mellanox cards are still considerably cheaper than the XL710 cards.
 

abq

Active Member
May 23, 2015
@Pyro, yup, I think you are right about them being generally more expensive! I did get lucky this weekend on an Intel 2U server and picked up an 'extra' XL710-QDA2 for $90. I feel lucky, but it was still a bit pricey. ...My real problem will be finding an economical 40GbE switch! I can start with direct connect between servers.
 

Patrick

Administrator
Staff member
Dec 21, 2010
I have a few XL710 and X710 cards. There are probably a few reasons that there is not more interest here:
  1. There was a bug in the XL710 that caused a stop shipment and a silicon re-spin earlier this year. That caused a long lapse where no new cards were hitting the market. I do not think we published that on the main site, but it was a big issue.
  2. At 40GbE there are some natural limitations. For example, in a PCIe 3.0 x8 card you cannot push 2x 40GbE non-stop (see the quick bandwidth math after this post). That is also why the 100Gb generation is almost all on x16.
  3. In terms of CPU utilization, I have seen demos where cards with heavier offloads (e.g. Chelsio T580s) offload significantly more, both for standard applications and for things like NVMe over Fabrics. So when you are pushing the cards hard, the XL710s use more CPU. I ran into this real-world while doing the 40GbE QuickAssist testing. Intel likely has an interest in having people buy more expensive CPUs.
With that said, I do still like them, but I have been buying ConnectX-3 (Pro) ENs for generally half the price of XL710s. When I have to fund moving 40+ nodes over, saving 60% on NICs is considerable.

The big reason I have been moving to 40GbE is to limit cabling nightmares. It is easier (for me) to manage the lab with 40GbE links rather than having 2-4 10GbE links. In the DemoEval lab we have so much gear going in/out that cabling is painful. 40GbE is helping.
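
For anyone who wants to sanity-check point 2, the back-of-the-envelope math looks roughly like this. This is a sketch only; the ~15% protocol-overhead figure is an assumption, and the real usable number depends on payload size, MTU and descriptor traffic.

# PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding.
GT_PER_LANE = 8e9
ENCODING = 128 / 130
LANES = 8

raw_bps = GT_PER_LANE * ENCODING * LANES   # ~63.0 Gb/s of raw PCIe bandwidth on x8
payload_bps = raw_bps * 0.85               # assumed ~15% lost to TLP headers, flow control, descriptors

print(f"PCIe 3.0 x{LANES} raw:       {raw_bps / 1e9:.1f} Gb/s")
print(f"usable (assumed 85%): {payload_bps / 1e9:.1f} Gb/s")
print("2x 40GbE line rate:   80.0 Gb/s -> cannot be sustained through an x8 slot")

The same arithmetic is why the 100Gb generation sits on x16: 8 GT/s x 128/130 x 16 lanes is roughly 126 Gb/s raw.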
 

Blinky 42

Active Member
Aug 6, 2015
For me, I was hoping to be able to use the XL710-QDA2 as 8 ports of 10G, but the chip only supports 4 physical ports. Having 4x 10G in one card with the X710-DA4, without needing a splitter cable, is nice, and I do have that in a few servers for special situations. But they came out around the time I could pick up a used Quanta LB6M for around the same price as the XL710 cards, so interest waned on the soft switch side.
 

abq

Active Member
May 23, 2015
@Patrick, thank you - I did not know of the bug in the XL710, so I will need to use my google-fu! I thought these 710s were low power/efficient, but I missed the higher CPU utilization. ...I should have stuck with my Mellanox 10/40/56Gb cards, which are also a better match for PCIe 3.0 x8 (~64Gb/s of data).

@Blinky 42, thank you for the update on splitting the dual 40Gb ports into 10Gb links/ports, especially the point that Intel is still stuck at only supporting 4 physical ports per card. ...I still need to set up my Quanta LB6M, but I can use my Chelsio T440-CR card for a 4x10Gb soft switch in the meantime.

I will have to do some Intel vs Chelsio 4x10Gb card testing after the holidays ;)

...Thanks all for your help on the pros & cons of the Intel 710 cards. I think I will go back to Mellanox & Chelsio cards for 10/40/56Gb connections/links.
 

Patrick

Administrator
Staff member
Dec 21, 2010
@abq I think they are still fine; they have just been costing more than Mellanox lately for me.
 

m.j.hutchinson

New Member
Aug 8, 2019
@Patrick, a little late to the party, but I am also seeking advice on this. How much can be pushed through a PCIe 3.0 x8 card? I would guess 985MB/s x 8 lanes is about 7.9GB/s (~63Gb/s). Are you saying that you can't push 2x simultaneous 40GbE links through an XL710 dual 40GbE or a Mellanox ConnectX-3 M9Nw6? I would love to get some hard and fast answers on PCIe 3.0 and 40Gb/s limitations. I'm looking for advice on how to get as much throughput as possible from my Dell R730xd to a Cisco N3K 3064PQ-10GX (10/40 QSFP+). I thought I might be able to feed it the full ~63Gb/s, or 2x 40GbE links, from the Mellanox ConnectX-3 or Intel XL710.

Any advice would be greatly appreciated!

Matthew
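
Not an authoritative answer, but one way to get a hard number for your own R730xd is simply to measure it: run several parallel iperf3 streams out of both 40GbE ports at once and add up the results. A minimal sketch, assuming iperf3 servers are already listening on two peers reachable through the two ports (the 192.0.2.x addresses are placeholders):

import json
import subprocess

# Placeholder peers, one reachable through each 40GbE port.
PEERS = ["192.0.2.10", "192.0.2.11"]

# Start both clients at the same time: 8 parallel TCP streams each, 30 s, JSON output.
procs = [
    subprocess.Popen(["iperf3", "-c", peer, "-P", "8", "-t", "30", "-J"],
                     stdout=subprocess.PIPE)
    for peer in PEERS
]

total = 0.0
for p in procs:
    out, _ = p.communicate()
    stats = json.loads(out)
    # Field names follow iperf3's TCP JSON output.
    total += stats["end"]["sum_received"]["bits_per_second"]

print(f"aggregate: {total / 1e9:.1f} Gb/s")

Single-stream numbers will be much lower; you need many flows in flight to fill both ports.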



Tiberizzle

New Member
Mar 23, 2017
I think it's worth mentioning here that while some of the dual-port 40GbE chipsets are limited in a real way by PCIe 3.0 x8, the XL710 is not. I've measured peak throughput of ~56Gbps on a dual-port ConnectX-3 cabled as 2x 40GbE in LACP, and I suspect it may be capable of more than that given an ideal PCIe environment (i.e. one that minimizes overheads). In comparison, the XL710 is limited to the same ~44Gbps in any port configuration (including e.g. single- or dual-port VF-to-VF SR-IOV, where Intel has been quick to acknowledge and explain the limitation). The XL710 dual-port parts should in realistic terms be viewed as gaining failover, but not throughput, over the single-port parts.

That said, the Intel drivers in my experience are much more mature than the Mellanox drivers. With an arbitrary distro/vanilla kernel and driver, it's a lucky day if any of the major features/offloads/tightly coupled applications beyond basic single-port IB/Ethernet connectivity work. With the MLNX-supplied OFED, drivers, kernel and half of userspace, it's usually possible to shake a couple more features out of them. I would, however, be pleasantly surprised if Mellanox themselves could demo a configuration where DPDK and RoCE functioned simultaneously, reliably and without caveats with mixed PFs and VFs of a dual-port adapter configured for LACP, given 6 months and a blank check.

I'd personally love to use more of the (single-port) XL710s and had begun to phase out the Mellanox ConnectX-3 in favor of them until the grey-market price seemingly doubled earlier this year. Arbitrarily complex feature combinations are not without caveats on the XL710, but it's possible to achieve "product brief feature bingo" on kernel, driver and application combinations that have actually existed at some point, and requirements less demanding than feature bingo can usually be achieved without even analyzing a kernel dump :p
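
For anyone wanting to reproduce that kind of 2x 40GbE LACP test on the Linux side, the bond itself is just a standard 802.3ad setup. A rough sketch using iproute2 follows; the interface names and the address are placeholders, and the peer or switch has to be configured for LACP as well:

import subprocess

PORTS = ["ens1f0", "ens1f1"]   # placeholder names for the two 40GbE ports

def run(args):
    print("+", " ".join(args))
    subprocess.run(args, check=True)

# 802.3ad (LACP) bond; layer3+4 hashing so separate flows can spread across both links.
run(["ip", "link", "add", "bond0", "type", "bond",
     "mode", "802.3ad", "miimon", "100", "xmit_hash_policy", "layer3+4"])

for port in PORTS:
    run(["ip", "link", "set", port, "down"])
    run(["ip", "link", "set", port, "master", "bond0"])

run(["ip", "link", "set", "bond0", "up"])
run(["ip", "addr", "add", "198.51.100.1/24", "dev", "bond0"])   # placeholder address

Note that even with layer3+4 hashing a single flow still rides one member link, so pushing past 40Gbps on a bond like this needs multiple flows in parallel.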
 

Tiberizzle

New Member
Mar 23, 2017
Hello,

I have an XL710-QDA2 but it cannot connect to a Mellanox switch via a Mellanox cable. The cable and switch work with a Mellanox ConnectX-2 card. It does work server to server with an Intel cable. Can you help me?
Since the cable is a Mellanox one, I would guess it is probably the supported-transceiver restriction. You should see something in dmesg about ports being disabled due to unsupported transceivers if you're binding i40e. If you're binding all PFs to DPDK, it may fail to establish a link with no indication whatsoever of why.
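
A quick way to check for that from a script (just a sketch; the exact i40e message wording varies between driver versions, so this only greps for the usual keywords):

import re
import subprocess

# Pull the kernel log and look for i40e complaints about the plugged-in module.
log = subprocess.run(["dmesg"], capture_output=True, text=True).stdout

pattern = re.compile(r"i40e.*(unsupported|unqualified).*(sfp|qsfp|module)", re.IGNORECASE)
hits = [line for line in log.splitlines() if pattern.search(line)]

if hits:
    print("possible transceiver restriction:")
    print("\n".join(hits))
else:
    print("no i40e module complaints found in dmesg")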

If the issue is the transceiver restriction, I believe PBA K15190 is a variation of the XL710-QDA2 sold without transceiver restrictions. At least, the firmware doesn't have them.

The firmware image for that one is XL710QDA2_6p80_CFGID4p5_K15190.bin, and I've used it with good success on some XL710-QDA2 cards. You can modify nvmupdate.cfg to add the EEPID of your existing firmware image to the 'REPLACES' line of the PBA K15190 stanza, remove that same EEPID from any other REPLACES lines, and flash your way to freedom.
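
If you would rather script that edit than do it by hand, a rough sketch is below. The EEPID value is a placeholder you need to read from your own card/config, and the stanza layout ('END DEVICE' delimiters, 'REPLACES:' keys) is an assumption about the nvmupdate.cfg format, so diff the output against the original before flashing anything:

import re
from pathlib import Path

MY_EEPID = "80001234"                               # placeholder: EEPID of the firmware currently on your card
TARGET_BIN = "XL710QDA2_6p80_CFGID4p5_K15190.bin"   # the K15190 image mentioned above

text = Path("nvmupdate.cfg").read_text()
stanzas = text.split("END DEVICE")                  # crude stanza split, assuming this delimiter

patched = []
for stanza in stanzas:
    if TARGET_BIN in stanza:
        # Let the K15190 stanza claim our current EEPID so nvmupdate accepts the cross-flash.
        if MY_EEPID not in stanza:
            stanza = re.sub(r"(REPLACES:[^\n]*)", r"\1 " + MY_EEPID, stanza, count=1)
    else:
        # Make sure no other stanza still claims our EEPID on its REPLACES line.
        stanza = re.sub(r"(REPLACES:[^\n]*?)[ \t]*" + MY_EEPID, r"\1", stanza)
    patched.append(stanza)

Path("nvmupdate.cfg.patched").write_text("END DEVICE".join(patched))
print("wrote nvmupdate.cfg.patched -- review it, then replace the original at your own risk")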

For the single-port variants you'll probably have to resort to terpstra/xl710-unlocker, which does work, but note that you may need to modify it to account for 'mypoke should not assume PHY capabilities struct to be 0xC long' (Issue #4 · terpstra/xl710-unlocker).
 

Gio

Member
Apr 8, 2017
If the issue is the transceiver restriction, I believe PBA K15190 is a variation of the XL710-QDA2 sold without transceiver restrictions.
Do you know if this trick also works on the X710-DA2 cards to remove the restrictions? Editing the magic number in the EEPROM didn't work for me.