Intel XL710-QDA2 - $549


DolphinsDan

Member
Sep 17, 2013
90
6
8
Has anyone gotten one of these yet?
Intel XL710QDA2BLK PCIe 3.0 x8 Dual Port Ethernet Converged Network Adapters | eBay

2x QSFP+ 40Gb Ethernet ports that can be split into 4x 10GbE using breakout cables. I'd imagine this is the best-in-class network card right now.

I'm also thinking that for @Chuckleb's DIY switch project, it would be better to get one of these instead of getting four dual-port SFP+ 10G cards.

I'd be really interested to see the day someone puts one of these and an HBA on an E3 platform to make a storage and networking appliance.

At about $50 more than the single port card, this is the obvious choice.
 

Hank C

Active Member
Jun 16, 2014
644
66
28
So this thing can connect to 8 different servers with 10Gb SFP+?
 

cesmith9999

Well-Known Member
Mar 26, 2013
1,417
468
83
No, but here at work we ordered over 3,000 Mellanox ConnectX-3 Pro EN dual-port 40GbE adapters for a lot cheaper than what that card is going for. And RoCE v2 is not simple to configure yet...

But it sure is blazingly fast.

Chris
 
  • Like
Reactions: Chuntzu

Hank C

Active Member
Jun 16, 2014
644
66
28
That's a bummer if it can't split to 8x 10Gb SFP+ connections to servers...
 

Scott Laird

Active Member
Aug 30, 2014
312
144
43
From reading the XL710 datasheet, I'm fairly confident that the hardware is able to support 4x10 operation (probably across both QSFP ports), but Intel's drivers don't support it.

See Intel® Ethernet Controller XL710 Datasheet page 164: Supported Media Types, where it lists "QSFP+ and breakout DA twin-ax cables or active copper cables" as supported for 10G operation. Also search for 10GBASE-CR1, which isn't precisely the same thing electrically, but it shows up all over the place indicating 4x10 support per QSFP.

Heck, Intel even sells QSFP/4xSFP+ breakout cables (X4DACBL3), which it lists as compatible with the XL710.

A quick skim of the Linux drivers doesn't show any obvious support, though, just like Intel's been saying.
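For anyone who wants to poke at this on their own card, here's a minimal Python sketch of the kind of check I mean (assuming a Linux box with ethtool installed and the i40e driver loaded; the interface name enp3s0f0 is just a placeholder):

```python
import subprocess

IFACE = "enp3s0f0"  # placeholder: substitute the XL710 port name on your system

def link_mode_lines(iface: str) -> list[str]:
    """Return every line of `ethtool <iface>` output that mentions a link mode."""
    out = subprocess.run(["ethtool", iface],
                         capture_output=True, text=True, check=True).stdout
    return [line.strip() for line in out.splitlines() if "base" in line]

if __name__ == "__main__":
    for line in link_mode_lines(IFACE):
        print(line)
    # With the stock i40e driver I'd expect only 40000base* entries here;
    # the absence of any 10000base* modes would match the driver-source reading above.
```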
 

DolphinsDan

Member
Sep 17, 2013
90
6
8
Isn't the 40 just 10 x4 anyway? I'd think it'd just show up as 4x10?

To me this is the big winner.

Even if it "won't work" for @Chuckleb 's project for latency, you should still be able to get decent throughput right?
 

Scott Laird

Active Member
Aug 30, 2014
312
144
43
Sort of, but not exactly. 4x10 has 4 MAC addresses, 4 sets of queues, and so forth. While 40 GbE is made up of 10 Gb lanes, it's not just a LAG of 4x10 GbE.
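To make that concrete, here's a rough Python sketch (Linux only; the interface names are hypothetical placeholders for a card that actually exposed one netdev per 10G lane) showing what "separate MACs and separate queue sets" would look like from the OS side:

```python
from pathlib import Path

# Hypothetical interface names for a card that exposed one netdev per 10G lane.
LANE_IFACES = ["enp3s0f0", "enp3s0f1", "enp3s0f2", "enp3s0f3"]

def describe(iface: str) -> str:
    """Report the MAC address and queue counts Linux exposes for one netdev."""
    base = Path("/sys/class/net") / iface
    mac = (base / "address").read_text().strip()
    tx = len(list((base / "queues").glob("tx-*")))
    rx = len(list((base / "queues").glob("rx-*")))
    return f"{iface}: mac={mac} tx_queues={tx} rx_queues={rx}"

if __name__ == "__main__":
    # In a real 4x10 breakout each lane is its own link: its own MAC, its own
    # queues, its own link state. A single 40GbE port is one netdev striped
    # across the same four lanes, which is why it isn't just a LAG of 4x10GbE.
    for iface in LANE_IFACES:
        print(describe(iface))
```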
 

jtreble

Member
Apr 16, 2013
93
10
8
Ottawa, Canada
... I'd think it'd just show up as 4x10?...
DolphinsDan,

Just so it's clear, I'll repeat what Intel said when I asked them about this:
".... breakout cable will give you flexibility to connect XL710 QSFP+ to SFP+ switch with downgrade speed of 10Gbps. in the operating system, you should still see 1 Ethernet adapter if you would go with DA1 and 2 Ethernet adapter with DA2."
 

capn_pineapple

Active Member
Aug 28, 2013
356
80
28
Someone needs to buy one and test it... I would love to, but I have neither the funds nor the additional hardware for testing.
 

Chuckleb

Moderator
Mar 5, 2013
1,017
331
83
Minnesota
We've got to get a definitive answer to this since it keeps popping back up. I just spent the past 2 hours trying to track down information from the Intel driver docs, switch manufacturers, etc. Here's what I've found; mind you, nothing definitive on the XL710.

However you use the cable, you need device support to tell the port what to do. It's generally used between switches, where you go in and configure the switch port to run as 1x40Gb or 4x10Gb; this is shown in the Mellanox and Arista docs.

From the desktop/server adapter side, I couldn't find anything in the Intel Linux driver release notes or the Windows driver release notes except for a tiny note around line 656 about the older 82599-based adapters needing the other end to be configured as 4x10Gb. Regardless, that's consistent with the switch-config point above.

The last interesting article was from another blog that showed two cards, one with a QSFP+ connector and the other with 4x SFP+ connectors, implying that you can connect the two that way. While I believe that to be true, you would need to configure the NIC to bond the 4 channels on one end, so I assume driver support would let you define the port as 40Gbps or 4x10Gbps, but there are no notes about this in any of the drivers.

The card is limited to 40Gbps aggregate according to the controller datasheet (pg 91), and the PCIe 3.0 x8 bus couldn't carry a full 2x40Gbps anyway.
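Just as a back-of-the-envelope check on those numbers (my own arithmetic, not something pulled from the datasheet beyond the figures above):

```python
# Back-of-the-envelope: what a PCIe 3.0 x8 slot can move vs. two 40GbE ports.
GT_PER_LANE = 8.0      # PCIe 3.0 transfer rate per lane (GT/s)
ENCODING = 128 / 130   # 128b/130b line encoding
LANES = 8

pcie_gbps = GT_PER_LANE * ENCODING * LANES   # ~63 Gb/s per direction, before protocol overhead
offered_gbps = 2 * 40                        # both QSFP+ ports at line rate

print(f"PCIe 3.0 x8 link bandwidth: ~{pcie_gbps:.0f} Gb/s per direction")
print(f"2x 40GbE at line rate:       {offered_gbps} Gb/s")
# Even before the controller's own 40 Gb/s aggregate cap, the slot can't carry
# both ports at full rate, so the second port is more about flexibility and
# failover than doubling throughput.
```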

Lastly, the most useful thing I could find in the 1700+ page document was this line from 6.2.23.3 (pg 506):
"Note: The 10GBASE-CR1 configuration is not an IEEE standard; however, this configuration could be enabled with switches supporting CR4 to obtain 4 x 10 Gb/s over QSFP+ module."

So I'm calling it: you cannot use a breakout cable from this card to connect to 4 (or 8) servers. Breakouts are used as uplink or bonding cables between switches, or to break out a 40Gbps switch port to 4 servers. Heck, I don't even know if you can run one QSFP+ to two switches with 2x SFP+ connectors each (not sure why).

References:
http://www.mellanox.com/related-docs/user_manuals/1U_HW_UM_SX10XX_SX1X00.pdf
http://downloadmirror.intel.com/21642/eng/readme.txt
New 40GbE 'Fortville' can load-balance network traffic across CPU cores - FierceEnterpriseCommunications
http://www.arista.com/assets/data/pdf/40G_FAQ.pdf
http://www.intel.com/content/dam/ww...tasheets/xl710-10-40-controller-datasheet.pdf
 

danielmayer

New Member
May 5, 2015
3
1
3
49
I support the request: the Intel specs are kind of confusing. I contacted Supermicro via FB to get an answer and they said: Sure, 8x10Gbit!
May have been the marketing team, though...
At least the manual and datasheet of the AOC-S40G-i2Q state compatibility with "Fiber-optic cables (with required optional transceivers)"; Intel seems to be more restrictive.
 


danielmayer

New Member
May 5, 2015
3
1
3
49
So now I've got a definitive "answer" after looking again at recent drivers:
Intel supports only 4 connections, even on the 2x40Gb card.
With the current QSFP configuration utility (v20), depending on whether it's a single- or dual-port card, it's possible to set:
• 1x40 to enable a single QSFP+ port in 40 Gb/s mode.
• 2x40 to enable dual QSFP+ ports in 40 Gb/s mode.
• 4x10 for using a single QSFP+ port and breakout cable (connection to four 10 Gb/s SFP+ link partners).
• 2x2x10 for using dual QSFP+ ports with breakout cables (connection to two 10 Gb/s SFP+ link partners for each QSFP+ port).

So no 8x10Gbit so far.
 


Lance Joseph

Member
Oct 5, 2014
82
40
18
You guys are right on time. I've been curious about this and I'd just started setting up a test bed at work today.
One Mellanox ConnectX-3 dual 40G card connected by a breakout cable to a couple of systems with Intel X540-DA2 cards.
My hope was to create an Interface Group in pfSense and to do some testing between the 10G cards with iperf.
I'm particularly interested in finding out how the processor behaves when doing NIC performance tests.
I may either switch from a Mellanox to a Solarflare SFN7142Q or order a sample of the XL710 cards.
I'll follow up in this thread next week when I get a chance to resume testing.
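For whoever ends up running the tests, a rough Python wrapper along these lines (a sketch only; it assumes iperf3 on both ends and psutil installed via pip, and the server address is a placeholder) will capture throughput and per-core CPU load in one shot:

```python
import json
import subprocess
import psutil  # pip install psutil

SERVER = "192.0.2.10"   # placeholder: iperf3 server on the far side of the breakout
DURATION = 30           # seconds per run
STREAMS = 4             # parallel TCP streams

def run_iperf3(server: str) -> float:
    """Run an iperf3 client and return receiver-side throughput in Gb/s."""
    out = subprocess.run(
        ["iperf3", "-c", server, "-t", str(DURATION), "-P", str(STREAMS), "--json"],
        capture_output=True, text=True, check=True,
    ).stdout
    result = json.loads(out)
    return result["end"]["sum_received"]["bits_per_second"] / 1e9

if __name__ == "__main__":
    psutil.cpu_percent(percpu=True)             # prime the counters
    gbps = run_iperf3(SERVER)
    per_core = psutil.cpu_percent(percpu=True)  # average load over the run
    print(f"throughput: {gbps:.1f} Gb/s")
    print("per-core CPU %:", per_core)
```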
 

neo

Well-Known Member
Mar 18, 2015
672
363
63
I'd imagine this is the best-in-class network card right now.
I know, I know, late to the party, but the Chelsio T5 series of cards are the "best in class" in my opinion. Along with ubiquitous driver support, being an OEM choice for HPC routing platforms, and a built-in packet switch, you are able to get the starter model "T580-SO-CR" for around $400. They have several specific T5 variants, with one even aimed at low latency.
 
  • Like
Reactions: WeekendWarrior