Homebrew 10GbE switch

Chuckleb

Moderator
Mar 5, 2013
1,017
331
83
Minnesota
I have four of the Gnodals; they are a bit loud for home use, though that is relative...

Yeah, the high-end cards would be nice but they're out of budget for this build. This discussion is interesting though, since I didn't even know the Hot Lava cards existed and a single card could solve many problems.
 

vikingboy

New Member
Jun 17, 2014
29
6
3
I have a small need for connecting one workstation to a fileserver over 10GbE to speed up some video production work I do at home. I tried adding an Intel X520 to my Intel C2758 Rangeley board running pfSense 2.2. Bridging the two NICs, I managed to see close to wire speed and could max out my fileserver's disks at circa 500MB/s.
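For anyone wanting to try the same thing outside the pfSense GUI, the underlying FreeBSD bridge can be sketched roughly like this; the interface names ix0/ix1 are assumptions for the two X520 ports, not taken from the post:

```shell
# /etc/rc.conf — minimal if_bridge sketch; ix0/ix1 are hypothetical
# names for the two 10GbE ports being bridged
cloned_interfaces="bridge0"
ifconfig_ix0="up"
ifconfig_ix1="up"
ifconfig_bridge0="addm ix0 addm ix1 up"
```

pfSense exposes the same thing through its bridge-assignment pages, so this is only a peek at what it configures underneath.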

This is my FreeNAS box flat out... [screenshot]

and this is with some iperf testing... [screenshot]
The above showed one core at 60% utilisation. Actually routing packets, where they head across the PCI bus, sees a drop in performance to around 1.5Gbps... I figured you can either go fast, cheap, or home. Hope this is helpful.
Edit to add: I recall that under pfSense 2.1.5 I maxed out at around 6Gbps.
Another edit: the reason for the outbound traffic being 9.1Gbps vs 9.9Gbps is that the NIC is in a Thunderbolt enclosure with a monitor daisy-chained off the back, so some bandwidth is allocated for that.
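For reference, the kind of iperf run shown above can be reproduced with something like the following; the address is a placeholder, not from the post:

```shell
# On the fileserver (hypothetical address 10.0.0.1):
iperf -s

# On the workstation, pushing traffic through the bridge; -P 4 runs
# four parallel streams, which usually gets closer to wire speed
# than a single stream, -t 30 runs for 30 seconds:
iperf -c 10.0.0.1 -P 4 -t 30
```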
 
Last edited:

JustinH

Active Member
Jan 21, 2015
124
76
28
47
Singapore
Might be interesting to try out Open vSwitch. It should have less overhead than something like pfSense or standard Linux bridges (no routing/firewall overhead), and it supports VLANs etc.

This post goes into some technical details: Accelerating Open vSwitch to “Ludicrous Speed” | Network Heresy

There's also an effort from Intel called DPDK that moves packet processing out of the kernel into userspace for much higher throughput, and it's already supported on some Intel cards.

I wish I had some hardware to try this out; I might go searching on eBay for something suitable.
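A minimal Open vSwitch setup for the two-port bridging case might look like this; the bridge and interface names are assumptions, and this assumes the openvswitch package and kernel module are already installed:

```shell
# Create a bridge and add the two 10GbE interfaces to it
ovs-vsctl add-br br0
ovs-vsctl add-port br0 eth0
ovs-vsctl add-port br0 eth1

# Optional: tag a port into a VLAN, one of the features plain
# Linux bridging makes more awkward
ovs-vsctl set port eth1 tag=100
```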
 

jtreble

Member
Apr 16, 2013
93
10
8
Ottawa, Canada
Can you use breakout cables to get 4-to-1? I bought cables (and still have them) but was not able to get Mellanox cards to break them out. They are designed for the switch side, to break out a 40Gb port into 4x 10Gb. Basically, I don't think you get 4 Ethernet MAC addresses.
A quick update from Intel. Chuckleb is correct: you can't do a 4x10GbE breakout on either of the XL710 QSFP cards and get 4 Ethernet interfaces. Here's their response:

".... breakout cable will give you flexibility to connect XL710 QSFP+ to SFP+ switch with downgrade speed of 10Gbps. in the operating system, you should still see 1 Ethernet adapter if you would go with DA1 and 2 Ethernet adapter with DA2."
 
Last edited:
  • Like
Reactions: capn_pineapple

Entz

Active Member
Apr 25, 2013
269
62
28
Canada Eh?
I have a small need for connecting one workstation to a fileserver over 10GbE to speed up some video production work I do at home. I tried adding an Intel X520 to my Intel C2758 Rangeley board running pfSense 2.2. Bridging the two NICs, I managed to see close to wire speed and could max out my fileserver's disks at circa 500MB/s.

This is my FreeNAS box flat out.....
...

The above showed one core at 60% utilisation. Actually routing packets, where they head across the PCI bus, sees a drop in performance to around 1.5Gbps... I figured you can either go fast, cheap, or home. Hope this is helpful.
Edit to add: I recall that under pfSense 2.1.5 I maxed out at around 6Gbps.
Very impressive for such a low-power processor. For comparison's sake, I am seeing around 25% utilization on an E3-1230 v1 in Linux. It's really a shame that the Avoton/Rangeley platform is reduced to 15 PCIe lanes (16, less one for the BMC); it would make for an impressive little 4-port switching platform if it had two x8s.
 

vikingboy

New Member
Jun 17, 2014
29
6
3
Very impressive for such a low-power processor. For comparison's sake, I am seeing around 25% utilization on an E3-1230 v1 in Linux. It's really a shame that the Avoton/Rangeley platform is reduced to 15 PCIe lanes (16, less one for the BMC); it would make for an impressive little 4-port switching platform if it had two x8s.
Completely agree, two x8 slots would be really useful. The A1SRM board has an x4 slot, which I use for an additional i350 quad-port card, and an x8 slot for the X520. It has the benefit of using 'proper' RAM too. Everyone seems to miss it and go for the smaller A1SRi for some reason.
 

Rain

Active Member
May 13, 2013
268
105
43
Before I threw the ConnectX-2 cards I purchased into my server equipment in a direct-attach fashion, I experimented with bridging as well. I noticed significantly more latency than I think a "real" 10GbE switch would add between the connections. I honestly thought it would be better than it was. I didn't take a lot of time to tweak it, though; it did run at full 10GbE speeds just fine!

I don't have a 10GbE switch to test with, but a comparison between bridge latency and the added latency of a known-good switch would be interesting!
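If anyone wants to quantify that, one rough way to compare is a long ping run over each path between the same two hosts; the address is a placeholder, and the short interval generally requires root:

```shell
# Through the software bridge (hypothetical peer 10.0.0.2):
ping -c 1000 -i 0.01 -q 10.0.0.2

# Then recable the same two hosts through the hardware switch,
# repeat the run, and compare the min/avg/max/stddev summary
# lines of the two runs
```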
 
Last edited:

something_easy

New Member
Feb 25, 2014
7
4
3
Funny that you mention this; I have spent the last week looking into this, as I'm on the cusp of buying an XL710 to go with a spare i350 to build a virtual switch/firewall/router. Apparently things are about to get a whole lot better now that DPDK is taking off. The pfSense guys are also supposedly working on Intel QAT support to add another layer of acceleration so that firewalling can occur at 10GbE. And then Intel has introduced CAT (Cache Allocation Technology), which is supposed to kill latency issues. So if you have a v3 CPU you could have the PCIe lanes for two cards, plus QAT+CAT, for a pretty quick device.

Also throwing in some of the latest good links on where VMware/Xen are at regarding DPDK for 10/40GbE:
DPDK Summit - 08 Sept 2014 - VMware and Intel - Using DPDK In A Virtu…
DPDK Summit - 08 Sept 2014 - Intel - Networking Workloads on Intel Ar…
XPDS14 - Xen as High-Performance NFV Platform - Jun Nakajima, Intel
XPDS14: Network Throughput Improvements in XenServer - Zoltan Kiss, C…
Intel® ONP for Servers | 01.org
 
  • Like
Reactions: Chuckleb

something_easy

New Member
Feb 25, 2014
7
4
3
Also, for anyone curious, this is Intel's document detailing which processors are CAT & QAT capable; they are the grey ones on page 4. The E5-2618L v3 seems to strike the best balance of clock speed, price, and cores.

http://www.intel.com/content/dam/ww...2600-v3-communications-chipset-89xx-brief.pdf

The only problem I see is figuring out which boards come with the QAT chipset integrated, and then which version. The Rangeley board listed above doesn't seem to have any info on which QAT chip it's using, if someone wants to take the initiative to find out ;). Finally, if you are looking through ARK for these features, don't bother; for whatever reason they aren't deemed important enough to list separately (as far as I could see) :mad:.
 
  • Like
Reactions: NeverDie

Lance Joseph

Member
Oct 5, 2014
81
34
18
So I've seen this discussion a few times now and I've thought about it myself. Let's ask the questions and see if we can get some testing/thoughts.
...
Don't know if I have time to build this to test, but would love to get the discussion started.
Thanks for starting the discussion, Chuckleb!
I've got a test bench in the office that I've started putting together. My host has an E5-1650, 16GB of RAM, and a ConnectX-3 card with dual 40GbE QSFP+ ports.
The plan is to install pfSense 2.2 and connect the Mellanox card via a QSFP-to-4xSFP+ breakout cable to a pair of systems, each with an Intel X540-DA2 card.
It's not clear to me whether this connection option will work. If it does, I'll run speed tests between the clients with iperf.
As a backup plan, I'll just swap the ConnectX-3 for another X540-DA2 in the host and run the same tests between the clients.
Cheers
Lance

Links to other similar threads/posts that tie into this discussion:
Stupid 10Gbe question
Intel XL710-QDA2
 

Fairlight

New Member
Oct 9, 2013
21
3
3
Hi Lance, I am interested to know how your testing goes, so please keep us up to date if you have time! :)

Thanks
 

Lance Joseph

Member
Oct 5, 2014
81
34
18
Hi Lance, I am interested to know how your testing goes, so please keep us up to date if you have time! :)

Thanks
The Mellanox ConnectX-3 is out of the question (as others have already found out). The fan-out cable works, but only on the first SFP+ connector. So in essence it behaves like a QSA (40GbE-to-10GbE) adapter.

I've got my Solarflare (SFN7142Q) NICs tied up in other tests right now, but may get to these tests sometime in the next week.

BTW, there's an Intel XL710 (single port) on eBay for $350 with shipping, if anyone's interested.
 

Lance Joseph

Member
Oct 5, 2014
81
34
18
I've been trying to get my hands on an Intel XL710 with QSFP+ ports, either single- or dual-port.
Stock seems limited or unavailable everywhere I've looked.

FWIW, I moved on from QSFP+ fan-out cables and just went with SFP+ direct-attach copper.
I've had a little homebrew 10G switch running pfSense for over a month now and it's working well.
I'm running a pair of Intel X520-DA2 adapters and have lit up all four 10GbE ports.
It's interesting to watch interrupt load max out the processor during four-way benchmarks.
This setup works well for my needs, but I wouldn't necessarily recommend it.
Setting up pfSense wasn't exactly a walk in the park either...
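For anyone reproducing this, some of the buffer pressure under four-way load can be eased with the usual FreeBSD 10GbE tunables. These are generic starting points I'd expect to need adjusting per setup, not values from the post or benchmarked recommendations:

```shell
# /etc/sysctl.conf — generic FreeBSD 10GbE tuning sketch
kern.ipc.nmbclusters=262144        # more mbuf clusters for four busy ports
net.inet.tcp.sendbuf_max=16777216  # allow TCP send buffers to grow to 16MB
net.inet.tcp.recvbuf_max=16777216  # same for receive buffers
```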
 

jtreble

Member
Apr 16, 2013
93
10
8
Ottawa, Canada
I've been trying to get my hands on an Intel XL710 with QSFP+ ports, either single- or dual-port.
Stock seems limited or unavailable everywhere I've looked.

FWIW, I moved on from QSFP+ fan-out cables and just went with SFP+ direct-attach copper.
I've had a little homebrew 10G switch running pfSense for over a month now and it's working well.
I'm running a pair of Intel X520-DA2 adapters and have lit up all four 10GbE ports.
It's interesting to watch interrupt load max out the processor during four-way benchmarks.
This setup works well for my needs, but I wouldn't necessarily recommend it.
Setting up pfSense wasn't exactly a walk in the park either...
Lance,

Sorry, you lost me. Why are you looking at the XL710 series and not the X710 series?