For those who want to build their own 10Gb switch


talsit

Member
Aug 8, 2013
That was a great video. The first two videos in the series are good too. They made a one-stop reference for a lot of the information I've seen here concerning 10G cards, fiber, SFP+, etc.
 

NashBrydges

Member
Apr 30, 2015
That's a pretty cool video, but the only use case I can think of where this may be beneficial is if you need mixed 10G fiber and 10G copper on the same switch. Otherwise the hardware costs alone would be way more than I can buy a used Quanta switch for, not to mention this build would most likely be much more power-hungry.
 
Aug 17, 2016
+1 on power consumption.
My Ubiquiti switch runs at about 15W with half its ports in use at 10GbE; you will never achieve that with a comparable x86 setup.
There are many x86 switching operating systems out there, but they all share the same drawback:
on x86 you are burning CPU cycles just to transfer data.
On a decent 10GbE switch you can max out all ports with almost zero switch-CPU use in a simple data-transfer scenario.
The price of 10GbE equipment is dropping quite rapidly in the market. I expect to see some more competitive offerings from the smaller players next year.
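If you want to see that CPU cost for yourself on an x86 software switch, one rough way is to watch per-core softirq load while pushing traffic through the box. This is only a sketch and assumes `iperf3` and `sysstat` are installed; `10.0.0.2` and the host roles are placeholders for your own setup.

```bash
# On a host on one side of the software switch: push TCP traffic through
# the box toward a host on the other side for 60 seconds.
iperf3 -c 10.0.0.2 -t 60 -P 4

# Meanwhile, on the software switch itself: watch per-CPU utilisation.
# The %irq and %soft columns are cycles spent just moving packets, work
# that a hardware switch would hand to its switching ASIC instead.
mpstat -P ALL 1
```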
 

Hank C

Active Member
Jun 16, 2014
Yeah... that's what I thought too. Using x86 for switching will tax the CPU, so you'd need a really good one for it, right?
 

nj47

New Member
Jan 2, 2016
Here's where I think this could be useful:

I've got a ZFS storage server, 2 Proxmox nodes, a pfSense firewall, and a desktop that I would love to all be on 10GbE. 4-port 10GbE switches are now reasonably priced, but in this setup which node would you keep on 1GbE? I'm not aware of any 10GbE switches with 5+ ports that are reasonably priced AND power efficient (the LB6M idles at 120W!!)

However, after watching the video, I got the idea that I can put VyOS in a VM with 2 x dual-port 10GbE NICs attached via PCI passthrough. The fifth 10GbE port is just a Linux bridge without any hardware attached; this bridge is what all the other VMs on that server connect to. Now I can get everything running on 10GbE today, and I already have all the hardware I need!
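For anyone picturing that layout: VyOS is Debian-based and its bridge interfaces are ordinary Linux bridges under the hood, so the five-port "switch" boils down to something like the sketch below. The interface names are made up for illustration; the passed-through ports will carry whatever names your VM assigns them.

```bash
# Create the software "switch": one Linux bridge joining the four
# passed-through 10GbE ports plus a virtual port for the local VMs.
ip link add br0 type bridge

ip link set eth1 master br0   # dual-port NIC #1, port A (PCI passthrough)
ip link set eth2 master br0   # dual-port NIC #1, port B
ip link set eth3 master br0   # dual-port NIC #2, port A
ip link set eth4 master br0   # dual-port NIC #2, port B

# vnet0 stands in for the host-side tap of another VM; additional guests
# attach to br0 the same way and become the "fifth port".
ip link set vnet0 master br0

# Bring everything up.
for dev in eth1 eth2 eth3 eth4 vnet0 br0; do ip link set "$dev" up; done
```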

The downside to this is the same as running pfSense in a VM instead of on dedicated hardware: you can't touch the underlying hardware without taking your network down with it. I did that once and pretty quickly bought another server to be a dedicated pfSense box, so I hesitate a little for that reason. Though I _think_ I should be able to add the 10GbE on top of the existing 1GbE connections and set up active/passive bonding of the two interfaces, I've never actually used that before, so I will need to experiment.
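If it helps, that active/passive idea maps to Linux "active-backup" bonding (the same thing Proxmox exposes as a Linux Bond). A minimal sketch with iproute2, where `eth10g` and `eth1g` are placeholder names for the two uplinks:

```bash
# Active-backup bond: only one link carries traffic at a time, and the
# bond fails over if the active link drops (miimon = link-check interval, ms).
ip link add bond0 type bond mode active-backup miimon 100

# Slaves must be down before they can be enslaved.
ip link set eth10g down
ip link set eth1g down
ip link set eth10g master bond0
ip link set eth1g master bond0

# Prefer the 10GbE link whenever it is available.
echo eth10g > /sys/class/net/bond0/bonding/primary

ip link set bond0 up
ip addr add 192.168.1.10/24 dev bond0   # example address only
```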

So while building out a 16-port x86 software switch isn't the best idea, I think there are some cases where, when weighing all the tradeoffs, a 5-10 port software switch could beat out the "better" hardware switch.
 

MiniKnight

Well-Known Member
Mar 30, 2012
@nj47 why not just use pfSense with all the NICs?

IIRC aren't there 4-port 10Gb NICs that have switches on the NIC? Maybe that's the answer.

This was a cool idea years ago, but these days 10Gb gear is dirt cheap. 40Gb is going to be next since everyone's moving to 25/50/100.
 

Hank C

Active Member
Jun 16, 2014
I was looking on eBay for 40GbE switches and saw several models with 4 or 6 QSFP ports plus 48 SFP+ ports for about $2,000-4,000, so building your own switch might not be so viable after all if you need that many ports. With a normal server you won't be able to achieve that at that price or with that number of PCIe slots.
 

nj47

New Member
Jan 2, 2016
> @nj47 why not just use pfSense with all the NICs?
>
> IIRC aren't there 4-port 10Gb NICs that have switches on the NIC? Maybe that's the answer.
>
> This was a cool idea years ago, but these days 10Gb gear is dirt cheap. 40Gb is going to be next since everyone's moving to 25/50/100.
It's a small Mini-ITX board that already has a dual-port 10GbE NIC in its only PCIe slot.

For what used 4-port 10GbE NICs are going for on eBay, I'd just buy a 24-port 10GbE switch and still come out ahead. (I suspect they're too uncommon to have fallen in price proportionally to the rest of the 10GbE equipment.)

> 10Gb gear is dirt cheap

Relatively speaking... it's still $500-800 for a switch with more than 4 SFP+ ports (besides the LB6M, but it costs something like $20/month to run that even with no traffic flowing).

> 40Gb is going to be next

Which is _why_ I think this is a good idea now - I can get the full 10GbE network I want for $0 while waiting for 40GbE switches to fall in price.
 

epicurean

Active Member
Sep 29, 2014
Would it be possible to build a switch with a combination of 40Gb QSFP cards, 10Gb SFP+ cards, and 10GBase-T cards, and have them all do 10Gb seamlessly as a network switch connecting mostly ESXi servers?
 

Patrick

Administrator
Staff member
Dec 21, 2010
@epicurean at 40GbE, with traffic crossing between cards, you are not going to like the results.

Let's put it this way... I spent a lot of money on this project and ended up scrapping it and not publishing because the performance was that sub-par. It also became too expensive once you started adding many ports with newer cards that had decent offload.
 

PigLover

Moderator
Jan 26, 2011
@nj47, it would be dirt cheap. And it would perform like it...

There are some fundamental problems you have to overcome. You can get all the features and capabilities you want with VyOS, but it's running in Linux user space, which means you will have significant latency getting packets off the NIC and into the software switch. For a 1GbE switch this can be troublesome, but for a 10GbE switch it is a disaster. Even a few ms of latency at 10GbE can be larger than the transport time of a single 9000-byte MTU packet, meaning you cut your speed by at least half right out of the gate. Latency interactions with protocols like TCP will make it even worse.

The whole thing gets more complicated if you have more than one link actively transmitting. With VyOS in a VM, even with NIC passthrough, you lose features like RSS that let you spread network traffic across multiple cores efficiently.

I'd wager that you won't get more than 2-3Gbps throughput per NIC on such a switch, and worse, much worse, if your workload is skewed towards small packets. BTW, I tried this a couple of years back and proved to myself that what is viable at 1GbE just doesn't work at 10.

It just makes no sense with 4+48-port options running around $400 (X1052, others), high-power 200W 24-port options (LB6M) regularly hitting $250-400, and brand new, low-power, 16-port options around $600 (Ubiquiti, albeit with crappy build quality, but still better than the Frankenswitch approach).
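For anyone who experiments with this anyway, it's worth checking up front how many receive queues each NIC exposes and whether its interrupts are actually spread across CPUs. A rough look with standard Linux tools, where `eth1` is a placeholder for one of the 10GbE ports:

```bash
# How many RSS/combined queues does the NIC support, and how many are in use?
ethtool -l eth1

# Spread receive processing across, say, 4 queues (and therefore up to 4 cores).
ethtool -L eth1 combined 4

# Confirm the NIC's interrupts are landing on different CPUs.
grep eth1 /proc/interrupts
```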
 

wildchild

Active Member
Feb 4, 2014
> @nj47, it would be dirt cheap. And it would perform like it...
Just got my Ubiquiti on Saturday; the build quality on mine is excellent, however, and it performs well too.

As for Ubiquiti in general, I have their AC Pro APs, and their firmware finally seems to be improving too, even though it's been rather terrible over the past year.

The Frankenswitch would be OK if you have most everything lying around; if you have to purchase everything, you'd be better off just getting a new one.
 

fractal

Active Member
Jun 7, 2016
> The Frankenswitch would be OK if you have most everything lying around; if you have to purchase everything, you'd be better off just getting a new one.
I tried a Frankenswitch and it worked OK. I stuck a couple of $25 10GbE NICs in a pfSense box and it "just worked". It ran at near wire speed between a couple of machines across the router. I posted my notes on this forum earlier this year.

The way I see it, a Frankenswitch can probably hold you over until you can afford a modern switch, as long as you have a motherboard with enough PCIe slots. I wouldn't do it with high-priced multi-channel cards or even modern cards. But four ports for less money and power consumption than an LB6M... if you can get away with that few ports. Much above that and you probably want to wait for prices to come down, or bite the bullet and tolerate the noise and power of a cheap 10G switch.
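If anyone wants to reproduce that kind of check, a simple way to confirm you're actually near wire speed through the box is an iperf3 run in both directions between machines on two different ports (addresses are placeholders):

```bash
# On the machine behind port 1 of the Frankenswitch:
iperf3 -s

# On a machine behind another port: forward and reverse tests with a few
# parallel streams. Roughly 9.4 Gbit/s is the practical ceiling for 10GbE TCP.
iperf3 -c 10.0.0.1 -t 30 -P 4
iperf3 -c 10.0.0.1 -t 30 -P 4 -R
```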
 

Jon Massey

Active Member
Nov 11, 2015
I got a little suspicious when he said that he managed to get full 10GbE rates when switching between the cards, but the screen recording had failed.
 

ryand

New Member
Aug 30, 2021
I am the ghost from the 100-gig future, in 2021! I am reviving this thread to add my use case to this interesting Frankenswitch.

Right now there are a lot of 10-gig switches that are pretty cheap, but not so much for 2.5 and 5.0 NBASE-T, aka multi-gigabit copper Ethernet; you're looking at $500+ for a managed multi-gigabit switch. I need to add 4 hosts that have 2.5GBase-T Ethernet cards to my VLAN'd network. Two hosts are ESXi hosts running on 2012-era Mac Minis, and 1000Mbps Ethernet isn't cutting it. I have one Mac Mini that gets a Thunderbolt-to-PCIe adapter so I can plug a 10-gig fiber card into it, but I also have 2x Minis that can only take USB NICs. So I am going to put 2x 2.5-gig USB Ethernet NICs on each Mini and then bond them together, giving me 5Gbps total. The USB NICs cost $35 each, so for $70 I get 5Gbps copper Ethernet per node versus $130 for a single 5Gbps card or $200+ for a Thunderbolt to 10GBase-SR card.

The Frankenswitch is also great for me in this case because I have an office that I have run 10-gig fiber to, but their hardware is too old to support 10-gig fiber cards; kind of the same problem as with the Mac Minis I mentioned above. The office needs to put a mesh WiFi puck onto the Ethernet LAN and provide 1.5-gig internet to the user in that office. The Frankenswitch is perfect because I can run a 10-gig fiber line into the office and bridge it over to a 2.5-gig card, as well as a 1000 card, and still have another 10-gig fiber port or two. Too bad VyOS is so cryptic and lacking a web interface.

I'm curious how VyOS performs in terms of wire speed. Can I expect 10-gig wire speed if my Frankenswitch is running on an Intel Core 2 Duo, 4-core, 2.8GHz CPU with 4 or 8 gigs of RAM for the VyOS boot? What hardware would I need, at minimum, to get wire speed? And finally, am I correct in my understanding that since this is just bridging, which means blindly forwarding packets with no routing needed, it should run much closer to wire speed than if I were doing L3 routing, because routing means each packet needs to be inspected before it's routed?