Using spare parts to make a 10Gb switch


ziggygt

Member
Jul 23, 2019
Problem:
I have a work area far from my 10Gb switch. I have a single 10Gb LC fiber cable strung there, and I wanted to temporarily work on a few 10Gb clients in that location.

Possible Solutions:
1) Buy one of the cheap 10Gb unmanaged switches that are now coming out
2) Build a switch with parts lying around
I have 4 Solarflare dual-port 10Gb cards and an HP Z400 motherboard with 4 PCIe slots lying around, so I thought I would build a switch with them.
The end solution draws 130 watts. Most of the power is the Z400 with an X5650 CPU, 16GB of memory, and a 128GB SSD; no other drives.
The motherboard alone draws 110 watts. I used it because I had it and it had the desired number of PCIe slots.

There are probably simpler solutions, but I used TrueNAS and created a bridge through the web interface.
Steps:
  1. Set the 1Gb interface up as a management port at 192.168.1.X/24 (where X is the host address you want; the first part of the address is arbitrary, so use whatever is compatible with your network)
  2. Go to another machine on the same 192.168.1.0/24 subnet and browse to 192.168.1.X
  3. Log in and, using the web interface, create a bridge containing all the 10Gb ports
  4. Add the command "up" to the bridge's interface options
  5. Set the IP address of the bridge to 192.168.0.X/24, since no two interfaces can be on the same subnet
  6. This will initiate a test of the config. Acknowledge that and save the configuration
  7. Go to each 10Gb port listed in the Interfaces section and add the "up" command - I used 3 dual-port cards, so this is a 6-port switch
  8. This will again initiate a test of the config. Acknowledge that and save the configuration
That's it.
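
For reference, here is roughly what the TrueNAS web UI ends up doing under the hood. This is only a minimal sketch, assuming TrueNAS CORE (FreeBSD) and plain ifconfig, wrapped in a small Python script purely for illustration; the interface names (sfxge0...) and the bridge address are placeholders for whatever your cards and network actually use, and on a real TrueNAS box you would normally let the middleware manage this rather than scripting it yourself.

# Sketch only: roughly the FreeBSD commands behind the web-UI steps above.
# Interface names and the bridge IP are placeholders; substitute the names
# your 10Gb ports actually get (see "ifconfig -l").
import subprocess

TEN_GB_PORTS = ["sfxge0", "sfxge1", "sfxge2", "sfxge3", "sfxge4", "sfxge5"]

def ifconfig(*args):
    """Run an ifconfig command and fail loudly if it errors."""
    subprocess.run(["ifconfig", *args], check=True)

ifconfig("bridge0", "create")                   # create the bridge interface
for port in TEN_GB_PORTS:
    ifconfig(port, "up")                        # bring each 10Gb port up (steps 4/7)
    ifconfig("bridge0", "addm", port)           # add the port as a bridge member
ifconfig("bridge0", "inet", "192.168.0.10/24")  # step 5: give the bridge its address
ifconfig("bridge0", "up")                       # and bring the bridge up
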
Initial performance tests show peak performance similar to a real switch, but with more variability.

Resolution:
Two problems were encountered. One of my Solarflare cards is truly dead and was dragging down the power supply all by itself; it's off to the junk heap. The 650-watt supply I was trying to use was also just plain bad and could not maintain the load, so I used the 550-watt supply instead, and it appears to be fine.

I hope this helps anyone else who wants to create a switch. Why TrueNAS? I am already familiar with it, I might put a lightweight task on it in a jail, and I want to experiment with TrueCommand for the other servers I have, so it might be useful to see this machine there.

blunden

Active Member
Nov 29, 2019
Software-based switching will always draw significantly more power and perform worse. There is a reason switches use switch chips and not general-purpose CPUs for network switching. :)

Unless power is essentially free where you live, software switching is simply a bad idea.
 

ziggygt

Member
Jul 23, 2019
Software-based switching will always draw significantly more power and perform worse. There is a reason switches use switch chips and not general-purpose CPUs for network switching. :)

Unless power is essentially free where you live, software switching is simply a bad idea.
I agree with you that this is not a cost-effective general solution. I am using it only when I am working in that area, so it will not be powered on 24/7, and I will be done using that area in a year. Here are the choices I see.
  1. The Z400-based unit at 130 watts and 17 cents/kWh is about $197/year run 24/7, and around $20/year at 10% usage (a quick worked calculation is sketched below). I picked this because it is what I had, and few consumer boards have that many free PCIe slots. (Medium noise.) The motherboard draws most of the power, so it is hard to see the additional load when adding the 10Gb cards. The CPU does not seem too busy when using iperf to test performance. (Still evaluating.)
  2. One of the new 8-port 10Gb unmanaged switches, e.g. the TP-Link TL-SX3008F 8 Port 10G SFP+, draws about 13 watts, so about $19/year. (No noise.)
  3. An example of a used switch, the HP/Aruba S2500-24P-4X10G, draws 45-95 watts, so at best $65/year. (Noisy; it is my main switch.)
With the price of the new 10Gb switches so low, it seems most surplus switches are e-waste. This software switch is effective for my intermittent, temporary needs, but it is not a general-purpose solution. Noise was one of the big decision points here, so I decided to put it in a separate chassis rather than just add it to my backup server.
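
For anyone checking the arithmetic above, annual cost is just watts × hours × rate. Here is a minimal sketch using the wattages, duty cycle, and 17 cents/kWh rate quoted in this thread; small differences from the figures above come down to rounding.

# Quick sanity check of the annual power-cost figures quoted above.
# The wattages, duty cycle, and $0.17/kWh rate are the thread's numbers.
RATE_PER_KWH = 0.17          # dollars per kWh
HOURS_PER_YEAR = 24 * 365

def annual_cost(watts, duty_cycle=1.0):
    """Annual electricity cost in dollars for a device drawing `watts`."""
    kwh = watts / 1000 * HOURS_PER_YEAR * duty_cycle
    return kwh * RATE_PER_KWH

print(annual_cost(130))        # Z400 software switch, 24/7 -> ~$194/year
print(annual_cost(130, 0.1))   # same box at ~10% usage     -> ~$19/year
print(annual_cost(13))         # TP-Link TL-SX3008F         -> ~$19/year
print(annual_cost(45))         # used S2500, best case      -> ~$67/year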
 

Michal_MTTJ

New Member
Apr 12, 2024
About 5-6 years ago I built a 100GbE "software" switch on a Z420 with an E5-1620 v2 and ConnectX-4 card(s).

Yes, it worked. Back then we were not able to saturate the connection to 100%, but around 9-10.5 GB/s was possible (it should be 11).
Also, in the same era, a ConnectX-2 SFP+ card was inside our HP 7800 with a Core 2 Quad Q9550 (our old firewall), and ~10GbE, or almost, was possible there too. I don't remember if it was PCIe 1.0 or 2.0.

But... a Celestica DX010 100G for $300-400!!! Just running 10G on a 100G switch (plus a $10-20 adapter for SFP+) is the simplest way.

If it's just for sport, yes, it should be fast enough (is the X5650 PCIe 2.0 or 3.0?), but the latency will be there compared to Cisco or Mellanox.
For 10GbE only, smaller computers with a Celeron are OK too, even with PCIe via the chipset. Of course, with the Celestica DX010 ;) there is no sense in building them.
 

blunden

Active Member
Nov 29, 2019
I agree with you that this is not a cost-effective general solution. I am using it only when I am working in that area, so it will not be powered on 24/7, and I will be done using that area in a year.
Oh, I see. As a temporary deployment from time to time, that could make sense in situations where one owns the hardware already. :)

Unfortunately I don't have any particular suggestion.
 

ziggygt

Member
Jul 23, 2019
A Celestica DX010 100G for $300-400!!! Just running 10G on a 100G switch (plus a $10-20 adapter for SFP+) is the simplest way.

If it's just for sport, yes, it should be fast enough (is the X5650 PCIe 2.0 or 3.0?), but the latency will be there compared to Cisco or Mellanox.
For 10GbE only, smaller computers with a Celeron are OK too, even with PCIe via the chipset. Of course, with the Celestica DX010 ;) there is no sense in building them.
Yes, it took much more effort to get it assembled than I thought, as some of the stuff was broken. The Celestica DX010 would be amazing at <$275 (eBay). I did not have good luck with QSFP+ adapters in the past with the 40Gb Mellanox cards I had, and after that I steered clear of the 40Gb or 100Gb dream. I have no use case for that anyway. I was seriously looking at another TP-Link TL-SX3008F 8-port 10G SFP+ or one of the new low-cost Chinese unmanaged switches like this one on Amazon. I just used what I had.
 

Michal_MTTJ

New Member
Apr 12, 2024
Yes, the Mellanox ConnectX-2 (10GbE) cards are built like a tank (better phrase: like an ordinary graphics card), and the ConnectX-4/5/6 (25/50/100GbE) are too, but the ConnectX-3 40GbE is not, and you can easily destroy its capacitors.

Wow, QSFP+ 40GbE hardware is super cheap now:

Mellanox ConnectX-3 MCX354A-FCBT CX354A VPI 40/56GbE Dual-Port QSFP Adapter | eBay