Cheap 40GbE at home - too good to be true?

Why was my 40G upgrade less expensive than my 10G upgrade less than 3 years ago?

  • It was a fluke (got lucky with my 40G choices and unlucky with 10G)

    Votes: 0 0.0%
  • It was not a fluke (QSFP+ gear is indeed less expensive than SFP+)

    Votes: 15 65.2%
  • The prices have changed in the meantime - 10G is much cheaper today

    Votes: 6 26.1%
  • Other (please explain in the thread)

    Votes: 2 8.7%

  • Total voters
    23

BeTeP

Well-Known Member
Mar 23, 2019
580
378
63
This is not a humble brag post - I am genuinely curious if I am missing something. It's been a few weeks since I upgraded my home network to 40Gbps. So far it has been running smoothly, and the upgrade process was pretty painless and significantly less expensive compared to the 1Gbps to 10Gbps upgrade I went through about 3 years ago.

My home network consists of 2 workstations and half a dozen servers in 2 adjacent rooms in the basement, plus about 30-odd "slow" devices throughout the rest of the house. All my "high speed" devices are located in close proximity to each other - so using SFP+ adapters with DAC cables was the obvious choice. My main switch progression went "naturally" from "48x 1G" to "48x 1G + 1x 10G" to "48x 1G + 2x 10G" to "48x 1G + 4x 10G" to finally "48x 1G + 8x 10G" over the course of 2+ years - which was not the greatest idea from a financial point of view. I replaced some adapters along the way as well. All in all I spent close to $1500 on 10GbE related gear (switches/adapters/cables/etc). Had I skipped the intermediate upgrades, I would still have come in just under $1000.

I was expecting the next step up to be even more expensive. And I was very surprised to find out how affordable the 40GbE hardware is. I have spent less than $400 on a 12-port 40GbE switch + 12 ConnectX-3 cards + 12 QDR cables of various lengths + 1 FDR cable (just to confirm that the whole setup is 56GbE capable). The only thing I did differently this time around was using separate switches for my "slow" and "fast" networks which are now connected via a single 10G link.

I think I am going to sell my ICX7250-48 (which, oddly, is selling for more now than when I got it) and replace it with some old 48-port PoE switch with a single 10G uplink.
 

Mishka

Active Member
Apr 30, 2017
101
34
28
London, UK
Just wondering, but which 40GbE switch did you manage to grab?

In general, stuff is cheap if you shop around - especially 3+ year old enterprise gear that is now out of warranty, so companies won't want to keep using it in case things break.
 

Deslok

Well-Known Member
Jul 15, 2015
1,122
125
63
32
deslok.dyndns.org
Seconded on which 40GbE switch you got. I've been keeping an eye on the Mikrotik CRS354-48P-4S+2Q+ (basically my ideal configuration unless it's a screamer), but sub-$400 for a 12x 40G is worth a look at least ;)
 

BeTeP

Well-Known Member
Mar 23, 2019
580
378
63
I got Mellanox MSX6012 for $120 shipped.

Since posting the poll I have done some more thinking and come up with another plausible reason for the price difference. Almost all of the 40GbE gear I got was sold as QDR InfiniBand, and I bought it knowing that it could be modified/reconfigured to run 40/56GbE instead.


Update: I went through my records to double check the price. It was listed as $140 + $20 s/h. The seller accepted my offer of $120, so I paid $140 shipped.
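For ConnectX-3 VPI cards, the IB-to-Ethernet half of that reconfiguration is just a port-type setting. A rough sketch using the stock Mellanox `mlxconfig` tool from the MFT package - the PCI address `04:00.0` is a placeholder for whatever your card enumerates as:

```shell
# Show the card's current configurable settings, including port link types
mlxconfig -d 04:00.0 query

# Force both ports to Ethernet (LINK_TYPE values: 1 = IB, 2 = ETH, 3 = VPI/auto-sense)
mlxconfig -d 04:00.0 set LINK_TYPE_P1=2 LINK_TYPE_P2=2

# A reboot (or reloading the mlx4 driver) is needed for the change to take effect
```

This is nondestructive and reversible, unlike a firmware cross-flash, so it is worth trying first on VPI-capable cards.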
 
Last edited:

i386

Well-Known Member
Mar 18, 2016
3,521
1,210
113
33
Germany
I think it has something to do with all the AI/machine learning workloads that changed the bandwidth requirements massively and led to the fast "jumps" from 40 to 100GbE and from 100 to 200/400GbE in enterprise networking (and other areas).
 

BeTeP

Well-Known Member
Mar 23, 2019
580
378
63
The 40GbE tech is old and cheap. The 10GbE tech is even older, but it has not depreciated nearly as much. That was my original point.
 

am45931472

Member
Feb 26, 2019
83
17
8
My understanding is that those InfiniBand switches are loud and consume a ton of power, even if they can be modded for Ethernet.
 

Kal G

Active Member
Oct 29, 2014
160
44
28
43
My understanding is that those InfiniBand switches are loud and consume a ton of power, even if they can be modded for Ethernet.
They are definitely loud. Not bad once they've booted though (51 dB at 3 feet).

The power draw can be surprisingly good. The SX6012 on my test bench draws 30W at idle and 44W while booting. I spun the fans all the way up and it measured 55W. Each DAC adds between 0.5 and 1.0W.

* Bear in mind these are rough numbers at idle and don't include any network traffic.
 

am45931472

Member
Feb 26, 2019
83
17
8
They are definitely loud. Not bad once they've booted though (51 dB at 3 feet).

The power draw can be surprisingly good. The SX6012 on my test bench draws 30W at idle and 44W while booting. I spun the fans all the way up and it measured 55W. Each DAC adds between 0.5 and 1.0W.

* Bear in mind these are rough numbers at idle and don't include any network traffic.
Any way to do a fan swap on these to make them quieter?

Literally 3 seconds before I posted this, someone else asked the same thing. haha
 

Kal G

Active Member
Oct 29, 2014
160
44
28
43
I wonder if it's possible to swap out the fans in the SX6012 for something quieter, maybe some small noctuas?
You could, but Noctuas don't have the static pressure to push enough air through to dissipate the heat output.
 

am45931472

Member
Feb 26, 2019
83
17
8
You could, but Noctuas don't have the static pressure to dissipate the heat output.
Yeah, I mean that is the other half of the equation - just because you can swap the fans, will that still be enough to keep it cool? All I'm asking for here is a silent 40GbE switch for my home lab. lol
 

llowrey

Active Member
Feb 26, 2018
152
129
43
What CX-3 cards did you buy? I've been buying HP 544QSFP cards reflashed to stock MCX354A-FCBT. I have not been able to keep them connected at 56Gbps reliably. My lone CX-4 card has no trouble at 56Gbps. I've shuffled around ports and cables and observed no change in behavior, so I think the ports and cables are good. I'm wondering if these HP 544QSFP cards are too old and were not designed to the 56Gbps spec.
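One quick sanity check when chasing this kind of instability is to confirm what rate the link actually negotiated and whether the driver is logging renegotiations. A sketch using standard Linux tools - the interface name `enp4s0` is a placeholder:

```shell
# Report the negotiated link speed as the driver sees it
# (a stable FDR/56GbE link should show "Speed: 56000Mb/s")
ethtool enp4s0 | grep -i speed

# The mlx4 driver usually logs link flaps and renegotiations here
dmesg | grep -i mlx4
```

Watching the dmesg output while moving traffic can help separate a card that trains at 56G but can't hold it from one that never trains above 40G at all.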
 

Deslok

Well-Known Member
Jul 15, 2015
1,122
125
63
32
deslok.dyndns.org

am45931472

Member
Feb 26, 2019
83
17
8
no word from @Patrick on the noise yet but he did get his CRS326 -24s-2q-rm in finally, if you only need 2 40gb ports it might be a valid option?
https://forums.servethehome.com/ind...ik-crs326-24s-2q-rm-thread.26135/#post-241206
Yeah. only 2x 40GB qsfp is kind of a why bother for me. Also, I would like to avoid the mikrotik. I've had their stuff before, its fine. but I dont like the cpu off load of key functions, and the really poor L3 performance. I dont really consider them enterprise class. I love my brocade 6610, however it cannot be silenced. no fan mods possible.
 

BeTeP

Well-Known Member
Mar 23, 2019
580
378
63
What CX-3 cards did you buy? I've been buying HP 544QSFP cards reflashed to stock MCX354A-FCBT.
I bought a lot of 12 Mellanox-branded MCX354A-QCBT cards without brackets for $120 shipped. I flashed them with FCBT firmware and installed brackets, which I bought separately for about $4 apiece. I just confirmed that they link at 56Gbps with an FDR cable, but I have not run any stress tests, so I can't really vouch for their stability at 56G.
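For anyone curious, that kind of cross-flash is typically done with the open-source `mstflint` tool. A rough sketch, assuming you have already downloaded the stock MCX354A-FCBT firmware image from the vendor - the PCI address and firmware filename here are placeholders, not the exact ones I used:

```shell
# Check the current firmware version and PSID before touching anything
mstflint -d 04:00.0 query

# Save a copy of the existing flash contents in case a rollback is needed
mstflint -d 04:00.0 ri qcbt-backup.bin

# Burn the FCBT image; -allow_psid_change is required because the
# QCBT and FCBT boards carry different PSIDs
mstflint -d 04:00.0 -i fw-ConnectX3-MCX354A-FCB.bin -allow_psid_change burn
```

The usual caveats apply: flashing firmware meant for a different SKU is unsupported, and a failed burn can brick the card, which is why the backup step comes first.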
 

ehorn

Active Member
Jun 21, 2012
342
52
28
The big savings is, of course, getting a 40GbE switch for a steal... all with the assist from a generous poster here who shared the secret sauce to convert a dumb EMC IB switch into a fully functioning L3 40GbE switch. :)

So it helps to be in the right place at the right time.

Would you be willing to share the conversion guide?
 

Wolfstar

Active Member
Nov 28, 2015
159
83
28
47
TLDR: You're correct, 40GbE is just cheaper.

40GbE is a dead end, technologically, and was done with "trickery" by bonding 4x 10G lanes into a single link - it wasn't really a separate speed type the way 1G vs. 10G is. But it was faster and didn't have the drawbacks of port-channels, so it got used.

Then 25GbE, which actually IS a separate, non-trickery speed type, was introduced, and the same lane-bonding trick could be applied to it to get 50GbE and 100GbE. Further, the optics were intentionally forward compatible with the planned 100GbE tech that was still in development at the time, and THAT could do the same bonding for 400GbE.

So basically 40GbE was a dead-end street, tech-wise. Once everyone saw a viable, faster path forward (that was actually not significantly more expensive, either), they dumped 40GbE hard and the market was flooded. Whereas 10GbE trickled in and continues to do so, as it has its uses for access-layer aggregation rather than datacenter-only, which is where 25/50/100/400GbE really lives.