Cheap 40GbE at home - too good to be true?

Why was my 40G upgrade less expensive than my 10G upgrade from less than 3 years ago?

  • It was a fluke (got lucky with my 40G choices and unlucky with 10G)

    Votes: 5 13.9%
  • It was not a fluke (QSFP+ gear is indeed less expensive than SFP+)

    Votes: 19 52.8%
  • The prices have changed in the meantime - 10G is much cheaper today

    Votes: 10 27.8%
  • Other (please explain in the thread)

    Votes: 2 5.6%

  • Total voters
    36

BeTeP

Well-Known Member
Mar 23, 2019
661
441
63
would you be willing to share the conversion guide?
As I just explained in the original conversion thread, I did not keep a copy of the original guide because I did not have any intention of actually following through. I just used the information to build my own custom image for easy flashing.
 

ehorn

Active Member
Jun 21, 2012
342
52
28
No worries, OP. I have no personal need for 40GbE in my home or lab, but I suspect others would love to set one up in theirs. And it's a community of sharing, which makes it a great site. Hence my reference to the switch mod post and its poster, who was also the main contributor to your low-cost implementation.

As far as fluke or not, here is my take:

Early on, 40GbE and up had its start in HPC... Mellanox was and is a major player in HPC markets and had the lead over others by far. IB was, and is, cutting edge for latency and bandwidth. It just never really made it mainstream and remains the fabric of esoteric, custom systems.

The HPC guys are also never satisfied and are always stretching performance, so we get loads of 'old tech' hitting the market with each cycle. All your gear is Mellanox. No surprise there. Try to find Intel gear, or any other manufacturer with that much second-hand gear... it's not really out there; there is a dearth of gear to choose from.

More recently, Mellanox has pivoted to Ethernet because they want to compete in that segment, and now we see competition growing in the higher-bandwidth tiers.

Best
 

herby

Active Member
Aug 18, 2013
187
54
28
TLDR: You're correct, 40GbE is just cheaper.

40GbE is a dead end, technologically, and was done with "trickery" by bonding 4x10G channels into a single link - it wasn't really a separate speed type, like 1G vs. 10G is. But, it was faster and didn't have the drawbacks of port-channels, so it got used.

Then 25GbE, which actually IS a non-trickery, separate speed type, was introduced, and was able to use the same trickery to get 50GbE and 100GbE. Further, the optics were intentionally forwards compatible with planned 100GbE tech that was still in development at the time, and THAT could do the same bonding for 400GbE.

So basically 40GbE was a dead end street, tech-wise. Once everyone saw a viable, faster forward path (that was actually not significantly more expensive, either) they dumped 40GbE hard and the market was flooded. Whereas 10GbE trickled in and continues to do so, as it has its uses for access-layer aggregation instead of datacenter-only, which is where 25/50/100/400GbE really lives.
I think this is pretty much it. I feel like it never got great traction except with people who really needed more than 10GbE during a relatively short window. After that window passed, people either kept gravitating to 10GbE if they needed just a little more than gigabit, or chased the new higher end at 100G+. 40GbE became the middle child while both the faster and slower tiers still had a place.
 
  • Like
Reactions: ColdCanuck

atemik

New Member
Sep 11, 2019
3
0
1
Totally agree, 40/56Gbps can be had very cheap now (and silent!)

My experience with this is an IS5022 (QDR, ~$80) or MSX6005 (FDR, ~$100), a bunch of ConnectX-3's (around $30 for HP branded), and cables (a 10m fiber can be found for less than $15).
The best feature of the IS5022 is shown in the attached photo: it's totally silent and cool. Ideal for a home lab. BEWARE: the fan pins are non-standard here; you'll need to make an adapter or the fan will let out some smoke!
The MSX6005 has ICs on the bottom of the board, so I'm unsure whether this kind of cooling would be enough (mine are in a datacenter, so noise is not an issue).
 

Attachments: photo of the IS5022 cooling setup referenced above

Crond

Member
Mar 25, 2019
57
14
8
Mmmm, did you guys update the firmware on the MSX6005 to convert it to an Ethernet switch?

IMHO 40GbE is not cheaper than 10GbE. The IB 40/56Gb solutions, on the other hand, are much cheaper because support for IB has been dropped by major players like VMware, so you can score a crazy IB switch with terabits per second of switching capacity and 56Gb wire speed for under $100.
A lot of OEM hardware is based on IB, and thanks to the design of the CX3 series, a lot of IB solutions can be cross-flashed to support 40GbE, as in the sketch below.
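
For the card side of that, here is a minimal sketch (assumptions: Linux with the mlx4 driver loaded, root access, and a hypothetical PCI address) of flipping a ConnectX-3 VPI port from InfiniBand to Ethernet at runtime via sysfs; for a persistent change, mlxconfig's LINK_TYPE_P1/P2 settings do the same thing in firmware.

Code:
# Sketch only: switch port 1 of a ConnectX-3 (mlx4) VPI card to Ethernet mode.
# The PCI address below is hypothetical -- find yours with `lspci | grep Mellanox`.
from pathlib import Path

PCI_ADDR = "0000:03:00.0"  # hypothetical slot; adjust to your card
port1 = Path(f"/sys/bus/pci/devices/{PCI_ADDR}/mlx4_port1")

print("current mode:", port1.read_text().strip())  # "ib", "eth" or "auto"
port1.write_text("eth")                            # request Ethernet mode at runtime
print("new mode:", port1.read_text().strip())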

While that works for network hackers and home-lab experiments, it's not a solution for small businesses or individuals who are looking for something that will work out of the box.
Finally, there are plenty of options for silent or near-silent 10GbE deployments, while 40GbE is either expensive or, for the older models, quite noisy (assuming "stick a fan on the open case" is not an option).

As a summary:
40/56Gb IB solutions, cross-flashes, and open-case designs have a much smaller market than 10GbE (due to their DIY nature, noise, and power consumption), yet they have a lot of hardware available because major vendors dropped support for IB-based fabrics in favor of RoCE.

A proper 40GbE solution is still more expensive than 10GbE.
 
  • Like
Reactions: ColdCanuck

atemik

New Member
Sep 11, 2019
3
0
1
can the IS5022 be converted to 40gbe?
AFAIK no, you'll need at least an MSX6012 with "modifications" (that's for Mellanox switches).
If you need the network only for file sharing/remote drives, then IPoIB can be OK.
If you are going into virtualisation, then IPoIB won't cut it. You can't have many "IPoIB MACs" on one card. I don't know if SR-IOV can solve this problem.
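
If anyone does experiment with SR-IOV on these cards, here is a rough sketch (assumptions: root access, SR-IOV already enabled in firmware, e.g. via mlxconfig's SRIOV_EN/NUM_OF_VFS settings, and a hypothetical interface name) of spawning virtual functions through sysfs so each VM gets its own function with its own MAC:

Code:
# Sketch only: create SR-IOV virtual functions on a Mellanox NIC via sysfs.
# "enp3s0" is a hypothetical interface name; adjust to your setup.
from pathlib import Path

IFACE = "enp3s0"
numvfs = Path(f"/sys/class/net/{IFACE}/device/sriov_numvfs")

numvfs.write_text("0")   # the kernel only accepts a new count after resetting to zero
numvfs.write_text("4")   # create 4 VFs that can be passed through to VMs
print("VFs now:", numvfs.read_text().strip())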
 

TheAcadianGamer

New Member
Jun 7, 2024
2
0
1
I got Mellanox MSX6012 for $120 shipped.

Since posting the poll I have done some more thinking and come up with another plausible reason for the price difference. Almost all 40GbE gear I got was sold as QDR Infiniband and I bought it knowing that it can be modified/reconfigured to run 40/56GbE instead.


Update: I went through my records to double check the price. It was listed as $140 + $20 s/h. The seller accepted my offer of $120, so I paid $140 shipped.

I know I’m a bit late to the thread, but I’m looking into setting up a 40GbE network for my rack and was wondering, how exactly does one get those Infiniband switches to run on Ethernet? Found a bunch of Mellanox QSFP switches on Ebay for pretty cheap, but they’re all for Infiniband, and I highly doubt my UCS mLOM QSFP cards will like that LOL. Any help would be appreciated!!
 

Stephan

Well-Known Member
Apr 21, 2017
1,032
799
113
Germany
If you give us a link or the PRECISE switch model, we could answer the question.

Also, it's never too late to upgrade to 40/56Gbps. IMHO it's still the sweet spot between power consumption/heat, performance, and very low cost. Cards will be content with PCIe 3.0 x8.
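
A quick back-of-the-envelope check of that last point (assuming PCIe 3.0's 8 GT/s per lane and 128b/130b encoding, and ignoring protocol overhead):

Code:
# Rough arithmetic: can a PCIe 3.0 x8 slot feed a 40/56 Gb/s port?
lanes = 8
per_lane_gtps = 8e9          # PCIe 3.0: 8 GT/s per lane
encoding = 128 / 130         # 128b/130b line coding
usable_bps = lanes * per_lane_gtps * encoding
print(f"PCIe 3.0 x8 ~ {usable_bps / 1e9:.1f} Gb/s")  # ~63 Gb/s, above the 56 Gb/s line rate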
 
  • Like
Reactions: Exhaust8890

Crond

Member
Mar 25, 2019
57
14
8
I know I’m a bit late to the thread, but I’m looking into setting up a 40GbE network for my rack and was wondering, how exactly does one get those Infiniband switches to run on Ethernet? Found a bunch of Mellanox QSFP switches on Ebay for pretty cheap, but they’re all for Infiniband, and I highly doubt my UCS mLOM QSFP cards will like that LOL. Any help would be appreciated!!
If the network is a bottleneck for your lab, it's hard to beat 40GbE for performance per dollar...

A couple of years ago I got inspired by this thread and bought a Dell/EMC S6100-ON to upgrade my home lab... but I never got around to using it, so if you need one, PM me.

Before you buy the switch, consider upgrading/consolidating your servers. In my case, by the time I got all the cables and Mellanox cards for the upgrade, the prices of servers and NVMe SSDs had come down, so I ended up buying 2 new servers and consolidating my entire home lab into a 2-node cluster with a direct connection between the nodes (and retiring a bunch of older Quanta D51B servers to boxes :) )
 
  • Like
Reactions: Stephan

dante4

Member
Jul 8, 2021
60
10
8
I know I’m a bit late to the thread, but I’m looking into setting up a 40GbE network for my rack and was wondering, how exactly does one get those Infiniband switches to run on Ethernet? Found a bunch of Mellanox QSFP switches on Ebay for pretty cheap, but they’re all for Infiniband, and I highly doubt my UCS mLOM QSFP cards will like that LOL. Any help would be appreciated!!
Check https://forums.servethehome.com/index.php?threads/arista-dcs-7050qx-32.11132/page-14#post-413248

Before you buy the switch, consider upgrading/consolidating your servers.
Also, regarding this: you can buy ConnectX-4 cards and go for 100G NVMe-oF between the nodes.
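
For anyone curious what that looks like in practice, here is a hedged sketch of the initiator side using nvme-cli over RDMA (the target address and NQN below are made up, and the other node would need to export a subsystem first, e.g. with nvmetcli):

Code:
# Sketch only: connect to a remote NVMe-oF (RDMA/RoCE) target via nvme-cli.
# Both the address and the NQN below are hypothetical placeholders.
import subprocess

TARGET_IP = "10.0.0.2"                   # hypothetical address of the other node
NQN = "nqn.2024-06.lab:scratch-pool"     # hypothetical subsystem NQN

subprocess.run(["modprobe", "nvme-rdma"], check=True)
subprocess.run(["nvme", "discover", "-t", "rdma", "-a", TARGET_IP, "-s", "4420"], check=True)
subprocess.run(["nvme", "connect", "-t", "rdma", "-n", NQN, "-a", TARGET_IP, "-s", "4420"], check=True)
# The remote namespace then shows up locally as /dev/nvmeXnY.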
 

TheAcadianGamer

New Member
Jun 7, 2024
2
0
1
If you give us a link or the PRECISE switch model, we could answer the question.

Also, it's never too late to upgrade to 40/56Gbps. IMHO it's still the sweet spot between power consumption/heat, performance, and very low cost. Cards will be content with PCIe 3.0 x8.
My bad on that one, it would probably help if I said what exactly I'm trying to install, lol. I was looking at an IS5022 from Mellanox, but from a few replies in this thread it seems this model is impossible to get working with 40GbE. That being said, the Mellanox SX6005 would be within my price range as well, and it seems those are at least capable of Ethernet switching.

If the network is a bottleneck for your lab, it's hard to beat 40GbE for performance per dollar...

A couple of years ago I got inspired by this thread and bought a Dell/EMC S6100-ON to upgrade my home lab... but I never got around to using it, so if you need one, PM me.

Before you buy the switch, consider upgrading/consolidating your servers. In my case, by the time I got all the cables and Mellanox cards for the upgrade, the prices of servers and NVMe SSDs had come down, so I ended up buying 2 new servers and consolidating my entire home lab into a 2-node cluster with a direct connection between the nodes (and retiring a bunch of older Quanta D51B servers to boxes :) )
The funny thing is, networking right now isn't even a bottleneck! I'm just trying to see if I can pull this off, since my three main nodes already came with dual QSFP mLOM cards installed, and I figure I may as well try to use them, right?

Regarding that Dell EMC switch, it is a bit overkill for what I was trying to implement, but it does look quite interesting :). That, and 40GbE out of the box, would save some hassle. Do you know how loud it gets once it's running?
 

nimajneb

New Member
Apr 14, 2024
28
6
3
TLDR: You're correct, 40GbE is just cheaper.

40GbE is a dead end, technologically, and was done with "trickery" by bonding 4x10G channels into a single link - it wasn't really a separate speed type, like 1G vs. 10G is. But, it was faster and didn't have the drawbacks of port-channels, so it got used.

Then 25GbE, which actually IS a non-trickery, separate speed type, was introduced, and was able to use the same trickery to get 50GbE and 100GbE. Further, the optics were intentionally forwards compatible with planned 100GbE tech that was still in development at the time, and THAT could do the same bonding for 400GbE.

So basically 40GbE was a dead end street, tech-wise. Once everyone saw a viable, faster forward path (that was actually not significantly more expensive, either) they dumped 40GbE hard and the market was flooded. Whereas 10GbE trickled in and continues to do so, as it has its uses for access-layer aggregation instead of datacenter-only, which is where 25/50/100/400GbE really lives.
Is it dead because there's no faster equipment that is backwards compatible with it? I sort of understand the sentiment in your post (and I've seen it a lot before), but I don't quite understand why or what makes it a dead end. I am not a datacenter or network expert. I don't know a lot past hooking up the cables and selecting compatible equipment.
 

dante4

Member
Jul 8, 2021
60
10
8
Is it dead because there's no faster equipment that is backwards compatible with it? I sort of understand the sentiment in your post (and I've seen it a lot before), but I don't quite understand why or what makes it a dead end. I am not a datacenter or network expert. I don't know a lot past hooking up the cables and selecting compatible equipment.
What he meant is that 40G will not become 80G, for example. That's what he calls a dead end. Since 25G (and 100G = 4x25G) became a thing soon after 40G, 40G got forgotten.
For example, 100G continued on to 200G, then 400G, then 800G.
But 40G ended at 40G; there was no 80G, 160G, etc.

The backward compatibility is still there, though, since all 100G is just 4x25G and all 25G has backward compatibility with 10G.
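
To make the lane math concrete, here is the rough per-generation lane map (simplified; the per-lane rates are the usual NRZ/PAM4 signalling steps):

Code:
# Simplified view of how each Ethernet speed is built from bonded lanes.
lane_map = {
    "40GbE  (QSFP+)":   "4 x 10G NRZ",
    "100GbE (QSFP28)":  "4 x 25G NRZ",
    "200GbE (QSFP56)":  "4 x 50G PAM4",
    "400GbE (QSFP-DD)": "8 x 50G PAM4",
}
for speed, lanes in lane_map.items():
    print(f"{speed:18} = {lanes}")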
 
  • Like
Reactions: TRACKER

nimajneb

New Member
Apr 14, 2024
28
6
3
What he meant is that 40G will not become 80G, for example. That's what he calls a dead end. Since 25G (and 100G = 4x25G) became a thing soon after 40G, 40G got forgotten.
For example, 100G continued on to 200G, then 400G, then 800G.
But 40G ended at 40G; there was no 80G, 160G, etc.

The backward compatibility is still there, though, since all 100G is just 4x25G and all 25G has backward compatibility with 10G.

For the used market or home deployments where the user doesn't care about support, if it's all backwards compatible I guess I don't understand why it's a dead end. If I currently have a 40Gb network, I buy a 100G switch as my upgrade path, and that switch is backwards compatible with the 40Gb gear I don't immediately upgrade, I don't see why it matters.

For business or production deployments I understand that you wouldn't want 40G equipment, since it's either quite old or out of support.

Sorry, that's a lot of words to say: if it's backwards compatible, I don't know why it matters.
 

zunder1990

Active Member
Nov 15, 2012
227
80
28
We use a ton of 40Gb links in the datacenter; switches, optics, and NICs are all cheap. Also, so far we have found that any port that will do 100Gb will also do 40Gb.
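
On the host side that usually just means forcing the lower speed. A hedged sketch (the interface name is hypothetical, and whether a given NIC/optic combination accepts a forced 40G mode depends on the hardware):

Code:
# Sketch only: force a 100G-capable port down to 40G with ethtool.
# "enp65s0f0" is a hypothetical interface name.
import subprocess

IFACE = "enp65s0f0"
subprocess.run(["ethtool", "-s", IFACE, "speed", "40000", "autoneg", "off"], check=True)
subprocess.run(["ethtool", IFACE], check=True)  # verify the resulting link speed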