Experiences with Arista 7050TX or equivalent


solon

Member
Apr 1, 2021
61
5
8
Hello all, I'm considering purchasing something like an Arista DCS-7050QX-32S-R 40GbE QSFP+ 4x10G Layer 3 switch for home use. I don't have any experience with Arista, but going by the specs this seems a reasonable way to get 40GbE between the machines where I have QSFP+ cards and 10GbE for everything else, hopefully with minimal hassle. 2Gbit internet is on the horizon here, and this would allow fast file access for all systems while removing the 1Gbit bottleneck I currently have in my home network.

I'm hoping someone with experience with these things can answer a few questions.
1. I'd like to 3D print the necessary ducts to cool it with 200mm Noctua fans to reduce noise. I read somewhere here that the system is likely to throw errors below the configured minimum fan RPM, which can be set to 5,000 at the lowest. Obviously a 200mm Noctua spins much slower than that, and I'd like to know whether that will only cause non-critical errors or whether it might impact functionality.
2. Secondly, and somewhat less importantly, I'm wondering whether I need to be concerned about any sort of licensing issues for anything beyond basic switching functionality. Ideally there are some network segments I'd like to be able to isolate from one another, for instance.
3. Information-wise, does anyone have an idea of the power consumption I should expect with 4x 40GbE over copper and, say, 10 of the 10GbE ports populated? I don't expect much more than light use in practice.
 

i386

Well-Known Member
Mar 18, 2016
4,245
1,546
113
34
Germany
1) Why?

Before you "butcher" an arista switch you should do more research: there is a command to set the fan speed to 30% which makes them very quiet for datacenter switches ._.
2) I have a 7050Q-16; all it needed was to "touch" a file to allow/silence warnings about third-party transceivers.
3) The 7050 switches are quite power hungry compared to some other switches because their (older) ASICs are built on larger process nodes; Mellanox SwitchX-2 based systems, for example, consume less power for the same port count and load (user reports here in the forums).
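
For point 1, in case you'd rather script it than type it at the CLI: a rough pyeapi sketch, untested on a 7050QX. The fan-speed command is from memory (double-check it against your EOS release), eAPI has to be enabled on the switch ("management api http-commands"), and the connection profile name is just a placeholder.

```python
# Sketch only: push the fan-speed override over Arista's eAPI with pyeapi,
# then read back the cooling status to see if the platform raises alarms.
import pyeapi

node = pyeapi.connect_to("arista7050")               # placeholder ~/.eapi.conf profile
node.config(["environment fan-speed override 30"])   # pin fans to ~30% (verify syntax)

for reply in node.enable(["show environment cooling"]):
    print(reply["result"])                           # fan and alarm status
```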
 
  • Like
Reactions: tinfoil3d

solon

Member
Apr 1, 2021
61
5
8
The why is simply that my definition of quiet has, so far, not even remotely been achievable with the original fans or even with smaller Noctuas. A 200mm Noctua's sound level is below background noise: unless the environment is very quiet, you simply hear nothing, which is what I'd like to achieve. It's exceedingly unlikely that a ramped-down 15k RPM fan can be quiet enough to meet my requirements.

The power information is interesting. I haven't really found anything on offer from Mellanox in the EU with the same port combination (say a minimum of 4x QSFP+ and a minimum of 12x 10GbE). I'll check again whether there really isn't anything else on offer. I'll look for those user reports, thanks.
 

tinfoil3d

QSFP28
May 11, 2020
880
404
63
Japan
Also, these switches are compatible with SONiC. It's a good choice, and they're actually not that loud with the fans set to 30% in Arista's EOS.
However, the 7050QX-32S doesn't have 10 native SFP+ ports, only four. You can do breakouts, but in that case we're not really talking about power consumption on the switch side: with copper it's minimal, and with AOC it's split.
Power draw reports: https://forums.servethehome.com/index.php?threads/power-consumption-thread.34673/
 

solon

Member
Apr 1, 2021
61
5
8
Arista DCS-7050TX-72Q-F | 48-Port 10GBASE-T | 6x 40G QSFP+ | dual PSU | Rails

is sort of what I'd ideally want as far as having enough QSFP+ ports goes; 4 ports would leave no room for any kind of expansion. (That's assuming it supports 40GbE; I have a separate InfiniBand network for RDMA storage.) I wouldn't need 48 10GBASE-T ports, but looking at what's available on eBay in the EU, I'm really not seeing any hardware with the same combination of ports in numbers that are useful for my application.

In the power consumption thread I'm seeing that a 7050QX idles at 97W, or about 850 kWh a year. That translates to about €280 of power per year, or about €76.50 a year for as long as the government here is silly enough to let me keep using the grid as a battery for the solar panels. I'll need to consider the power budget, but in general it does look like an Arista 7050T or TX should do the job I'd like it to.
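
Spelling out the arithmetic (the per-kWh prices below are just my assumptions to reproduce those two figures, not actual tariffs from anywhere):

```python
# Back-of-the-envelope for the 7050QX idle figure above.
idle_watts = 97
kwh_per_year = idle_watts * 24 * 365 / 1000        # ~850 kWh/year

full_rate = 0.33       # EUR/kWh, assumed normal tariff
netmeter_rate = 0.09   # EUR/kWh, assumed effective rate while net metering

print(round(kwh_per_year))                         # ~850 kWh
print(round(kwh_per_year * full_rate))             # ~280 EUR/year
print(round(kwh_per_year * netmeter_rate, 2))      # ~76.5 EUR/year
```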
 

Scott Laird

Active Member
Aug 30, 2014
317
148
43
tinfoil3d said:
Also, these switches are compatible with SONiC. It's a good choice, and they're actually not that loud with the fans set to 30% in Arista's EOS.
However, the 7050QX-32S doesn't have 10 native SFP+ ports, only four. You can do breakouts, but in that case we're not really talking about power consumption on the switch side: with copper it's minimal, and with AOC it's split.
Power draw reports: https://forums.servethehome.com/index.php?threads/power-consumption-thread.34673/
I had SONiC on my 7060CX for around a year, and it was a *TERRIBLE* choice for the hardware. There are too many things that it doesn't do, and IMO SONiC doesn't make sense unless you're in a place where you're (a) using BGP for everything and (b) able to maintain your own custom build for your hardware. Arista's EOS is vastly more flexible and has better debugging.

If you're trying to build a datacenter, consider SONiC. If you have 1 or 2 switches, use something else.
 
  • Like
Reactions: wifiholic

Scott Laird

Active Member
Aug 30, 2014
317
148
43
solon said:
Arista DCS-7050TX-72Q-F | 48-Port 10GBASE-T | 6x 40G QSFP+ | dual PSU | Rails

is sort of what I'd ideally want as far as having enough QSFP+ ports goes; 4 ports would leave no room for any kind of expansion. (That's assuming it supports 40GbE; I have a separate InfiniBand network for RDMA storage.) I wouldn't need 48 10GBASE-T ports, but looking at what's available on eBay in the EU, I'm really not seeing any hardware with the same combination of ports in numbers that are useful for my application.

In the power consumption thread I'm seeing that a 7050QX idles at 97W, or about 850 kWh a year. That translates to about €280 of power per year, or about €76.50 a year for as long as the government here is silly enough to let me keep using the grid as a battery for the solar panels. I'll need to consider the power budget, but in general it does look like an Arista 7050T or TX should do the job I'd like it to.
I'm seeing *slightly* higher power use with 2x 7050QX-32Ses -- 100-110W over the past 12 months. At 30% fan they're fine in a closet, but I probably wouldn't want one near my desk.
 
  • Like
Reactions: wifiholic

tinfoil3d

QSFP28
May 11, 2020
880
404
63
Japan
Scott Laird said:
I had SONiC on my 7060CX for around a year, and it was a *TERRIBLE* choice for the hardware. There are too many things that it doesn't do
Thanks, yeah, I don't play with BGP and don't have an ASN, but off the top of your head, what were the main issues you experienced with SONiC on the 7060?
 

Scott Laird

Active Member
Aug 30, 2014
317
148
43
Well, the first few issues were missing features. Some of these have been partly fixed since then:

- no OSPF
- no Spanning Tree
- port breakout was hard
- switching between 10/25G (or 40/100G) meant manually changing multiple port settings: speed, RS FEC encoding, etc. (rough illustration of the coupled settings at the end of this post)
- IIRC LACP wasn't working either, but I didn't really need it.

Plus there were operational issues:

- software updates reset the password to the default.
- it needed more than half of the flash on my 7060CX, meaning that updates were hard.

Plus bugs:

- sometimes ports just wouldn't come up until I rebooted the switch.
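
To illustrate the speed-change item above: this is roughly the shape of the PORT entries in SONiC's config_db.json that all had to stay consistent when flipping a port between 40G and 100G. Illustrative only; the lane numbers are made up and platform-specific.

```python
# Illustrative sketch, not copied from a real switch: coupled PORT fields in
# SONiC's config_db.json that have to be changed together for a speed flip.
import json

port_as_40g = {
    "Ethernet0": {
        "lanes": "65,66,67,68",   # same four serdes lanes either way (made-up numbers)
        "speed": "40000",         # 4 x 10G
        "fec": "none",            # 40G links typically run without FEC
        "admin_status": "up",
    }
}

port_as_100g = {
    "Ethernet0": {
        "lanes": "65,66,67,68",
        "speed": "100000",        # 4 x 25G
        "fec": "rs",              # RS-FEC usually required at 100G
        "admin_status": "up",
    }
}

print(json.dumps(port_as_100g, indent=2))
```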
 

tinfoil3d

QSFP28
May 11, 2020
880
404
63
Japan
No STP? How's that a switch then? That's not going to work easily in DCs where it's truly necessary. We homelabbers can live without it for sure, but not a regular DC.
 

Scott Laird

Active Member
Aug 30, 2014
317
148
43
Naw, STP is terrible in a big DC for many reasons. But it's unavoidable in enterprises, and anyplace where you end up with lots of weird one-off configs.

For big DCs, just do everything at L3 and you don't need to worry about STP or any of its inefficiencies. If you *need* L2, use something like EVPN-VxLAN on top.

FWIW, the 7050QX with EOS is perfectly capable of doing EVPN and VXLAN in hardware. That's kind of rare for its vintage.
 
  • Like
Reactions: gb00s and klui

Scott Laird

Active Member
Aug 30, 2014
317
148
43
Yeah, the biggest (fundamental, not just incidental) problem with STP is that you can never add bandwidth with it, just cold-standby links. Ignoring things like [M]LAG for the moment, if there is more than one path from device A to device B through an L2 network with STP, then STP's job is to turn off all but one of the redundant paths. That's its whole thing; it's how it works.

I've attached a simple model with 2 hosts, each connected to a pair of ToR leaf switches, each of which is connected to a pair of core switches. Everything is connected redundantly. With STP, 8 of the 15 links will be blocked. Switch C2 won't handle any traffic at all. You still have redundancy, but no added capacity anywhere.

At L3, on the other hand, every link would be independent, and whichever dynamic routing protocol you use would install ECMP routes that distribute traffic across all equally-good links. So you'd effectively have 2x the available bandwidth using the exact same hardware.

Even better, the L3 model scales much better with a "spine switch" model instead of a "core switch" model, where you could have many more than 2 switches in the middle of everything, or even multiple tiers of switches if you want to build a multi-thousand port non-blocking fabric. That's all a solved problem at L3, vs being ~impossible at L2.
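
If anyone wants to sanity-check those link counts, here's a quick script based on my reading of the attached diagram (2 hosts, two leaf pairs, 2 cores). Which specific links get blocked depends on bridge and port priorities in the real election, but any spanning tree keeps exactly nodes-minus-one links forwarding, so the totals hold either way.

```python
# Count forwarding vs. blocked links in the assumed topology above.
# STP reduces any L2 topology to a spanning tree: forwarding links = nodes - 1,
# everything else is blocked.

edges = []
# hosts dual-homed to their leaf pair (4 links)
edges += [("h1", "leaf1"), ("h1", "leaf2"), ("h2", "leaf3"), ("h2", "leaf4")]
# every leaf uplinked to both cores (8 links)
edges += [(leaf, core)
          for leaf in ("leaf1", "leaf2", "leaf3", "leaf4")
          for core in ("c1", "c2")]
# peer links inside each leaf pair, plus the core pair (3 links)
edges += [("leaf1", "leaf2"), ("leaf3", "leaf4"), ("c1", "c2")]

# A tiny union-find stands in for the tree election: a link stays forwarding
# only if it connects two parts of the network not already joined.
parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path halving
        x = parent[x]
    return x

forwarding = 0
for a, b in edges:
    root_a, root_b = find(a), find(b)
    if root_a != root_b:
        parent[root_a] = root_b
        forwarding += 1                 # joins two islands -> stays forwarding
    # else: STP would block this link to break the loop

print("total links:    ", len(edges))              # 15
print("STP forwarding: ", forwarding)              # 7
print("STP blocked:    ", len(edges) - forwarding) # 8
# With L3 + ECMP all 15 links carry traffic, roughly doubling the usable
# leaf<->core bandwidth on the same hardware.
```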
 

Attachments

  • Like
Reactions: tinfoil3d and klui

klui

Well-Known Member
Feb 3, 2019
842
462
63
Thanks for the insight. Hardware sitting idle, not being used, would be another factor against using STP or its variants in the data center core.

Besides that, the Reddit thread mentions the latency when STP re-evaluates the topology; that delay will also unnecessarily interrupt day-to-day operations.
 
  • Like
Reactions: tinfoil3d