100Gb Switch Recommendations needed


CoachOG2U

New Member
Jul 1, 2024
9
0
1
I am trying to find a 4+ port 100Gb Ethernet switch for our office (4 people who need 100G). The connection runs from fifteen office PCs to an on-site server in the same building. There will not be a huge amount of traffic. We are currently using Dropbox, but we have experienced syncing issues and want to have a local copy of our data (16TB). I am the IT guy and in a unique position: I am building our network and have the time and resources to learn and build our system. I am new to 100G networking and looking for the simplest (not easiest) setup for a 100G switch. I don't know what I don't know, and I am not aware of everything that goes into such a task.

Things I do not understand (if you know where I can find this info, please let me know):
1. Additional costs apart from hardware
2. What I need to know about buying new vs. used
3. L2 vs. L3 (I want the ability to monitor)
4. OS and what capabilities I want
5. Off-the-wall things you would not know about unless you have been doing this for a while (e.g. noise)
6. Compatibility with Windows PCs
7. Replacement parts
8. Is it serviceable?
9. Life expectancy

Switches I have been looking at:

Edge-Core AS7712-32X-O-AC-B ($350 on eBay)


MikroTik CRS520-4XS-16XQ-RM ($2,200)

Mellanox Spectrum SN2100 ($1,500 eBay)

FS N8500-48B6C 25G SDN ($2500 eBay)

Any suggestions or advice would be great.
 

NablaSquaredG

Bringing 100G switches to homelabs
Aug 17, 2020
1,555
1,004
113
Edge-Core AS7712-32X-O-AC-B ($350 on eBay)
Potentially suffers from the AVR54 Atom C2000 bug, and there is no good OS available for it.

You might want to look at the Mellanox SN2410 (~$1000), SN2700 (also sometimes ~$1000), or SN2100 (currently ~$1000; potentially suffers from the AVR54 Atom C2000 bug).
 
  • Like
Reactions: CoachOG2U and nexox

CoachOG2U

New Member
Jul 1, 2024
9
0
1
Potentially suffers from the AVR54 Atom C2000 bug, and there is no good OS available for it.

You might want to look at the Mellanox SN2410 (~$1000), SN2700 (also sometimes ~$1000), or SN2100 (currently ~$1000; potentially suffers from the AVR54 Atom C2000 bug).
What OS do you suggest with the Mellanox switches?
Is there a setup guide that you know of where I could see how to configure it?
 

Matta

Member
Oct 16, 2022
60
15
8
Kinda vague on crucial points:
1. Do you need a new device with a warranty?
2. Do you need 4, 4+, 16, or 32 ports?
3. How much money can you invest (because you're going from $350 to $2,500)?

Of all the devices mentioned, only the MikroTik is brand new with a 2-year warranty (I'm not sure if there's a valid warranty on used devices from eBay).
Also, the MikroTik has a strong enough CPU to act as a proper router if you need a firewall, BGP, etc.
 
  • Like
Reactions: CoachOG2U

CoachOG2U

New Member
Jul 1, 2024
9
0
1
Kinda vague on crucial points:
1. Do you need a new device with a warranty?
2. Do you need 4, 4+, 16, or 32 ports?
3. How much money can you invest (because you're going from $350 to $2,500)?

Of all the devices mentioned, only the MikroTik is brand new with a 2-year warranty (I'm not sure if there's a valid warranty on used devices from eBay).
Also, the MikroTik has a strong enough CPU to act as a proper router if you need a firewall, BGP, etc.
1. I do not need a warranty, I just need a good expectation of how long it will last.
2. 4 100G ports is the main requirement; 16 can work, but 32 ports would be ideal.
3. $3-4k for just the switch.
That is good to know about MikroTik; that is the direction I want to head: an L3 switch with a good processor.
 

NablaSquaredG

Bringing 100G switches to homelabs
Aug 17, 2020
1,555
1,004
113
2. 4 100G ports is the main requirement; 16 can work, but 32 ports would be ideal.
SN2700 is probably the best choice then. I recommend Cumulus, but Onyx / MLNX-OS is also good.

You shouldn't do any serious firewall or L4 stuff on the switch. Get a separate x86 box for that (with e.g. VyOS or OPNsense).

The SN2700 can do L3 (including inter-VLAN routing) at wire speed (as all enterprise switches do) and also offers "firewalling", AKA ACLs, at wire speed on the switch.
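
Roughly, inter-VLAN routing on Cumulus looks something like this with NCLU (the port names, VLAN IDs, and addresses below are just placeholders):

    # bridge two access ports into VLANs 10 and 20
    net add bridge bridge ports swp1,swp2
    net add bridge bridge vids 10,20
    net add interface swp1 bridge access 10
    net add interface swp2 bridge access 20
    # SVIs so the switch routes between the VLANs at wire speed
    net add vlan 10 ip address 10.0.10.1/24
    net add vlan 20 ip address 10.0.20.1/24
    # review, then apply
    net pending
    net commit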
 

CoachOG2U

New Member
Jul 1, 2024
9
0
1
SN2700 is probably the best choice then. I recommend Cumulus, but Onyx / MLNX-OS is also good.

You shouldn't do any serious firewall or L4 stuff on the switch. Get a separate x86 box for that (with e.g. VyOS or OPNsense).

The SN2700 can do L3 (including inter-VLAN routing) at wire speed (as all enterprise switches do) and also offers "firewalling", AKA ACLs, at wire speed on the switch.
Used or new? What is a good used price?
Are there different models of the SN2700?
Do I need a license for these OSes?
 

CoachOG2U

New Member
Jul 1, 2024
9
0
1
SN2700 is probably the best choice then. I recommend Cumulus, but Onyx / MLNX-OS is also good.

You shouldn't do any serious firewall or L4 stuff on the switch. Get a separate x86 box for that (with e.g. VyOS or OPNsense).

The SN2700 can do L3 (including inter-VLAN routing) at wire speed (as all enterprise switches do) and also offers "firewalling", AKA ACLs, at wire speed on the switch.
I have done a deep dive into your recommendations. The SN2700 seems to be the winner. I just have one question: why choose the SN2700 over the MikroTik CRS520-4XS-16XQ-RM?
 

NablaSquaredG

Bringing 100G switches to homelabs
Aug 17, 2020
1,555
1,004
113
why choose the SN2700 over the MikroTik CRS520-4XS-16XQ-RM?
The SN2700 has proper (stable, field-tested, feature-rich) network operating systems (Onyx, Cumulus), not a sad attempt at a NOS like RouterOS.

Also double the ports and lower power consumption per port.
 
  • Like
Reactions: pimposh

rootpeer

Member
Oct 19, 2019
80
17
8
What is your application?

Are you sure you will be able to saturate the links? I had to go to NFSoRDMA to be able to come close to saturating 40GbE.
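
For reference, the NFS-over-RDMA piece on Linux is basically loading the RDMA transport on both sides and mounting with the rdma option; something like this (the hostname and export path are placeholders):

    # server: make the kernel NFS server listen on the NFS/RDMA port
    modprobe svcrdma
    echo "rdma 20049" > /proc/fs/nfsd/portlist

    # client: mount the export over RDMA instead of TCP
    modprobe xprtrdma
    mount -t nfs -o rdma,port=20049 fileserver:/export /mnt/share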
 

CoachOG2U

New Member
Jul 1, 2024
9
0
1
The SN2700 has proper (stable, field-tested, feature-rich) network operating systems (Onyx, Cumulus), not a sad attempt at a NOS like RouterOS.

Also double the ports and lower power consumption per port.
I appreciate the responses. That is what I was looking for. That tip seems to be missing from the video reviews.
Does it matter if I buy it new, or buy a used ONIE version and install Cumulus afterward?

What do you do for work? How did you attain all of this knowledge? Any recommendations on where I can start learning?
 

CoachOG2U

New Member
Jul 1, 2024
9
0
1
What is your application?

Are you sure you will be able to saturate the links? I had to go to NFSoRDMA to be able to come close to saturating 40GbE.
Not 100% sure.
I am running an ASUS Hyper M.2 x16 Gen5 card with four 4TB WD SN850X drives in a Threadripper 7960X setup.
 

CoachOG2U

New Member
Jul 1, 2024
9
0
1
What is your application?

Are you sure you will be able to saturate the links? I had to go to NFSoRDMA to be able to come close to saturating 40GbE.
I want a local file server with fast transfer speeds.
We have a 12TB Dropbox and I am trying to build a file server and network that will be a decent backup if Dropbox fails.
We have had massive sync issues, and it takes days to set up a new account with local copies.
I am not able to replicate Dropbox or else I would just get rid of it; however, I believe I can create a decent alternative if things go south.

As for the NFSoRDMA, I would love to see/know how you did that.
My networking knowledge is a 1 of 10, so it has been a big learning experience.
I combat this with a 9 of 10 will to make it happen.
I really love this stuff and think it is investing in the future of our company.
 

anthros

New Member
Dec 16, 2021
17
4
3
Portland, OR USA
Your internet connection (and your Dropbox account) must have extraordinary bandwidth if you need a 100Gb LAN to adequately connect a failover mirror.

In all seriousness, it sounds like 100Gb networking is just for funsies here. Is that fair? If so, there’s nothing wrong with that. If not, why do you need such fast access to data you’re currently keeping on Dropbox?

If you’re as new to networking as you imply, you should be aware that you’ll probably want to implement RoCE/RDMA on the server you’re building, the clients that will connect to it and the switch itself. Without that, you’re likely to see real-world transfer rates that are only a fraction of 100 Gb/s.

I’ve struggled with implementing a small, 25Gb RoCEv2 network for our Linux compute cluster, and I’m fairly experienced with IP networking. Mellanox/NVIDIA’s documentation is often sparse and, when you find it, quite terse. You might want to read up on how to implement RoCE before you commit to doing so. At least, that’s my take.
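
If you do go down that road, it's also worth sanity-checking raw RDMA between two hosts with the perftest tools before layering SMB Direct or NFSoRDMA on top; something like this (the device name and address are just placeholders):

    # check that the RDMA link is up at all
    rdma link show

    # host A: start the bandwidth test server on the ConnectX device
    ib_write_bw -d mlx5_0 --report_gbits

    # host B: run the client against host A's IP
    ib_write_bw -d mlx5_0 --report_gbits 10.0.10.5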
 

jode

Member
Jul 27, 2021
35
22
8
You might want to read up on how to implement RoCE before you commit to doing so. At least, that’s my take.


 

anthros

New Member
Dec 16, 2021
17
4
3
Portland, OR USA
Yep! Those links all exist. Are you pointing them out for the OP’s benefit? If so, cheers!

If you’re pointing them out at least partly for my benefit, please direct me to the part that thoroughly documents the host chaining feature.
 

CoachOG2U

New Member
Jul 1, 2024
9
0
1
Your internet connection (and your Dropbox account) must have extraordinary bandwidth if you need a 100Gb LAN to adequately connect a failover mirror.

In all seriousness, it sounds like 100Gb networking is just for funsies here. Is that fair? If so, there’s nothing wrong with that. If not, why do you need such fast access to data you’re currently keeping on Dropbox?

If you’re as new to networking as you imply, you should be aware that you’ll probably want to implement RoCE/RDMA on the server you’re building, the clients that will connect to it and the switch itself. Without that, you’re likely to see real-world transfer rates that are only a fraction of 100 Gb/s.

I’ve struggled with implementing a small, 25Gb RoCEv2 network for our Linux compute cluster, and I’m fairly experienced with IP networking. Mellanox/NVIDIA’s documentation is often sparse and, when you find it, quite terse. You might want to read up on how to implement RoCE before you commit to doing so. At least, that’s my take.
Thank you for your recommendation and thoughtful response.
Admittedly, 100GbE is a want and not a need. When I started this journey, it was with 40GbE, and I decided that for the price, I may as well go 100GbE.
We currently have a 1G Ethernet connection, and as our company grows, it seems to take longer and longer to do things.
We do enough backups and file transfers that it feels like we are constantly waiting on files to transfer.
Long story short: cloud storage is slow, and I want fast local file transfers. There are other limitations that a local network will solve as well (I don't trust them with my data), but that is the main one. I have the resources to do it.

That is a Pro Tip about RoCE/RDMA!

I completely agree with you about the documentation. I feel like they expect me to already know everything. I have watched the same 4-5 videos about setting up a 100G network so many times that I have memorized the jokes. There is so much in those videos that they do without explaining. It is not plug and play, and the people setting them up have forgotten more than I know about the subject matter. Even a simple Windows file share has me tripped up. However, I will not let this go; I am going to build a 100GbE network and transfer files FAF!!!

My short-term goal was to have a 16TB file server that 5 people can have 100GbE transfers to. I am still not sure of the best way to go about this. I originally thought it was as easy as using a Windows PC with 6 ASUS Hyper cards and 6 100G NICs and setting up a simple file share with RDMA. I have built many PCs and know the hardware aspect. However, it was not that simple, so I started trying to learn Linux so I could set up this file server using Proxmox. I found out how intense that is (not afraid of the work, more so the knowledge needed to maintain and service it) and decided to get a Windows Server 2022 license since I am familiar with it.
In your opinion, will it be simpler to set up my file server with Windows Server 2022 vs. Linux with Proxmox?

ChatGPT has been surprisingly helpful with my questions.
 

donedeal19

Member
Jul 10, 2013
47
17
8
Get a server that has plenty of PCIe lanes, fast memory channels, and fast CPU cores. Install two CX5s and a CX3, all dual-port. The rest of the PCIe lanes will be for NVMe SSDs or an HBA.

Four runs of optics, maybe 20 meters per run. 40GbE to the main switch. Run directly from the server to the clients. The server will need a lot of NVMe and CPU speed for the clients.
Run iperf on the server's CX5 in a loop and see if it's closer to 200Gb; if not, tune the BIOS settings and set performance mode to max, and do the same with the rest. Then you can iperf to your clients. This will save you money and let you see if it's worth continuing down this path.
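
As a rough sketch of that kind of test with iperf3 (the address is a placeholder; a single TCP stream usually won't fill a 100G link, so run parallel streams):

    # on the server
    iperf3 -s

    # on a client: 30 seconds, 8 parallel streams
    iperf3 -c 10.0.10.5 -t 30 -P 8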

YouTube is not going to be any help because they don't know anything about networking hardware. Read the white paper documentation on the hardware from industry vendors like Dell, HP, etc., and see what setups they used to get their performance results. Depending on a lot of factors and conditions, you might get bandwidth around 68-91Gb/s per client. This also depends on how deep your pockets can go.

I tried to make this a short story.
 
  • Like
Reactions: CoachOG2U