Chelsio T6 100GbE ($74.99) and 25GbE NICs ($29.99); US, but offers international shipping


anewsome

Active Member
Mar 15, 2024
Even if you're not using Ceph, how about a more common use case that everyone has: live migration of VMs from host to host.

The first time you do a live migration over a high-speed network, like 25, 40, or 100G, you'll be hooked. You'll never want to do VM migration over a gigabit network again.
 

MountainBofh

Beating my users into submission
Mar 9, 2024
anewsome said:
Even if you're not using Ceph, how about a more common use case that everyone has: live migration of VMs from host to host.

The first time you do a live migration over a high-speed network, like 25, 40, or 100G, you'll be hooked. You'll never want to do VM migration over a gigabit network again.
I'd be curious to see what the difference in migration time would be between 10G and 100G. Our current work pool is XCP-ng on 10G links. Migrating VMs isn't horribly slow, but I'm wondering how much faster it would go on 100G.
 

Jaket

Active Member
Jan 4, 2017
Seattle, New York
purevoltage.com
We use a lot of 100G ports for all sorts of things, from customer servers pushing 100Gbps to VMs, which are a massive use case. Most of what we run is 2x40G, but we have been slowly upgrading some racks to 100G, and the migration times when moving customers and VMs to other nodes are quite amazing. Even more so when using U.3 NVMe drives.

With all the performance differences and whatnot, an example of the speed difference looks something like this:

100 GB VM: 10 Gbps = 94 seconds, 40 Gbps = 23.5 seconds, 100 Gbps = 9.4 seconds

1 TB VM: 10 Gbps = 15+ minutes, 40 Gbps = 4 minutes, 100 Gbps = 1.5 minutes

Of course it depends on how much activity is currently going on, but it's quite amazing to see how well this works.
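
As a rough sanity check on those figures, here's a minimal back-of-the-envelope sketch (my own, not from the post): it models the migration as a single bulk copy at a flat ~85% effective link utilization, which happens to line up with the times quoted above, and it ignores dirty-page re-copy rounds and storage bottlenecks.

```python
# Rough live-migration transfer-time estimate: a single bulk copy of the VM's
# data at a fixed effective link utilization (assumed ~85%), ignoring
# dirty-page re-copy rounds and storage bottlenecks.

def migration_seconds(vm_size_gb: float, link_gbps: float, efficiency: float = 0.85) -> float:
    """Seconds to move vm_size_gb gigabytes over a link_gbps Ethernet link."""
    gigabits_to_move = vm_size_gb * 8        # GB -> Gbit
    effective_gbps = link_gbps * efficiency  # usable throughput after overhead
    return gigabits_to_move / effective_gbps

for size_gb in (100, 1000):                  # 100 GB and 1 TB VMs
    for link_gbps in (10, 40, 100):
        t = migration_seconds(size_gb, link_gbps)
        print(f"{size_gb:>4} GB over {link_gbps:>3} Gbps ≈ {t:7.1f} s ({t / 60:.1f} min)")
```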
Most of our customer-facing VMs are still on 40G, with some newer nodes upgraded to 100G; it's mostly the big customers running more customized clusters who pay extra for 100Gbps interconnects.

This is mostly our use case, which is largely in the data center space. I would love to play around with more of it at my house, since I have a bunch of 100G equipment sitting here at the moment. It just makes more sense to send it to our data centers and get customers online with it.

DDoS protection is another thing we have been working on. Getting some of these cards for such a low price is interesting; drivers for the cards are a big factor, along with getting the proper brackets to mount them into systems, since for us it's not for home use.

Might have to pick up a pair of these to play around with either way.
 
  • Like
Reactions: jason879

Civiloid

Member
Jan 15, 2024
Switzerland
What are some use cases for 100GbE?
Taking into account the prices of 100G switches, it becomes even more feasible for a homelab. E.g. you can get a Celestica DX010 for <$400 (32x 100G ports), QSFP28 CWDM4 modules cost <$5 per module, etc. Imagine having a NAS that can easily saturate a 100G network and serve all your stuff almost as fast as local storage.

And as far as my use case goes: testing NICs and writing some low-level DPDK code.
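
To put the "saturate a 100G network" idea in numbers, here's a quick sketch (the ~7 GB/s sequential-read figure per PCIe 4.0 x4 NVMe drive is my own assumption, not something from the thread):

```python
# Rough math on saturating a 100 GbE link from a NAS.
# Assumed figure: ~7 GB/s sequential read per PCIe 4.0 x4 NVMe drive.

link_gbps = 100
line_rate_gb_per_s = link_gbps / 8      # ~12.5 GB/s on the wire, before protocol overhead
nvme_seq_read_gb_per_s = 7.0            # one Gen4 x4 drive

drives_to_saturate = line_rate_gb_per_s / nvme_seq_read_gb_per_s
print(f"100 GbE ≈ {line_rate_gb_per_s:.1f} GB/s; "
      f"~{drives_to_saturate:.1f} Gen4 NVMe drives of sequential reads to fill it")
```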
 

jason879

Member
Feb 28, 2016
Anyone know if they are compatible with ESXi 8? I see they're listed as compatible with ESXi 7U3.
 

Koop

Well-Known Member
Jan 24, 2024
Wow. Thanks, never heard of it til now.
I'd recommend reading up on RDMA over Converged Ethernet (RoCE) and InfiniBand in general. All good stuff. My little corner of the IT world was focused on distributed storage systems with a smidge of HPC. You'll quickly see how we got to the point of massive GPU clusters like in the latest video with @Patrick visiting xAI.
 
  • Like
Reactions: Civiloid and nexox

Civiloid

Member
Jan 15, 2024
Switzerland
  • Like
Reactions: gb00s