Has anyone else "downgraded" from QSFP+ (40GbE) to SFP+ (10GbE) gear? And if so, why?

AveryFreeman

consummate homelabber
Mar 17, 2017
413
54
28
42
Near Seattle
averyfreeman.com
Wondering if anyone else has done this, and if so, how'd it go? I've got all these QSFP+ cards direct-connected right now, and have spent countless hours looking at different QSFP+ switches, but they're all super loud, super expensive, and any that are within my budget have major caveats.

If I "downgraded" to SFP+ fabric, I could get a new switch that's quiet or even passively cooled

Is there a relatively inexpensive 10GbE NIC that supports RoCE v2 in ESXi 7? Does such a thing exist?

I have my eye on the MCX312B-XCCT because ConnectX-3s are obtainable for under $50 on fleabay, but I think they only support RoCE v1 - it would be nice to find a more modern card. Besides, I'm sure the whole CX3 line is next on the chopping block when VMware trims driver support (I'd be surprised if they're supported in vSphere 8 at all).
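
For anyone else trying to figure out what their current cards actually report, this is the quick check I've been running before spending any money. Just a sketch, not gospel: it assumes the host has the esxcli rdma namespace (I think that showed up around 6.5) and that the Protocol column is where RoCE v1 vs v2 gets reported.

```python
# Rough sketch, run directly on the ESXi host (it ships with a Python interpreter).
# Assumes "esxcli rdma device list" exists on this build and that its Protocol
# column is where the host reports RoCE v1 vs RoCE v2 per RDMA-capable uplink.
import subprocess

out = subprocess.check_output(
    ["esxcli", "rdma", "device", "list"], universal_newlines=True
)
print(out)  # prints the full table (name, driver, state, speed, protocol, ...)

v2 = [line for line in out.splitlines() if "RoCE v2" in line]
if v2:
    print("RoCE v2 capable devices:")
    for line in v2:
        print("  " + line)
else:
    print("No RoCE v2 devices reported by this host.")
```

If nothing shows up at all, the card probably isn't bound to an RDMA-capable driver in the first place.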

The Chelsio T520 sounds promising, but a little on the expensive side for me after getting all my CX3s for like $35/ea. Then there's the Marvell/QLogic QL41000 series ... meh ... too many options ...
 

altmind

Active Member
Sep 23, 2018
285
101
43
We use some of these modules on our QSFP network cards with fiber: Cisco CVR-QSFP-SFP10G Compatible 40G QSFP+ to 10G SFP+ Converter Module - FS

Or you can use a QSFP+ breakout cable (to 4x SFP+); you can even aggregate/LAGG these back together on the switch, which may be even cheaper: 2m Generic 40G QSFP+ DAC Breakout Cable 30AWG - FS. Not sure your NIC supports 40gig breakout, though; switches usually DO support it.

Last edited:
  • Like
Reactions: AveryFreeman

mach3.2

Active Member
Feb 7, 2022
128
84
28
I have my eye on the MCX312B-XCCT because ConnectX-3s are obtainable for under $50 on fleabay, but I think they only support RoCE v1 - it would be nice to find a more modern card. Besides, I'm sure the whole CX3 line is next on the chopping block when VMware trims driver support (I'd be surprised if they're supported in vSphere 8 at all).
iirc ConnectX-3 Pro cards support RoCE v2, but it's worth noting the mlx4 driver stack is not present in ESXi 8, so no more driver support for cards older than ConnectX-4.

These cards are really on their last legs (~3 years left) if you're still rolling with ESXi.
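
If you want to see what on a given host is still riding on that stack before an upgrade, something along these lines does the job. Sketch only: it assumes the inbox ConnectX-3 driver is still packaged as nmlx4-* VIBs (core/en/rdma), which is how it has shipped in 6.x/7.x as far as I know.

```python
# Sketch: run on the ESXi host to see whether anything still depends on the
# mlx4 (ConnectX-3) driver stack before moving to ESXi 8, which drops it.
# Assumes the inbox driver is packaged as nmlx4-* VIBs (core/en/rdma).
import subprocess

vibs = subprocess.check_output(
    ["esxcli", "software", "vib", "list"], universal_newlines=True
)

mlx4_vibs = [line for line in vibs.splitlines() if "nmlx4" in line.lower()]
if mlx4_vibs:
    print("mlx4-era VIBs still installed (no ESXi 8 equivalent exists):")
    for line in mlx4_vibs:
        print("  " + line)
else:
    print("No nmlx4 VIBs found; nothing here is tied to the old driver stack.")
```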
 
  • Like
Reactions: AveryFreeman

AveryFreeman

consummate homelabber
Mar 17, 2017
413
54
28
42
Near Seattle
averyfreeman.com
We use some of these modules on our QSFP network cards with fiber: Cisco CVR-QSFP-SFP10G Compatible 40G QSFP+ to 10G SFP+ Converter Module - FS

Or you can use a QSFP+ breakout cable (to 4x SFP+); you can even aggregate/LAGG these back together on the switch, which may be even cheaper: 2m Generic 40G QSFP+ DAC Breakout Cable 30AWG - FS. Not sure your NIC supports 40gig breakout, though; switches usually DO support it.
Oh yeah, I read the CX3 doesn't support it, but at least I could use a 10GbE switch without having to get all new cards. If I can find some of those adapters that aren't twice the cost of my NICs! lol
 

AveryFreeman

consummate homelabber
Mar 17, 2017
413
54
28
42
Near Seattle
averyfreeman.com
iirc ConnectX-3 Pro cards support RoCE v2, but it's worth noting the mlx4 driver stack is not present in ESXi 8, so no more driver support for cards older than ConnectX-4.

These cards are really on their last legs (~3 years left) if you're still rolling with ESXi.
I'm very seriously weighing moving over to QL45212, any experience with those?
 

mach3.2

Active Member
Feb 7, 2022
128
84
28
Oh yeah, I read the CX3 doesn't support it, but at least I could use a 10GbE switch without having to get all new cards. If I can find some of those adapters that aren't twice the cost of my NICs! lol

$23 each, but not what I'd call a sensible use of money either, since converting both ports basically brings the cost to ~$81 ($23 + $23 + $35).

Might as well buy a new NIC that's supported in ESXi 8.

I'm very seriously weighing moving over to QL45212, any experience with those?
Nope, sorry.
 

i386

Well-Known Member
Mar 18, 2016
4,221
1,540
113
34
Germany
Is there a relatively inexpensive 10GbE NIC that supports RoCE v2 in ESXi 7?
It depends on your meaning of "inexpensive" these days. There are ConnectX-4 (sometimes ConnectX-5) single-port cards on eBay for under $300.
but they're all super loud, super expensive, and any that are within my budget have major caveats
And again, "expensive" depends on your meaning. I checked eBay yesterday for Arista 40GbE switches (7050QX-32S) and there were 3 listings for under $1,000. These switches still get EOS updates, and the fans can be quieted with a command that runs them at 30%.
And what's your budget, btw? :D

Edit: Fixed typo in Arista model
 

AveryFreeman

consummate homelabber
Mar 17, 2017
413
54
28
42
Near Seattle
averyfreeman.com
It depends on your meaning of "inexpensive" these days. There are ConnectX-4 (sometimes ConnectX-5) single-port cards on eBay for under $300.

And again, "expensive" depends on your meaning. I checked eBay yesterday for Arista 40GbE switches (7050QX-32S) and there were 3 listings for under $1,000. These switches still get EOS updates, and the fans can be quieted with a command that runs them at 30%.
And what's your budget, btw? :D

Edit: Fixed typo in Arista model
Yeah, all of this is making "downgrading" look that much more attractive ...
 

BoredSysadmin

Not affiliated with Maxell
Mar 2, 2019
1,050
437
83
Not quite what you asked, but I had an ICX6610P providing 10 gig for my DIY 3-node vSAN cluster. Since I moved all compute and storage to a single device, high-speed networking was no longer required and I went back to a 1 gig network. I consolidated a whole heap of hardware into a single lower-powered box and saved over $70/month on the electric bill.
 
  • Like
Reactions: AveryFreeman

Joshh

Member
Feb 28, 2017
61
16
8
43
I did. I had an Arista 40GbE switch and 40GbE to all my servers. It was cool for benchmarks and stuff, but I didn't really need it and the power consumption was high. I still have most of the gear; I just need to offload it to eBay.
 

Freebsd1976

Active Member
Feb 23, 2018
387
73
28
I did too. I had Mellanox SX6012, SN2100, SN2010 and SN2410 switches, but storage fast enough for 40G/100G is too expensive. Now I use a TP-Link ST5008 as my 10G switch; it's fanless with low power consumption.

I'm very seriously weighing moving over to QL45212, any experience with those?
Don't use it on Windows Server 2019: BSODs, driver issues.
 

Joshh

Member
Feb 28, 2017
61
16
8
43
I did too. I had Mellanox SX6012, SN2100, SN2010 and SN2410 switches, but storage fast enough for 40G/100G is too expensive. Now I use a TP-Link ST5008 as my 10G switch; it's fanless with low power consumption.


Don't use it on Windows Server 2019: BSODs, driver issues.
What do you mean storage for 40G/100G is expensive? Are you just referring to storage that could saturate that type of connection?
 

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
I did too. I had Mellanox SX6012, SN2100, SN2010 and SN2410 switches, but storage fast enough for 40G/100G is too expensive. Now I use a TP-Link ST5008 as my 10G switch; it's fanless with low power consumption.
So are you selling all those switches cheaply to the community now? ;) I could use another SN2100 :p
 

AveryFreeman

consummate homelabber
Mar 17, 2017
413
54
28
42
Near Seattle
averyfreeman.com
Don't use it on Windows Server 2019: BSODs, driver issues.
Thanks for the heads up. I've wrangled Hyper-V a few times with mixed success. Not a fan. Hopefully I won't have any issues with vSphere 7-ish.

Just built a micro-PC cluster to try Harvester 1.1.0 and had it on there for about a week. Decided it wasn't ready for prime time and went back to vSphere (sad). Learning the hard way that there's a reason VMware has like 90% of the market share.
 

AveryFreeman

consummate homelabber
Mar 17, 2017
413
54
28
42
Near Seattle
averyfreeman.com
What do you mean storage for 40G/100G is expensive? Are you just referring to storage that could saturate that type of connection?
I couldn't quite "saturate" my 40GbE with any of my storage, but an M.2 or U.2 NVMe got me sustained speeds of 20Gbps+ direct-connected, without any special tuning, via a vDS (distributed switch). Doing the quick math, that's about 2,500 MB/s.

40GbE+ might be useful for things like connecting machines with PCIe 4.0 NVMe or NVMe RAID setups. Beyond that, it's kinda overkill. On an all-internal network my VMs could only hit 26-29Gbps to one another via iperf3 using VMXNET3 anyway. Not sure if that's a limitation of the Linux kernel or of the X10SRL-F w/ E5-2650 v4 I was using. I guess it's possible some kernel-bypass software like DPDK could break that barrier, but I haven't explored it.
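
In case anyone wants to reproduce that number, this is roughly what I run between two VMs. It's only a sketch: it assumes iperf3 is in both guests, the other VM is already listening with iperf3 -s, and the address and stream count are placeholders for your own setup. The MB/s line at the end is the same quick math as above, bits divided by 8.

```python
# Rough sketch of the VM-to-VM throughput test. Assumes iperf3 is installed in
# both guests and the other side is already running `iperf3 -s`.
import json
import subprocess

SERVER = "10.0.0.12"   # placeholder: address of the VM running `iperf3 -s`
STREAMS = 4            # parallel TCP streams; a single stream tops out sooner

raw = subprocess.check_output(
    ["iperf3", "-c", SERVER, "-P", str(STREAMS), "-t", "10", "-J"],
    universal_newlines=True,
)
result = json.loads(raw)

bps = result["end"]["sum_received"]["bits_per_second"]
# same quick math as above: 20 Gbps / 8 bits per byte = 2.5 GB/s = ~2,500 MB/s
print("throughput: %.1f Gbps (~%.0f MB/s)" % (bps / 1e9, bps / 8 / 1e6))
```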
 

i386

Well-Known Member
Mar 18, 2016
4,221
1,540
113
34
Germany
I couldn't quite "saturate" my 40GbE with any of my storage,
Is that the goal? To have storage that is as fast (or faster) than the network?
I'm using 40GbE because I can saturate a 10GbE link with single ioDrives from 2012 (or even older) and with sequential reads/writes from a large HDD RAID.
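
The napkin math is simple enough. Here's a sketch with assumed numbers (the per-drive figure is a guess for sequential reads on a modern 7.2k disk, swap in your own):

```python
# Back-of-the-envelope: how much sequential throughput does an HDD RAID push
# compared to a 10GbE link? Both figures below are assumptions for the example.
HDD_SEQ_MB_S = 200      # assumed sequential MB/s per drive
DRIVES = 12             # assumed size of the RAID set

aggregate_mb_s = HDD_SEQ_MB_S * DRIVES       # ignores parity/controller overhead
line_rate_mb_s = 10 * 1000 / 8               # 10GbE is roughly 1,250 MB/s

print("array: ~%d MB/s, 10GbE: ~%d MB/s -> %s" % (
    aggregate_mb_s,
    line_rate_mb_s,
    "link saturated" if aggregate_mb_s > line_rate_mb_s else "link not saturated",
))
```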
 
  • Like
Reactions: AveryFreeman

AveryFreeman

consummate homelabber
Mar 17, 2017
413
54
28
42
Near Seattle
averyfreeman.com
Is that the goal? To have storage that is as fast (or faster) than the network?
I'm using 40GbE because I can saturate a 10GbE link with single ioDrives from 2012 (or even older) and with sequential reads/writes from a large HDD RAID.
I was just answering https://forums.servethehome.com/ind...fp-10gbe-gear-and-if-so-why.38009/post-353965

Yeah, 10GbE's not hard to saturate, I agree. I think 25GbE is going to be just fine, though. It seems like the sweet spot for lab rats currently: perfectly suitable, and no shoehorning in old QSFP+ switches with irritating firmware hacks, fan replacements, etc.

If people are partial to Mellanox cards, I've seen the CX4LX as low as $140 now. It's not $35 like a CX3, but CX3s are on their way out in vSphere 8.
 
  • Like
Reactions: mach3.2