10Gb copper: is it going to be available for home use (sub-$100 switches) soon?


Railgun

Active Member
Jul 28, 2018
148
56
28
While some people here have managed to run 10Gig over Cat5e, I have not managed to get more than ~500MB/s (NVMe to L2ARC cache). Just changing to a Cat6 cable got it to 880MB/s (a Cat7 cable actually got me 1240MB/s). Still not great... a bad experience with copper for me. (I didn't cheap out on the cables either.)

Fiber got me a solid 1200MB/s on the same system/setup.
I used it with a QNAP QSW-M408-4C, and I did try to tune things.
There are some considerations: length, proximity to other things (power cables and the like, especially if running in parallel), and the terminations themselves. In my case, it's cabling that was installed throughout the house during its build. I don't have an exact distance, but it's approximately 30 ft, with a largish margin for error there. I've gotten a full GB/s of throughput when my NAS works the way it should. It's an onboard Aquantia AQC107 NIC on an Asus Zenith II Extreme, with all UniFi switches in the middle and an FS copper SFP toward the PC. Fiber between the server and the switches.

That's correct. Most companies are actually moving to provide better WiFi and are no longer investing in the LAN connection. WiFi also has better security solutions.
Based on what? Every new office I've been a part of does both. Wired security is no different from wireless in that you have centralized control over the clients that connect; the method by which they connect doesn't matter. But as this is a home solution, that's neither here nor there.
 

CyklonDX

Well-Known Member
Nov 8, 2022
846
279
63
There are some considerations: length, proximity to other things (power cables and the like, especially if running in parallel), and the terminations themselves. In my case, it's cabling that was installed throughout the house during its build. I don't have an exact distance, but it's approximately 30 ft, with a largish margin for error there. I've gotten a full GB/s of throughput when my NAS works the way it should. It's an onboard Aquantia AQC107 NIC on an Asus Zenith II Extreme, with all UniFi switches in the middle and an FS copper SFP toward the PC. Fiber between the server and the switches.

Cables were literally like 5ft in the rack.
 

i386

Well-Known Member
Mar 18, 2016
4,245
1,546
113
34
Germany
While some people here have managed to run 10Gig over Cat5e, I have not managed to get more than ~500MB/s (NVMe to L2ARC cache).
What kind of Cat5e cable did you use? UTP? STP? S/FTP? Something else?
(People who say fiber is complicated forget that there are a bunch of different standards for twisted-pair copper too :D)
 

CyklonDX

Well-Known Member
Nov 8, 2022
846
279
63
What kind of Cat5e cable did you use? UTP? STP? S/FTP? Something else?
(People who say fiber is complicated forget that there are a bunch of different standards for twisted-pair copper too :D)
Cat5e Monoprice UTP (350MHz)

Other cables I used:
Cat6 from Cable Matters, GWFIBER, and Ultra Clarity Cables
Cat7 from Cable Matters and Monoprice

So far Cable Matters has produced the best cables, in my opinion, though it's hard to say they reach the proper standards. Cat cables are a shot in the dark. (I ended up using Cat7 for 10Gig over copper.)

Fiber is much easier, but it has the additional cost of SFPs.
 

mattventura

Active Member
Nov 9, 2022
447
217
43
I was doing 10Gig on some generic neon-green spool of Cat5e cable I had, with no-name RJ45 connectors. I think I got lucky.

That's correct. Most companies are actually moving to provide better WiFi and are no longer investing in the LAN connection. WiFi also has better security solutions.
But if you want to invest in WiFi, you still need the wired backhaul. You can't have faster WiFi without faster Ethernet.
 

Bert

Well-Known Member
Mar 31, 2018
841
392
63
45
While some people here have managed to run 10Gig over Cat5e, I have not managed to get more than ~500MB/s (NVMe to L2ARC cache). Just changing to a Cat6 cable got it to 880MB/s (a Cat7 cable actually got me 1240MB/s). Still not great... a bad experience with copper for me. (I didn't cheap out on the cables either.)

Fiber got me a solid 1200MB/s on the same system/setup.
I used it with a QNAP QSW-M408-4C, and I did try to tune things.
I think distance and bends make a big difference in copper performance. There are rules about bend radius and the like. I also wonder whether people who observe 10Gig working on Cat5e are measuring the actual speeds with iperf.
 

jdnz

Member
Apr 29, 2021
81
21
8
I think distance and bends make a big difference in copper performance. There are rules about bend radius and the like. I also wonder whether people who observe 10Gig working on Cat5e are measuring the actual speeds with iperf.
Cat5e isn't officially rated for 10GBASE-T at all, though short runs often work (most people play it safe and stay under 30m); Cat6 is specified for up to 55m under favorable alien-crosstalk conditions, and Cat6a is good for the full 100m. But unless you've got a really big house, Cat6a is almost certainly overkill.

As with all links, I'd expect people to test them to verify they're working to spec; it's easy on any cable plant to have a dodgy termination and massive packet loss.
 

CyklonDX

Well-Known Member
Nov 8, 2022
846
279
63
I think distance and bends make a big difference in copper performance. There are rules about bend radius and the like. I also wonder whether people who observe 10Gig working on Cat5e are measuring the actual speeds with iperf.
iperf is nice, but it isn't the 'actual speed'; overhead and other things are at play in the real world.
In my case, I first verify that I can move data locally from one disk to another at speeds well beyond 10Gig; then I tested multiple cables and got different results.
I used a fancy LinkIQ cable tester at work, and none of the purchased Cat cables actually qualified for their rated bandwidth, even at no more than 5ft of length. After taking a few of them apart and comparing them, two weren't even twisted pair..., one had too small a separator (Cable Matters Cat6), and one lacked it completely. At 1-5Gig most are just fine, but going to 10Gig I had no luck.
 

acquacow

Well-Known Member
Feb 15, 2017
787
439
63
42
I think distance and bends make a big difference in copper performance. There are rules about bend radius and the like. I also wonder whether people who observe 10Gig working on Cat5e are measuring the actual speeds with iperf.
Yeah, I get ~9.8Gb/s over iperf3 between my FreeNAS box, my desktop, and other servers.

My file copy speeds are just about as good, too:
[screenshot: 1673649178335.png]

It goes both ways:
[screenshot: 1673649280194.png]

All over Cat5e.

-- Dave
 

CyklonDX

Well-Known Member
Nov 8, 2022
846
279
63
This is nice.

(10Gig should top out at 1250MB/s; on fiber I can top out at 1200MB/s.)
 

LodeRunner

Active Member
Apr 27, 2019
540
227
43
This is nice.

(10Gig should top out at 1250MB/s; on fiber I can top out at 1200MB/s.)
That assumes 100% data with no overhead; in practice, you'll never see that. But 1.2GB/s is excellent. I'd be surprised if you could get much higher than that on 10Gig: you're at 96% throughput.
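The arithmetic behind that percentage is simple enough to sketch (a hypothetical check, using the 1200MB/s fiber figure quoted above):

```python
# 10 Gbit/s expressed as a raw byte rate, compared with an observed transfer rate.
line_rate_mbps = 10_000_000_000 / 8 / 1_000_000  # 1250 MB/s of raw line rate
observed_mbps = 1200                             # the fiber number reported above

utilization = observed_mbps / line_rate_mbps
print(f"ceiling: {line_rate_mbps:.0f} MB/s, utilization: {utilization:.0%}")
```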
 

CyklonDX

Well-Known Member
Nov 8, 2022
846
279
63
I'd buy 50-100MB/s of overhead, but 200MB/s, not so much, especially since this is a local network.
 

bitbckt

will google compiler errors for scotch
Feb 22, 2022
213
134
43
That's not how that works; "local" has very little to do with connection overhead.

10Gbit is the maximum throughput of the L1 link. Every layer of the stack from there up to the application will add at _least_ some framing overhead, not to mention "superfluous" traffic with no application data like TCP handshakes, keepalive packets, TLS session negotiation, &c. Achieving 96% link utilization _is_ excellent; achieving 100% is not possible.
 

CyklonDX

Well-Known Member
Nov 8, 2022
846
279
63
On a local network with potentially one switch (both hosts sitting on the same switch), all those extras are going to total really far below 5MB/s. It's literally all on the hosts to send/receive.

(You can also top out 'over' the link speed if you use a compression algorithm, but only on a local network with a switch in between that supports it.)
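The compression point can be illustrated with a small sketch (hypothetical numbers; it assumes a highly compressible payload, which is the only case where this effect shows up):

```python
import zlib

payload = b"All work and no play makes Jack a dull boy. " * 200_000  # ~8.8 MB, very repetitive
wire_bytes = zlib.compress(payload, level=1)  # what would actually cross the link

ratio = len(payload) / len(wire_bytes)
link_mbps = 1250                              # raw 10 Gbit/s ceiling in MB/s
effective_mbps = link_mbps * ratio            # apparent payload rate if only the
                                              # compressed bytes traverse the wire
print(f"compression ratio ~{ratio:.0f}x -> effective payload rate ~{effective_mbps:,.0f} MB/s")
```

With incompressible data (already-compressed media, encrypted streams), the ratio drops to ~1x and the wire rate is the ceiling again.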
 

LodeRunner

Active Member
Apr 27, 2019
540
227
43
At 9k MTU, and best case scenario overhead (90 bytes per packet) loss will be at least 12 MBytes/s. That's L1/L2 frame, TCP/IP, and SMB headers. 10Gbit at 9k MTU works out to ~138889 packets/sec. At 1500 MTU, it'd be 833333 packets/s and a minimum overhead of ~71.5 MBytes/s. Both of those assume maximum transmission rate.

So in a perfect world, SMB over a 10 Gbit link can hit 99% throughput. Reality being what it is, 96% is really good.
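That math can be reproduced directly (a sketch of the post's own arithmetic; the 90-byte figure is its assumed minimal Ethernet + TCP/IP + SMB header total per packet):

```python
LINE_RATE = 10_000_000_000 / 8  # 10 Gbit/s as bytes per second

def header_cost(mtu, header_bytes=90):
    """Packets/s at full line rate, bytes/s consumed by headers, and payload fraction."""
    packets_per_sec = LINE_RATE / mtu
    overhead = packets_per_sec * header_bytes
    payload_fraction = 1 - header_bytes / mtu
    return packets_per_sec, overhead, payload_fraction

for mtu in (9000, 1500):
    pps, ovh, frac = header_cost(mtu)
    print(f"MTU {mtu}: {pps:,.0f} pkt/s, {ovh / 2**20:.1f} MiB/s of headers, {frac:.0%} payload")
```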
 

bitbckt

will google compiler errors for scotch
Feb 22, 2022
213
134
43
At 9k MTU, and best case scenario overhead (90 bytes per packet) loss will be at least 12 MBytes/s. That's L1/L2 frame, TCP/IP, and SMB headers. 10Gbit at 9k MTU works out to ~138889 packets/sec. At 1500 MTU, it'd be 833333 packets/s and a minimum overhead of ~71.5 MBytes/s. Both of those assume maximum transmission rate.

So in a perfect world, SMB over a 10 Gbit link can hit 99% throughput. Reality being what it is, 96% is really good.
All true, though I don't like to make assumptions about the capabilities of randos on the internet and their environments.

Saturating 10Gbit (even over copper) just isn't a hard problem in 2023. That said, reaching 100% utilization - as seems to be an expectation here - isn't reality. Shrug.
 

LodeRunner

Active Member
Apr 27, 2019
540
227
43
Oh, agreed; I was just doing the bare-minimum math to show that the 5MB/s estimate for overhead was low by at least a factor of two. TCP packets can have a header of up to 60 bytes, and SMB has a variable parameter block that comes after the fixed 32-byte header. I used the bare-minimum TCP/IP header sizes and assumed 0 bytes in the SMB parameter block, just to get the absolute best-case numbers.
 
Last edited: