Can Windows 11 do 10 Gbps?


EncryptedUsername

New Member
Feb 1, 2024
I am new to the world of fiber networks, so I thought I'd dip my toes in cautiously to start with a small investment before diving into switches.

My goal: directly connect two machines via 10G fiber and get full bandwidth.
In short, I have partially succeeded, but the result is a bit lackluster and a little disappointing. So perhaps I just need my expectations checked, or perhaps something is wrong (i.e. I should have spent more money).

Machine A: Windows 11 latest, Intel i9-9900K, 32GB RAM, ASUS ROG Maximus Hero XI, card installed in the bottom x16 slot (Chipset z390). NVME read/write are 2 GB/s or better.
Machine B: Windows 11 latest, Intel i9-13900K, 64 GB RAM, ASUS ROG Maximus Z790 Hero, card installed in the bottom x16 slot (Chipset z790). NVME read/write are 5 GB/s or better.
The two NICs: NICGIGA Intel 82599(X520-DA1): https://www.amazon.ca/dp/B0CM37WWXF?ref=ppx_yo2ov_dt_b_product_details&th=1
The two SFP+ modules (intel coding): https://www.amazon.ca/dp/B01CN82LP8?ref=ppx_yo2ov_dt_b_product_details&th=1
The Cable: 10m https://www.amazon.ca/dp/B01C5HHFVC?psc=1&ref=ppx_yo2ov_dt_b_product_details
Static IPs on both ends, in a subnet that can't be confused with my existing legacy NICs.

I plugged it all in, installed the wired driver package from Intel's download site: Wired_driver_28.3_x64.zip

The cards are recognized, I've got lights on the NICs showing a 10G link, all good. Although what Windows reports depends on where you look. It's only been 40 years; give them some time to get link speed reporting working.
1707784812787.png 1707784843313.png


Here's where the disappointment comes:
  • iperf3 maxes out between the machines (in either direction) at about 6.5 Gbps. I have to use -w 1024k otherwise it only gets 2.5 Gbps!
  • Turning on jumbo frames (either 4088 or 9014) doesn't really affect the iperf results.
  • A Windows File Explorer copy goes at about 830 MB/s, or 6.6 Gbps. It's good; it's just not great. About 200 MB/s shy of what I was hoping for...
  • I was expecting/hoping to copy files over at between 1000 and 1100 MB/s.
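For reference, the iperf3 runs above look roughly like this (a sketch; 192.168.10.2 is a placeholder for whatever static IP you assigned to the other end):

```shell
# On machine B (server side):
iperf3 -s

# On machine A (client). -w enlarges the TCP window; -R reverses direction:
iperf3 -c 192.168.10.2 -w 1024k -t 30
iperf3 -c 192.168.10.2 -w 1024k -t 30 -R
```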
I've read through as many posts as I could find, but not a lot of folks seem to have this exact issue, although many have similar ones. Those are usually more extreme low-speed problems, and most never get resolved definitively.

So the real question is: has anyone got real-life experience of getting the full 9.x Gbps of bandwidth between two Windows boxes, or even to just one? Is it possible? Am I wasting my time, or is this as good as it gets?

Would love to hear what others have achieved.
Thanks
 

Attachments

Tech Junky

Active Member
Oct 26, 2023
Likely has to do with a few different things: disk speed and RAM. Windows also has some issues with default settings causing bottlenecks in TCP windowing.

I can get 1.5 GB/s using Thunderbolt between two machines, though, so it's possible. The link speed shows 20 Gbps, and the bottleneck is most likely a combination of TB and PCIe Gen 3. With the new ASM4242 USB4 Gen 4 card it boosts closer to 40 Gbps, but I haven't gotten my hands on one to test, and the laptop would still be using TB.
 

PigLover

Moderator
Jan 26, 2011
You are likely seeing single-threaded results. With iperf3, set up multiple parallel transfers and look at the totals. You should see something close to 10 GbE (9.9+). Not much you can do about file transfers, as the SMB implementation in Windows just sucks.
 

EncryptedUsername

New Member
Feb 1, 2024
You are likely seeing single-threaded results. With iperf3, set up multiple parallel transfers and look at the totals. You should see something close to 10 GbE (9.9+). Not much you can do about file transfers, as the SMB implementation in Windows just sucks.
I have tried with the parallel option as well.
> iperf3 -c <host> -w 1024k -P 4

It gives me a tiny bit extra, up to 7 Gbps, with no benefit from adding more threads. There is definitely a bottleneck, but where, I wonder.
 

Tech Junky

Active Member
Oct 26, 2023
but where, I wonder
Need more details on the systems you're using.

For me, I'm using NVMe Gen4 drives on both sides of the TB connection. Also, I noticed that when I freed up some extra space on the Windows partition, it increased speeds when moving data around. I had it capped at 100GB with about 5GB of space free and bumped it to 125GB, and speeds improved even though the data wasn't being moved to/from the Windows partition. It's a bit baffling how that improved the speed of the transfers, other than Windows probably caching the transfer vs just moving from one location to another.

Both systems are using TB4, but for Ethernet I use a dongle on the laptop for 5GE, and the other machine has a 5GE card inside as well. 10GE on a laptop doesn't make $ense, as the dongle would be another $150+ when the TB cable was under $25 and exceeds 10GE.
 

Tech Junky

Active Member
Oct 26, 2023
1707791786317.png
This is concerning even with the other side showing 10GE. My link speed for WiFi is 2.4 Gbps.
1707791865817.png

So, with that I can hit transfers of 1.7 Gbps over WiFi. Anyway... networking takes some tuning at times to get the full performance out of it. Today's adapters can do some fun stuff like combining two WiFi bands into a single pipe to achieve more bandwidth. On the wired side it tends to be more straightforward, unless you use NBASE-T options (2.5/5GE), but even then they tend to just work if both sides match.

10GE should just work once you rule out the card / optics / cable and have enabled the same features on both sides. Double-check things and clean them to rule out any dust or debris that could cause slower speeds.
 

dandanio

Active Member
Oct 10, 2017
Speaking from experience: tuning cards over 1gbps is more of an art than a science. :) Sometimes it feels like the butterfly effect is real. I have a 40gbps network at home and I have been tuning endpoints ever since I got it. Different PCI cards, BIOS settings, MULTIPLE QSFPs. I am even known for swapping fiber around the house for a different... brand. :) And I am not there yet. The best I can pull through is FreeBSD to FreeBSD, same switch, 32.8gbps with iperf3. So, since I have not reached 40gbps, I am not an expert; I might only be further along the path than you, but I am still a seeker. I can tell you this: focus on drivers, driver settings, queue lengths, memory buffers, and PCI and interrupt assignments. irqbalance might not be the best idea; try and report. If interested, I can share what I found out in greater detail.
 

jdnz

Member
Apr 29, 2021
there's a note on the tech specs page for that old z370 board about the bottom slot ( https://rog.asus.com/motherboards/rog-maximus/rog-maximus-x-hero-model/spec/ )

1 x PCIe 3.0 x16 (x4 mode) *1
Note
*1 The PCIe x4_3 slot shares bandwidth with PCIex1_3. The PCIe x4_3 runs x2 mode by default.

if that slot is indeed running at x2 then that's your issue - the x520/x540 are pcie2, and 2 lanes of pcie2 is about 800 MB/s of usable bandwidth (~6.5 Gbps), which is about where you seem to be.
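A quick sanity check of those numbers (treating PCIe 2.0 as ~500 MB/s of payload bandwidth per lane per direction, and assuming a rough 20% protocol-overhead rule of thumb):

```shell
# PCIe 2.0: 5 GT/s per lane with 8b/10b encoding -> ~500 MB/s payload per lane, per direction
per_lane=500
lanes=2
raw=$((per_lane * lanes))       # payload ceiling for an x2 link, in MB/s
usable=$((raw * 80 / 100))      # ~80% after TLP/protocol overhead (rule of thumb)
echo "x2 ceiling: ${raw} MB/s ($((raw * 8 / 1000)) Gbps)"
echo "x2 usable:  ~${usable} MB/s (~$((usable * 8)) Mbps)"
```

That ~6400 Mbps figure lands right on the ~6.5 Gbps iperf3 result reported above.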

You need at least 4 lanes to get full speed on those pcie2 cards. If you're lucky there's a BIOS option to turn off the paired x1 slot and switch back to full x4 mode; if not, the best option would be to ditch the x520 and shove an x550 in (they're dirt cheap now, and pcie3, so they run fine with just 2 lanes of pcie3).

obviously, if one of the 'real' x16 slots is still empty, move the card up there so it can get an x8 link (the gpu in the other slot will downshift to x8, but a lot of cards only run x8 anyway)
 

jei

Active Member
Aug 8, 2021
Finland
Windows 11 (attached image). 1.08 GB/s over 10GBASE-T.

This is over tens of meters of residential CAT6 copper. Same results with fiber in the past.

No magic. Good hardware. Tune (basically max out) send & receive buffers on both ends of the wire.
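On Windows, that buffer tuning can be scripted per adapter with PowerShell. A sketch, assuming an adapter named "Ethernet 2"; the exact property display names and allowed maximums vary by driver, so list them first:

```powershell
# See which tunables this driver exposes, and their valid values:
Get-NetAdapterAdvancedProperty -Name "Ethernet 2"

# Max out the buffers (4096 is a common Intel cap; your driver may differ):
Set-NetAdapterAdvancedProperty -Name "Ethernet 2" -DisplayName "Receive Buffers" -DisplayValue 4096
Set-NetAdapterAdvancedProperty -Name "Ethernet 2" -DisplayName "Transmit Buffers" -DisplayValue 4096
```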

Never tried Win11 <-> Win11 so YMMV.

Sidenote: after the Windows side goes to sleep and wakes up many times, transfer rates go down and it needs a reboot.

Bonus sidenote: iperf3 is not always indicative of real-world performance.
 

Attachments


i386

Well-Known Member
Mar 18, 2016
Germany
I'm moving data daily at close to 2 GByte/s from Win 11 Pro for Workstations to Server 2022 with default settings.

iperf is the wrong tool on Windows platforms (look at the GitHub issues for more information).
Use ntttcp instead.
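For anyone trying it, a minimal NTttcp run looks something like the following (flags per Microsoft's ntttcp tool; 192.168.10.2 is a placeholder for the receiver's static IP):

```shell
# On the receiving machine: 8 threads, any CPU, bound to the receiver's IP, 15-second run
ntttcp.exe -r -m 8,*,192.168.10.2 -t 15

# On the sending machine, with the same thread/CPU/IP mapping:
ntttcp.exe -s -m 8,*,192.168.10.2 -t 15
```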
 

EncryptedUsername

New Member
Feb 1, 2024
obviously, if one of the 'real' x16 slots is still empty, move the card up there so it can get an x8 link (the gpu in the other slot will downshift to x8, but a lot of cards only run x8 anyway)
@jdnz for the win.

Thanks to everyone for posting thoughts and experiences on this. I didn't RTFM, apparently! The shared PCIe bandwidth on my motherboard seems to have been the issue. After moving the card to the other "x16" slot that shares lanes with the GPU, iperf3 topped out at 9.71 Gbits/s. Windows file copy is now topping out at 1.05 GB/s, which seems to track with the results posted by @jei. When I enable 9014-byte jumbo frames, it goes up to 1.15 GB/s on copy and 9.91 Gbits/s on iperf3. This I can call a success.

1707828966451.png
So, PCIe bandwidth was the bottleneck. Thanks all.