Issue Summary: Inconsistent SMB Throughput on 10Gb Network (Windows Host vs Laptop vs VMs)
I’m troubleshooting a persistent throughput issue on what should be a near pro-grade 10GbE network (unmanaged switches), and I’m trying to identify whether this is a Windows SMB / Hyper-V / NIC driver interaction or a deeper architectural problem.
Environment (simplified):
- Server: Supermicro H12 platform
  - AMD EPYC 7002/7003
  - Dual Broadcom BCM57416 10GBase-T NICs
  - Windows host running Hyper-V
  - an SSD holding all the VMs
  - a spinning disk holding the source files
- Windows 11 PC: holds the destination files, on a spinning disk
- Windows laptop with a USB 2.5G adapter
- Windows VMs on Hyper-V (on the server)
- Network: all links negotiate at 10Gb (the laptop's link at 2.5Gb), no known 1G hops, same destination for all tests
The transfer test is identical in every case: the source is a network share on the Supermicro server (spinning disk), and the destination is a spinning disk on another Windows 11 machine. Only the machine that initiates the transfer changes between runs. I don't understand why the speeds vary so much.
Observed behavior:
- Laptop: ~150+ MB/s (the fastest, oddly)
- Supermicro host: ~50–70 MB/s (not bad, but much slower than the laptop, and it's the host!)
- Windows VM: ~20 MB/s (ridiculously slow)
Key confirmations:
- No cable or link negotiation issues
- No 1G switch hops
- Disk performance is not the bottleneck (the laptop proves this)
- SR-IOV is not available under Windows Hyper-V for the Broadcom NICs (SriovSupport : NotSupported)
- The Hyper-V external vSwitch is correctly configured
- The problem persists even when the Supermicro host is the source of the data
Why this seems wrong:
If the laptop can read/write the same spinning disks at ~150 MB/s over the network, the Supermicro host (with more CPU, more RAM, and 10Gb NICs) should not be limited to ~50–70 MB/s as an SMB source.
I also can't figure out why running the transfer from a VM is so much slower still.
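A quick back-of-the-envelope check of raw link ceilings makes the gap concrete (a sketch only; real SMB throughput sits below these figures because of TCP/IP and SMB protocol overhead):

```python
def line_rate_mb_s(link_gbit):
    """Convert a link speed in Gbit/s to its decimal-MB/s ceiling,
    ignoring protocol overhead (an assumption; real SMB tops out lower)."""
    return link_gbit * 1000 / 8

laptop_ceiling = line_rate_mb_s(2.5)  # 312.5 MB/s ceiling for the 2.5G adapter
host_ceiling = line_rate_mb_s(10)     # 1250 MB/s ceiling for the 10G NICs
```

The laptop's ~150 MB/s is roughly half its 2.5G ceiling, which is plausible for a single spinning disk on each end. The host's ~50–70 MB/s is only about 5% of its 10G link, and the VM's ~20 MB/s under 2%, so neither the wire nor the disks explain those two results.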
Thank you.