I have a weird 10GbE NIC situation: no matter what I try, Windows 10 file transfers are stuck at around 80MB/s.
Here are the specs:
i5-8600K
Gigabyte Z390 Aorus Pro WiFi
16GB RAM
ADATA SX8200PNP 1TB NVMe SSD
Asus XG-C100C 10G NIC
The NAS on the network:
Synology DS1819+
with a 10G Mellanox ConnectX-2
Switches:
Netgear GS110EMX
Mikrotik CRS305-1G-4S+IN
I've tested different cables (I'm using Cat6a) and routed different ways through the switches. I've changed motherboards from a ROG Strix Z370-G Gaming (that setup had the same problem) and tried different PCIe slots. I've also swapped the NIC out for an Intel X550-T1 (same result).
The weird thing is that I've used both 10G NICs in another rig and they both work perfectly: over 500MB/s to the same NAS, consistently, across different routes through the switches, with the same NIC properties etc. The only real difference is that it's a 9900K with two NVMe drives (I thought those would eat up PCIe lanes, but it seems fine).
So I'm a little confused about why this rig is throttling so hard. iperf3 tells the same story, with roughly the same ~80MB/s ceiling.
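For reference, the tests were along these lines, run from PowerShell on the Windows box (the NAS IP below is a placeholder, and I'm assuming iperf3 is available on the DSM side over SSH):

    # On the NAS (over SSH), start a listener:
    iperf3 -s

    # On the Windows box: single TCP stream for 30 seconds
    iperf3 -c 192.168.1.50 -t 30

    # Same test with 4 parallel streams - if this is much faster,
    # a single stream (i.e. one CPU core) is the bottleneck
    iperf3 -c 192.168.1.50 -t 30 -P 4

    # And in reverse (NAS -> PC) to rule out a one-directional issue
    iperf3 -c 192.168.1.50 -t 30 -R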
I've tried everything I can think of and I'm a bit stumped. The CPU is the only consistent element left that I can point to as a problem. Or is it the NVMe drive using up the extra chipset PCIe lanes?
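For completeness, this is roughly how I've been comparing the adapter state between the two rigs (PowerShell; "Ethernet 2" is just a placeholder for whatever name the 10G adapter shows up under):

    # Negotiated link speed - a 10G card that synced at 1Gbps
    # would explain a ceiling in this ballpark
    Get-NetAdapter | Format-Table Name, InterfaceDescription, LinkSpeed, Status

    # Receive Side Scaling - if RSS is off, a single core services
    # all of the NIC's receive traffic
    Get-NetAdapterRss -Name "Ethernet 2"

    # Driver advanced properties (jumbo frames, interrupt moderation, etc.)
    Get-NetAdapterAdvancedProperty -Name "Ethernet 2" | Format-Table DisplayName, DisplayValue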
Hopefully someone on here has a few ideas about what's happening?