So, I got bitten by the 40GbE bug while looking for more bandwidth between my Proxmox server and my desktop. The good news: the cards and the DAC are working, and the connection has been fun to play with. The bad news: it's well short of line rate, currently stuck around 25.5 Gbit/s.
Server: Proxmox 6.x, AMD Ryzen 5 2400G, 32 GB RAM, ASRock X570 Pro4
Desktop: Windows 10, AMD Ryzen 9 3900X, 32 GB RAM, MSI X470 Gaming Pro Carbon
Using Ethernet mode in the Windows drivers and a static IP on both ends (config sketch below)
Current Windows driver on the desktop
Default drivers in Proxmox
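For reference, here's roughly what the Proxmox side of the point-to-point link looks like now. A minimal sketch: enp1s0 and the 10.10.10.x addressing are placeholders for whatever the actual interface name and subnet are.

```
# /etc/network/interfaces snippet on the Proxmox host
# enp1s0 is a placeholder -- check `ip link` for the real 40GbE interface name
auto enp1s0
iface enp1s0 inet static
    address 10.10.10.1
    netmask 255.255.255.0
    mtu 9000
```

The Windows end just gets the matching static IP (10.10.10.2 here) and MTU set in the adapter properties.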
Started at about 11 Gbit/s with both iperf3 and iperf going from server to desktop (one direction, not bidirectional). Tuning so far:
- Bumped the MTU to 9000 (jumbo frames)
- Switched the Windows driver to the single-port-optimized setting
Now runs iperf3 at roughly 25.5 Gbit/s from server to desktop, still one direction only (commands sketched below)
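The test is roughly the following; 10.10.10.2 is the placeholder address of the desktop. A single TCP stream is often CPU-bound at these rates, so the parallel variant is there as a sanity check rather than something I've exhausted:

```
# on the desktop (receiver)
iperf3 -s

# on the Proxmox host (sender), single stream
iperf3 -c 10.10.10.2 -t 30

# same test with 4 parallel streams, in case one TCP stream is the limit
iperf3 -c 10.10.10.2 -t 30 -P 4
```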
- Checked lspci on Proxmox: shows x8 lanes at PCIe 3.0 (card in the primary x16 slot); the exact check is sketched after this list
- Checked HWiNFO64 on Windows: shows x8 lanes at PCIe 3.0 (card in slot 3 with the GPU in slot 1, both running x8, NVMe at x4)
- A Windows 10 VM on the Proxmox host running iperf3 to the desktop got about 7 Gbit/s at the default frame size and about 13 Gbit/s with jumbo frames (it uses the Red Hat VirtIO network driver; multiqueue sketch after this list)
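Two quick sketches to go with those last points. First, the lspci check (the 01:00.0 address is a placeholder; grab the real one from plain lspci output). Worth noting that x8 at PCIe 3.0 is roughly 63 Gbit/s usable, so the slot itself shouldn't be what caps a 40GbE link:

```
# find the NIC's PCI address
lspci | grep -i ethernet

# then confirm the negotiated width/speed (LnkSta) against the card's max (LnkCap)
lspci -vv -s 01:00.0 | grep -E 'LnkCap|LnkSta'
```

Second, on the VM result: I haven't ruled out VirtIO multiqueue, which Proxmox exposes per virtual NIC. A sketch, assuming VM ID 100 and bridge vmbr1 (both placeholders):

```
# give the VirtIO NIC 4 queues (commonly matched to the VM's vCPU count)
qm set 100 -net0 virtio,bridge=vmbr1,queues=4
```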
So, questions:
1: Is Ethernet mode itself what's capping these speeds?
2: Is there something obvious I missed in the Windows-side configuration? (See the PowerShell sketch below for what I can dump.)
3: Is the server hardware or Proxmox itself limiting what the host can push from the shell (i.e., not the VM)?
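For question 2, here's what I can pull from the Windows side if it helps anyone answer (PowerShell; the adapter name "Ethernet 2" is a placeholder):

```
# list the adapter's advanced properties (jumbo packet, buffers, offloads, etc.)
Get-NetAdapterAdvancedProperty -Name "Ethernet 2"

# check whether RSS is enabled and how it's spread across cores
Get-NetAdapterRss -Name "Ethernet 2"
```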
Been reading for a couple of days now and can't seem to find a consistent answer to these questions.
Thanks in advance,
Phil