How Bad are Chipset/PCH Lanes?


mattventura

Active Member
Nov 9, 2022
447
217
43
I'm currently running a 40GbE x8 NIC in my desktop (MCX354A-FCBT), but it's getting quite ancient and I'd like to upgrade. Problem is, DDR5 HEDT stuff isn't out yet, and consumer chips don't have many PCIe lanes (either 24 on AM5, or 20 for Intel). So the question is, would using chipset lanes noticeably impact networking performance? My use case would mostly just be network storage (SMB client). Surprisingly, there are very few benchmarks of this, so I'm wondering whether this is a legitimate worry and I should wait for HEDT (or even full-blown workstation) stuff, or whether I should just buy a non-HEDT system now and use PCH lanes.
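Rough math on whether the link widths even matter here (just a sketch; the per-lane figures are approximate usable throughput after encoding overhead, not spec maximums):

```python
# Back-of-the-envelope: 40GbE vs. PCIe link widths.
# Per-lane figures are approximate usable GB/s after encoding overhead.
GBPS_PER_LANE = {"3.0": 0.985, "4.0": 1.969}

nic_line_rate = 40 / 8                     # 40GbE line rate ~= 5 GB/s
cx3_slot = 8 * GBPS_PER_LANE["3.0"]        # ConnectX-3 is Gen3 x8: ~7.9 GB/s
chipset_x4 = 4 * GBPS_PER_LANE["4.0"]      # a Gen4 x4 chipset slot: ~7.9 GB/s

print(f"40GbE needs ~{nic_line_rate:.1f} GB/s")
print(f"Gen3 x8 slot: ~{cx3_slot:.1f} GB/s; Gen4 x4 slot: ~{chipset_x4:.1f} GB/s")
```

So the NIC itself fits in either case; the open question is what happens when the link is shared.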
 

andrewbedia

Well-Known Member
Jan 11, 2013
701
260
63
...why not use CPU lanes? Both AMD and Intel have built-in graphics options on their DDR5 platforms
 

mattventura

Active Member
Nov 9, 2022
447
217
43
It's for a desktop with a dedicated GPU. Splitting the main x16 into x8+x8 is also an option, but I know that can have a noticeable impact on some applications.
 

i386

Well-Known Member
Mar 18, 2016
4,243
1,546
113
34
Germany
So the question is, would using chipset lanes noticeably impact networking performance? My use case would mostly just be network storage (SMB client).
(I don't have benchmarks)
I think that as long as you are not saturating (or trying to saturate) that 40GbE link via a ramdisk, it won't be noticeable in everyday usage like occasionally copying a UHD disc, the newest Rocky Linux ISO, or photos/office files.
Surprisingly, there are very few benchmarks of this, so I'm wondering whether this is a legitimate worry and I should wait for HEDT (or even full-blown workstation) stuff, or whether I should just buy a non-HEDT system now and use PCH lanes.
Do it like me and get a Threadripper Pro/Epyc system, and forget thinking about PCIe lanes for a while :D
(It's expensive though)
 

jdnz

Member
Apr 29, 2021
81
21
8
how fast is the network storage you're accessing? Spinning rust arrays or NVMe arrays?
 

CyklonDX

Well-Known Member
Nov 8, 2022
835
273
63
Going through the chipset means more interrupts and additional overhead, so your latency will rise by an amount tied to the chipset's clock rate.
(If you will be moving a lot of data, decent cooling on the chipset would be desirable.)

The total bandwidth would still come down to how many lanes your chipset provides to the device.
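If you want to eyeball the interrupt side on Linux, a quick sketch (this assumes the mlx4 driver tags its IRQs with "mlx4" in /proc/interrupts; adjust the match string for your NIC/driver):

```python
# Sketch: per-IRQ interrupt totals for a Mellanox ConnectX-3 on Linux.
# Assumes IRQ lines contain "mlx4"; change the pattern for other NICs.
with open("/proc/interrupts") as f:
    cpus = f.readline().split()              # header row: CPU0 CPU1 ...
    for line in f:
        if "mlx4" not in line:
            continue
        irq, rest = line.split(":", 1)
        counts = rest.split()[:len(cpus)]    # per-CPU counts come first
        print(irq.strip(), sum(int(c) for c in counts))
```

Run it before and after a big transfer and compare the totals for the CPU-attached vs. chipset-attached slot if you can test both.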
 

zir_blazer

Active Member
Dec 5, 2016
357
128
43
On LGA 1700, Intel uses a PCIe 4.0 x8 link for DMI between the chipset and the processor, whereas AMD on AM5 uses just x4, so you will most likely be bottlenecked on AM5 if you place the NIC on chipset lanes.
Note that while plugging the NIC directly into the processor is of course lower latency from the processor's point of view, I have no idea whether there is P2P (peer-to-peer) traffic between chipset devices (PCI Express supports that). If there is, you could do something like sending data from the NIC straight to a SATA disk plugged into the chipset's SATA controller, in which case being on chipset lanes might actually be lower latency, assuming the data doesn't have to go all the way to the processor and back.
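Putting rough numbers on the shared uplink (same caveats as the earlier sketch: approximate usable per-lane throughput, and real DMI efficiency will vary):

```python
# Hypothetical worst case: NIC at full 40GbE while chipset-attached
# NVMe/SATA move data at the same time, all sharing the uplink.
GBPS_PER_LANE_GEN4 = 1.969                 # approx usable GB/s per Gen4 lane

intel_dmi = 8 * GBPS_PER_LANE_GEN4         # LGA 1700 DMI 4.0 x8: ~15.8 GB/s
amd_uplink = 4 * GBPS_PER_LANE_GEN4        # AM5 chipset uplink 4.0 x4: ~7.9 GB/s
nic = 5.0                                  # 40GbE line rate ~= 5 GB/s

print(f"Headroom left for storage: Intel ~{intel_dmi - nic:.1f} GB/s, "
      f"AM5 ~{amd_uplink - nic:.1f} GB/s")
```

On those numbers, AM5 only gets tight if the NIC and chipset storage are both busy at once.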
 

mattventura

Active Member
Nov 9, 2022
447
217
43
how fast is the network storage you're accessing? Spinning rust arrays or NVMe arrays?
Somewhere in between. A few spinning rust drives with some SAS3 SSDs as cache, plus RAM cache. It's hard to test what the theoretical max would be when my old desktop is currently the bottleneck (even an iperf run gets bottlenecked slightly), but I can get over 2GB/s in ideal circumstances.