PCIe lane confusion

Nicolai

Member
Sep 4, 2020
37
2
8
Hello Everyone in this fantastic forum.

I've already gotten a ton of help here, but now I've run into another question. I'm planning on building a NAS server. I want it to run Unraid as the OS, but I also want a Plex Media Server running on it and probably a website as well. All fine so far. Then, watching Linus Tech Tips, I got the idea that I could create a Windows 10 Pro VM and use it for relaxing with a game, surfing the web, photo editing, and generally as a workstation for my business. That's where I ran into the issue of PCIe lanes and how they're confusing me.

An Nvidia GTX 1080 Ti will need 16 PCIe lanes to run at x16. If I understand this correctly, a GC-Titan Ridge or a ThunderboltEX 3-TR will then need an additional 4 lanes to run at x4, putting me at 20 lanes needed, not including any HBA card I might need in the future for my plan of adding hot-swap bays as I can afford them. I've been reading the specs on several motherboards from Gigabyte and Asus, and they all disable PCIe slots when an M.2 NVMe drive is installed in a certain slot, split the one PCIe 3.0 x16 slot actually running at x16 into x8 if a certain other slot is used, and so on. Add to that the fact that these Thunderbolt cards are only supported on certain motherboards, all of which run Intel Core CPUs that are listed as having 16 PCIe lanes. Then, rewatching the Linus Tech Tips video, he said to install his Titan Ridge in the PCIe slot that's linked to the south bridge?

Can someone please dumb this down as much as humanly possible? Crayon drawing level of dumbing down if at all possible?
 

msg7086

Active Member
May 2, 2017
423
148
43
36
On a motherboard there are PCIe lanes from the CPU and lanes from the chipset (southbridge). If the CPU can only provide 16 lanes and you plug two devices into direct CPU lanes, then yes, either one slot is disabled or the other one is slowed down. If you plug devices into chipset lanes, the chipset acts like a network switch: it will multiplex your data streams into one and feed them to the CPU. There may be an x4 link connecting the chipset to the CPU, and all devices underneath share those 4 lanes. If you plug in eight x16 cards, although they could use up to 128 lanes, they will end up sharing the bandwidth of that x4 link. But they should all work (at least, basic functions would work).
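A rough way to picture that sharing, as a Python sketch. The device names, lane counts, and the even-split assumption are all illustrative, not taken from any real board; PCIe 3.0 is roughly 0.985 GB/s of usable bandwidth per lane.

```python
PCIE3_GBPS_PER_LANE = 0.985  # approx. usable GB/s per PCIe 3.0 lane

def effective_bandwidth(devices, uplink_lanes):
    """Each device is (name, lanes). A device on chipset lanes can burst
    up to its own link speed, but simultaneous traffic is capped by the
    chipset-to-CPU uplink, which all devices share (even split assumed)."""
    uplink = uplink_lanes * PCIE3_GBPS_PER_LANE
    results = {}
    for name, lanes in devices:
        own_link = lanes * PCIE3_GBPS_PER_LANE
        share = uplink / len(devices)  # fair share if everything talks at once
        results[name] = min(own_link, share)
    return results

# Hypothetical devices hanging off an x4 chipset uplink:
devices = [("NVMe SSD", 4), ("10GbE NIC", 4), ("SATA HBA", 8)]
print(effective_bandwidth(devices, uplink_lanes=4))
```

Each device alone could move several GB/s, but with all three active at once they split roughly 3.9 GB/s of uplink between them.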
 

Nicolai

Member
Sep 4, 2020
37
2
8
On a motherboard there are PCIe lanes from the CPU and lanes from the chipset (southbridge). If the CPU can only provide 16 lanes and you plug two devices into direct CPU lanes, then yes, either one slot is disabled or the other one is slowed down. If you plug devices into chipset lanes, the chipset acts like a network switch: it will multiplex your data streams into one and feed them to the CPU. There may be an x4 link connecting the chipset to the CPU, and all devices underneath share those 4 lanes. If you plug in eight x16 cards, although they could use up to 128 lanes, they will end up sharing the bandwidth of that x4 link. But they should all work (at least, basic functions would work).
So how many lanes does a chipset provide then? Just 4?

Should I plug any HBA cards in the chipset PCIe or the CPU PCIe slots?

How do I know which go to the CPU and which go to the chipset?
 

msg7086

Active Member
May 2, 2017
423
148
43
36
1. Depends on the actual chipset.

For example, in the combination of a B450 board with a Ryzen 3000 CPU: a typical Ryzen 3000 has 24 lanes; 16 go to one x16 slot, 4 go to one M.2 slot, and 4 go to the chipset. So the B450 chipset has an x4 uplink, and all devices attached to the chipset share that bandwidth (the chipset's own downstream lanes are only PCIe 2.0).

2. You have to decide which is more important. If you are building a gaming PC, you'd want your graphics card on the CPU PCIe slot. OTOH, if you are building a storage box and want maximum throughput from the disks, you'd want your HBA to get that sweet slot. Also, check your HBA's capability: if it only supports PCIe 2.0 x8, it would be a waste to put it in a PCIe 4.0 x16 slot.

3. Your motherboard manual should tell you which goes to where.
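Point 2 can be sketched numerically: a PCIe link trains to the lower generation and narrower width of the card and the slot, so a PCIe 2.0 x8 HBA in a PCIe 4.0 x16 slot still runs at 2.0 x8. Per-lane rates below are approximate usable GB/s.

```python
# Approximate usable GB/s per lane, by PCIe generation
PER_LANE_GBPS = {1: 0.25, 2: 0.5, 3: 0.985, 4: 1.969}

def negotiated_link(card_gen, card_lanes, slot_gen, slot_lanes):
    """A link trains to the lowest common generation and the narrower width."""
    gen = min(card_gen, slot_gen)
    lanes = min(card_lanes, slot_lanes)
    return gen, lanes, lanes * PER_LANE_GBPS[gen]

# A PCIe 2.0 x8 HBA in a PCIe 4.0 x16 slot:
gen, lanes, gbps = negotiated_link(card_gen=2, card_lanes=8,
                                   slot_gen=4, slot_lanes=16)
print(f"Link trains at PCIe {gen}.0 x{lanes}, about {gbps:.1f} GB/s")
```

The x16 slot's potential ~31 GB/s goes unused; the card caps the link at roughly 4 GB/s.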
 

Nicolai

Member
Sep 4, 2020
37
2
8
1. Depends on the actual chipset.

For example, in the combination of a B450 board with a Ryzen 3000 CPU: a typical Ryzen 3000 has 24 lanes; 16 go to one x16 slot, 4 go to one M.2 slot, and 4 go to the chipset. So the B450 chipset has an x4 uplink, and all devices attached to the chipset share that bandwidth (the chipset's own downstream lanes are only PCIe 2.0).

2. You have to decide which is more important. If you are building a gaming PC, you'd want your graphics card on the CPU PCIe slot. OTOH, if you are building a storage box and want maximum throughput from the disks, you'd want your HBA to get that sweet slot. Also, check your HBA's capability: if it only supports PCIe 2.0 x8, it would be a waste to put it in a PCIe 4.0 x16 slot.

3. Your motherboard manual should tell you which goes to where.
Plex Media Server only supports Intel integrated graphics (Quick Sync) for hardware transcoding. Some people have managed to get AMD APUs to work, but I think that's a bit outside what I could hope to do.

I'm planning on getting an LSI SAS controller when that time comes, since Unraid doesn't really play nice with the other HBAs on the market.

With that, I'm going to get an Intel CPU of some sort with integrated graphics that Plex Media Server can use for any transcoding that might be needed. Since I'm also planning on using it for playing games and editing photos, I suspect a Z490 or H470 chipset would be the better option? They're at least the only ones listed as supporting the two Thunderbolt cards. Unfortunately they only have 6 SATA ports, so I'll have to get an HBA card sooner than expected. I am, however, finding it hard to find a motherboard of that kind with the right PCIe configuration; they're all packed with PCIe 3.0 x1 slots.
 

msg7086

Active Member
May 2, 2017
423
148
43
36
I'm not familiar with Intel product lines, can't help you with that.
 

Nicolai

Member
Sep 4, 2020
37
2
8
I've found a Gigabyte X299 motherboard that seems to have more than enough PCIe lanes for my needs, but it doesn't say anything about lanes being split up when cards are installed.
When I look at the Gigabyte X299X Designare 10G, I can see that its slot configuration depends on the CPU installed. But nothing like that is mentioned for the Gigabyte X299-WU8, the X299 UD4 EX, or the X299 UD4 Pro, only for running Crossfire/SLI. I've looked in the datasheets, but that hasn't made me any wiser. Does that mean these motherboards have so many PCH PCIe lanes that the slots stay the same regardless of the CPU?
 

balnazzar

Active Member
Mar 6, 2019
221
30
28
How many lanes a motherboard will provide depends upon which processor you install. You should tell us about your use case; then we'll provide advice about CPU and motherboard.

Anyway, consider that:
1. If you look at "detailed specification", it will tell you how many lanes will be provided by which slot, depending upon the cpu.
2. You don't need 16 lanes to feed a GPU, no matter how powerful. 8 is more than sufficient.
3. If you still need more lanes, buy an X99/C612 motherboard (cheap), a C422/C621 (more expensive), or a TRX40 (even more expensive). Every processor you can install on these chipsets provides plenty of lanes. Particularly inexpensive are the E5 v3/v4 chips on eBay for X99/C612 (40 lanes).
 

Nicolai

Member
Sep 4, 2020
37
2
8
How many lanes a motherboard will provide depends upon which processor you install. You should tell us about your use case; then we'll provide advice about CPU and motherboard.

Anyway, consider that:
1. If you look at "detailed specification", it will tell you how many lanes will be provided by which slot, depending upon the cpu.
2. You don't need 16 lanes to feed a GPU, no matter how powerful. 8 is more than sufficient.
3. If you still need more lanes, buy an X99/C612 motherboard (cheap), a C422/C621 (more expensive), or a TRX40 (even more expensive). Every processor you can install on these chipsets provides plenty of lanes. Particularly inexpensive are the E5 v3/v4 chips on eBay for X99/C612 (40 lanes).
I'd like to build a server that I can also use as my PC for editing photos for work, playing games, and browsing. I've seen a video from Linus Tech Tips where he uses a Titan Ridge Thunderbolt card to make a terminal for his own PC; I'd quite like to do something like that. I'm thinking something like this:

  1. I want to run Unraid to create a NAS so I have data security for my job. For my job I use a raw format (.CR2), so the files are fairly large for photos, and eventually I'll need quite a lot of hard drives; I've already got 3 TB of photos spread across 3x 1 TB hard drives. They're affordable, at least. I like the looks of the SilverStone Technology SST-RM400, and I want to populate it with 2 of their SST-FS305B-12G hot-swap cages, because I know a hard drive will eventually fail, and I'd like swapping that drive for a new one to be an easy process. I'd also like my system drive to be on a pair of RAID 1 NVMe SSDs, so in the event one of those fails, I don't have to reinstall my server. For the storage drives I will eventually need 10 SATA3 connectors, so in time I will need a PCIe slot for an HBA card; but if the motherboard has at least 6 SATA connectors, then I won't need to buy that until I get the second hot-swap cage.
  2. I'd like to run a GPU capable of running games in the near future, with support for compiling 4k timelapse videos for my job. Then I'd like to pass that through a thunderbolt card to a terminal in my office. For this I'd like to run a Windows 10 Pro virtual machine.
  3. Finally I'd like an Intel GPU that's both power efficient and with integrated graphics I can use for hardware accelerating a Plex Media Server in case it will need to transcode something.
I didn't know a GPU didn't have to run at x16; I went off that number of lanes. But a Thunderbolt card runs at x4 and the LSI SAS 3008 HBA runs at x8, so going off an x8 GPU I'd need 20 lanes in total. Using the PCH PCIe slots is advised for a Thunderbolt card; I don't know about the LSI SAS 3008 HBA.
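The lane tally above, as a quick sketch. The 16-lane figure is the typical mainstream-Intel CPU count discussed earlier in the thread; the card lane counts are the ones from this post.

```python
CPU_LANES = 16  # typical mainstream Intel CPU lane count

# Lane requirements for the cards discussed in the thread
build = {
    "GPU (x8 is enough)": 8,
    "Thunderbolt card": 4,
    "LSI SAS 3008 HBA": 8,
}

total = sum(build.values())
overflow = max(0, total - CPU_LANES)  # lanes that must hang off the chipset

print(f"Total lanes wanted:  {total}")
print(f"CPU lanes available: {CPU_LANES}")
print(f"Lanes to push onto chipset slots: {overflow}")
```

With 20 lanes wanted against 16 from the CPU, 4 lanes' worth of devices (the Thunderbolt card, conveniently) would go on chipset slots.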

Doing a bit of research and looking at motherboards, I've stumbled across an M.2 to U.2 adapter from Gigabyte and one from ASRock. Would either of those be able to handle a Mini-SAS to SATA3 cable and split out to 4 SATA3 drives in the same way an LSI SAS 3008 HBA card can?
 

kapone

Well-Known Member
May 23, 2015
1,095
642
113
I don't get these kinds of posts/questions. Seriously, I don't. (OP, my apologies, I'm not trying to be rude or anything).

Why do people want to take a simple thing and make it exponentially more complex? A NAS requires very little horsepower.

Just do two systems. Don't overcomplicate things, when there's no need.

p.s. Oh, and Linus...well, let's leave it at that. That dude..
 

balnazzar

Active Member
Mar 6, 2019
221
30
28
Finally I'd like an Intel GPU that's both power efficient and with integrated graphics I can use for hardware accelerating a Plex Media Server in case it will need to transcode something.
You wanted to go with X299. No LGA 2066 processor has integrated graphics. So if an iGPU is a requirement, you should go with Comet Lake. If you want lots of lanes and/or slots, consider boards like the Supermicro C9Z490 with a PEX/PLX switch (x8/x8/x8/x8).
But be aware that for gaming even 4 lanes are sufficient, and that regular Z490/W480 boards have a typical layout like this: two mechanical x16 slots that run at x16/x0, or at x8/x8 if both are populated, plus another slot that runs at x4 off the chipset. You will be OK with that.

Could you link that video? I'm curious.
 

Nicolai

Member
Sep 4, 2020
37
2
8
I don't get these kinds of posts/questions. Seriously, I don't. (OP, my apologies, I'm not trying to be rude or anything).

Why do people want to take a simple thing and make it exponentially more complex? A NAS requires very little horsepower.

Just do two systems. Don't overcomplicate things, when there's no need.

p.s. Oh, and Linus...well, let's leave it at that. That dude..
I'm aware that Linus might not be a popular guy on a forum like this, but for a guy like me, he does a fairly good job of explaining and showcasing things.

I have other constraints that affect my decision to run both systems in one. I don't have space for a rackmount; instead I'll be removing a drawer from my TV furniture and replacing it with the server, using low-noise fans to keep the system as quiet as possible. Then there's the matter of cost: I might have my own business to run, but that doesn't mean I have a ton of jobs, and my only reliable source of income is student subsidies, which isn't much. So while I might have to buy slightly more expensive hardware, I will still be saving money compared to building two systems.

You wanted to go with X299. No LGA 2066 processor has integrated graphics. So if an iGPU is a requirement, you should go with Comet Lake. If you want lots of lanes and/or slots, consider boards like the Supermicro C9Z490 with a PEX/PLX switch (x8/x8/x8/x8).
But be aware that for gaming even 4 lanes are sufficient, and that regular Z490/W480 boards have a typical layout like this: two mechanical x16 slots that run at x16/x0, or at x8/x8 if both are populated, plus another slot that runs at x4 off the chipset. You will be OK with that.

Could you link that video? I'm curious.
Of course. It's the video named I may never upgrade again!, where he attempts to match the PlayStation 5's SSD performance.

I found this Z490 VISION D from Gigabyte today. Am I understanding correctly that, with a DisplayPort In, it can output video over Thunderbolt 3 in the same way a GC-Alpine Ridge or GC-Titan Ridge card can? Because if that's the case, I will have saved the 4 PCIe lanes, reducing my requirement to just 16.

I do apologize for having to ask, but what is a PEX/PLX switch?
 

balnazzar

Active Member
Mar 6, 2019
221
30
28
I do apologize for having to ask, but what is a PEX/PLX switch?
It's a chip that doubles the available number of PCIe lanes. It's typical of workstation-grade boards. Mind that it increases the overall power draw by 40-50 W.

Unfortunately I know nothing about thunderbolt, particularly about video over thunderbolt. Better to ask in a forum a bit less "server-oriented". I don't think anyone here has ever tinkered with such stuff. But the idea of a thunderbolt terminal is actually interesting.
 

Nicolai

Member
Sep 4, 2020
37
2
8
It's a chip that doubles the available number of PCIe lanes. It's typical of workstation-grade boards. Mind that it increases the overall power draw by 40-50 W.

Unfortunately I know nothing about thunderbolt, particularly about video over thunderbolt. Better to ask in a forum a bit less "server-oriented". I don't think anyone here has ever tinkered with such stuff. But the idea of a thunderbolt terminal is actually interesting.
Despite what the forum here apparently thinks of Linus, the idea came from him. Where he got it from, I don't know, but it's something with real potential in my situation. I should probably ask on the Linus Tech Tips forum about that part, then; that seems like a reasonable place for a question like that.

Do you happen to know if the Xeon E series is being phased out in favour of the Xeon W series? The W480 chipset supports the Xeon W series, while the only chipset I can find that supports the Xeon E series is the C246, which appears to lack a lot of what I would need for this setup.
 

balnazzar

Active Member
Mar 6, 2019
221
30
28
Do you happen to know if the Xeon E series is being phased out in favour of the Xeon W series? The W480 chipset supports the Xeon W series, while the only chipset I can find that supports the Xeon E series is the C246, which appears to lack a lot of what I would need for this setup.
IMHO you should go for the Xeon E5 v3/v4 series (socket 2011-3). They are dirt cheap on eBay (like $50 for an honest 8-core, or $120-130 for a capable 12-core like the E5-2690 v3). Don't worry, a processor either works or it doesn't. They provide 40 lanes.

If you want to go with new Xeons, try the W-12XX, the W-21XX, or the W-22XX. Other Xeons are not ideal for your use case.
 

kapone

Well-Known Member
May 23, 2015
1,095
642
113
I'm aware that Linus might not be a popular guy on a forum like this, but for a guy like me, he does a fairly good job of explaining and showcasing things.

I have other constraints that affect my decision to run both systems in one. I don't have space for a rackmount; instead I'll be removing a drawer from my TV furniture and replacing it with the server, using low-noise fans to keep the system as quiet as possible. Then there's the matter of cost: I might have my own business to run, but that doesn't mean I have a ton of jobs, and my only reliable source of income is student subsidies, which isn't much. So while I might have to buy slightly more expensive hardware, I will still be saving money compared to building two systems.



Of course. It's the video named I may never upgrade again!, where he attempts to match the PlayStation 5's SSD performance.

I found this Z490 VISION D from Gigabyte today. Am I understanding correctly that, with a DisplayPort In, it can output video over Thunderbolt 3 in the same way a GC-Alpine Ridge or GC-Titan Ridge card can? Because if that's the case, I will have saved the 4 PCIe lanes, reducing my requirement to just 16.

I do apologize for having to ask, but what is a PEX/PLX switch?
I'm not trying to sway you one way or another. Simply giving an opinion.

While cost is always a concern, the simplicity, functionality, complexity, stability, maintainability are all equally important factors.

For example, a NAS doesn't need much horsepower, so if you use a low-power system for it, you can leave it on 24x7. Combine the NAS with a gaming system, and now a much higher-power system has to be on 24x7 (assuming you want it to be, of course).
 

balnazzar

Active Member
Mar 6, 2019
221
30
28
Yes, a separate, low cost system to use as a nas would be a better idea. Much less hassle, too.
 

Nicolai

Member
Sep 4, 2020
37
2
8
I'm not trying to sway you one way or another. Simply giving an opinion.

While cost is always a concern, the simplicity, functionality, complexity, stability, maintainability are all equally important factors.

For example, a NAS doesn't need much horsepower, so if you use a low-power system for it, you can leave it on 24x7. Combine the NAS with a gaming system, and now a much higher-power system has to be on 24x7 (assuming you want it to be, of course).
I'm open to suggestions. I have 2 drawers in my TV furniture; one is currently occupied by a large pile of cables, old remotes, and batteries, while the other is where I store my movies. I could move the movies onto the NAS and then put another system in that drawer. One thing I'm not willing to compromise on is having a PC in my office anymore: if I do upgrade, it would have to be accessed through a terminal. I have customers come into my office, and the hum of a PC isn't ideal when we're sitting on a couch trying to plan some photos or a video.
 

Nicolai

Member
Sep 4, 2020
37
2
8
IMHO you should go for the Xeon E5 v3/v4 series (socket 2011-3). They are dirt cheap on eBay (like $50 for an honest 8-core, or $120-130 for a capable 12-core like the E5-2690 v3). Don't worry, a processor either works or it doesn't. They provide 40 lanes.

If you want to go with new Xeons, try the W-12XX, the W-21XX, or the W-22XX. Other Xeons are not ideal for your use case.
I looked into the Xeon E5 v3/v4 series; unfortunately they're not going to be adequate for my needs. They're Haswell and Broadwell based, and their Intel Quick Sync only supports MPEG-2 and AVC encoding.

If I were to go with the two-systems option, I'd still need Intel Quick Sync for any potential 1080p transcoding done by Plex Media Server, as well as options for connecting a PCIe 3.0 x8 LSI SAS controller in time. I'd prefer a SoC option for this, but I don't know if that's available.