ELI5 How dual x4 M.2 slots via PCH (DMI 3.0) work


IamSpartacus

Well-Known Member
Mar 14, 2016
2,516
650
113
I have this board (well, still waiting for it to ship) and I'm curious how the dual PCIe 3.0 x4 M.2 slots will work in practice, given that they are connected to the PCH, which has a single PCIe 3.0 x4 upstream connection. I assume I'm limited to the bandwidth of that x4 upstream link.

My question is, how will this work in practice if data is being transferred to both M.2 slots at the same time (such as two NVMe drives in a RAID 0 stripe or RAID 1 mirror)?
 

PigLover

Moderator
Jan 26, 2011
3,186
1,545
113
I’m confused by this. According to the block diagram on page 18 of the manual, the M.2 devices are each connected directly to the CPU, each on its own x4 link. It is the two x8 PCIe slots that are connected to the PCH (which is in turn connected to the CPU via an x4 link).

I think having the two PCIe slots connected this way might give me pause if running 10/25 GbE NICs. At least more than one of them.

BTW - do you know if this board will allow CPUs with an on-chip GPU to work with Quick Sync (for Plex or Blue Iris)? HDMI/DP is not exposed off the board as it uses the ASPEED BMC for video.
 

IamSpartacus

Well-Known Member
Mar 14, 2016
2,516
650
113
I’m confused by this. According to the block diagram on page 18 of the manual, the M.2 devices are each connected directly to the CPU, each on its own x4 link. It is the two x8 PCIe slots that are connected to the PCH (which is in turn connected to the CPU via an x4 link).

I think having the two PCIe slots connected this way might give me pause if running 10/25 GbE NICs. At least more than one of them.

BTW - do you know if this board will allow CPUs with an on-chip GPU to work with Quick Sync (for Plex or Blue Iris)? HDMI/DP is not exposed off the board as it uses the ASPEED BMC for video.
I'm not seeing what you're seeing. I see the PCIe slots going right into the CPU and the M.2 slots going to the chipset.



As for the iGPU and Plex, I hope so. If not, I'll be returning it. I can't find any confirmation as this board appears too new.
 

PigLover

Moderator
Jan 26, 2011
3,186
1,545
113
Ah, you are correct. I misread (or didn’t read) the labels on the boxes and mixed up the CPU and PCH.

Sorry for the confusion.
 

edge

Active Member
Apr 22, 2013
203
71
28
The C246 supports 24 PCIe lanes. If you look at the block diagram, you will see that M.2-P_1 uses lanes 5 through 8 and M.2-P_2 uses lanes 9 through 12. Each M.2 slot has its own 4 dedicated PCIe lanes. Where do you get the idea of "only a single PCIe 3.0 x4 upstream"?
 

IamSpartacus

Well-Known Member
Mar 14, 2016
2,516
650
113
The C246 supports 24 PCIe lanes. If you look at the block diagram, you will see that M.2-P_1 uses lanes 5 through 8 and M.2-P_2 uses lanes 9 through 12. Each M.2 slot has its own 4 dedicated PCIe lanes. Where do you get the idea of "only a single PCIe 3.0 x4 upstream"?
This is where I got that idea.

 

i386

Well-Known Member
Mar 18, 2016
4,241
1,546
113
34
Germany
My question is, how will this work in practice if data is being transferred to both M.2 slots at the same time (such as two NVMe drives in a RAID 0 stripe or RAID 1 mirror)?
Half* the bandwidth for SSD 1 and half of the bandwidth for SSD 2.

*This is just for illustration; there are a bunch of other devices connected to the chipset that transfer data to/from the CPU and reduce the throughput available for the SSDs.
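
As a rough sketch of that split (the per-drive speed and the background chipset traffic below are illustrative assumptions, not measurements):

```python
# Rough back-of-envelope: how much each PCH-attached NVMe drive can get
# when both are streaming through the single DMI 3.0 uplink at once.
# All figures below are illustrative assumptions, not measurements.

DMI3_BANDWIDTH_GBS = 3.93      # usable CPU <-> PCH link bandwidth (x4 @ 8 GT/s)
DRIVE_SEQ_READ_GBS = 3.5       # assumed sequential read of one Gen3 x4 NVMe SSD
OTHER_PCH_TRAFFIC_GBS = 0.5    # assumed SATA/USB/NIC traffic also crossing the DMI

available = DMI3_BANDWIDTH_GBS - OTHER_PCH_TRAFFIC_GBS
per_drive = min(DRIVE_SEQ_READ_GBS, available / 2)

print(f"Each drive tops out around {per_drive:.2f} GB/s "
      f"instead of {DRIVE_SEQ_READ_GBS} GB/s")
# -> roughly 1.7 GB/s per drive while both read sequentially at the same time
```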
 

edge

Active Member
Apr 22, 2013
203
71
28
That is 4x Direct Media Interface 3.0. A single DMI interface supports 3.93 GB/sec, which is roughly 4 PCIe lanes. The 4x DMI is roughly equivalent to a PCIe 3.0 x16. Yes, there is oversubscription of that interconnect considering the 24 PCIe lanes plus the SATA and USB ports. However, two PCIe 3.0 x4 links can consume only around half the upstream link.
 

i386

Well-Known Member
Mar 18, 2016
4,241
1,546
113
34
Germany
My interpretation is that DMI 3.0 equals 4x PCIe 3.0...
I'm seriously confused right now :D
 

edge

Active Member
Apr 22, 2013
203
71
28
1 DMI 3.0 =~ 4x PCIe 3.0.

There are 4x DMI 3.0 links between the C246 and the CPU, so think 4x (4x PCIe 3.0). Does that help?
 

IamSpartacus

Well-Known Member
Mar 14, 2016
2,516
650
113
1 DMI 3.0 =~ 4x PCIe 3.0.

There are 4x DMI 3.0 links between the C246 and the CPU, so think 4x (4x PCIe 3.0). Does that help?
Pretty sure this is incorrect. Everywhere I've read about DMI 3.0, it says the total bandwidth is roughly 4 GB/s.

DMI 3.0, released in August 2015, allows the 8 GT/s transfer rate per lane, for a total of four lanes and 3.93 GB/s for the CPU–PCH link.
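
For reference, that 3.93 GB/s figure follows directly from the link math (a minimal sketch; 128b/130b line coding assumed, the same as PCIe 3.0):

```python
# DMI 3.0 usable bandwidth, same link math as a PCIe 3.0 x4 connection.
lanes = 4
raw_gt_per_s = 8.0            # 8 GT/s per lane
encoding = 128 / 130          # 128b/130b line coding, as in PCIe 3.0
bits_per_byte = 8

usable_gb_per_s = lanes * raw_gt_per_s * encoding / bits_per_byte
print(f"{usable_gb_per_s:.2f} GB/s total for the CPU-PCH link")  # ~3.94 GB/s
```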
 

edge

Active Member
Apr 22, 2013
203
71
28
I went back and re-read the specs. You are correct, the PCH only supports 3.93 GB/sec to the CPU total (8 GT/s).

That is pretty anemic. It will definitely bottleneck NVMe reads. I hate to think of a couple of NVMe drives plus the SATA ports loaded with an SSD RAID. My initial thoughts were based on it being inconceivable to me that anyone would design a chipset with such a high level of oversubscription.

If you aren't using the CPU-connected PCIe slots, I would look to use them for the NVMe drives.
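
A quick sketch of the oversubscription being described (the device mix and throughput numbers are assumptions for illustration only):

```python
# Worst-case demand from PCH-attached devices vs. the single DMI 3.0 uplink.
# Device mix and throughput figures are illustrative assumptions.
demand_gbs = {
    "nvme_m2_slot_1": 3.5,   # Gen3 x4 NVMe, sequential read
    "nvme_m2_slot_2": 3.5,
    "sata_ssd_raid": 1.5,    # a few SATA SSDs striped (~550 MB/s each)
}
DMI3_GBS = 3.93

total = sum(demand_gbs.values())
print(f"Peak demand {total:.1f} GB/s vs a {DMI3_GBS} GB/s uplink "
      f"-> about {total / DMI3_GBS:.1f}x oversubscribed")
# CPU-attached PCIe slots sidestep this bottleneck entirely
```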
 

IamSpartacus

Well-Known Member
Mar 14, 2016
2,516
650
113
I went back and re-read the specs. You are correct, the PCH only supports 3.93 GB/sec to the CPU total (8 GT/s).

That is pretty anemic. It will definitely bottleneck NVMe reads. I hate to think of a couple of NVMe drives plus the SATA ports loaded with an SSD RAID. My initial thoughts were based on it being inconceivable to me that anyone would design a chipset with such a high level of oversubscription.

If you aren't using the CPU-connected PCIe slots, I would look to use them for the NVMe drives.
Unfortunately I am. And unfortunately, the DMI 3.0 limitation isn't unique to this chipset; it's the case with most Intel chipsets.
 

IamSpartacus

Well-Known Member
Mar 14, 2016
2,516
650
113
I may just need to rethink using an HBA in this build and instead put the HBA with all my disks into my other server. Not really an issue since my HBA is connected to external DAS units. I was just really hoping not to have all my bulk storage be non-local to Plex, as that causes Plex to not pick up changes automatically.