CWWK/Topton/... Nxxx quad NIC router


pigr8

Member
Jul 13, 2017
on what board are you having so many issues with the a+e slot? on my p5 v3 i never had any of those issues, and i have the nvme disks in a mdadm array, and it would be really bad if any of them suddenly dropped out.
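A minimal sketch for catching exactly that failure mode, assuming a standard Linux /proc/mdstat layout (nothing board-specific here): it flags any md array running with fewer active members than configured, so a silently dropped NVMe member gets noticed before a second failure.

```python
#!/usr/bin/env python3
# Sketch (assumes a standard Linux /proc/mdstat layout): flag md arrays
# whose active-device count has dropped below the configured count.
import re

def degraded_arrays(mdstat_text):
    bad = []
    # Each array block starts with e.g. "md0 : active raid1 ..."; a
    # following line carries the "[configured/active]" counter, e.g. "[3/2]".
    for m in re.finditer(r"^(md\w+) :.*?\[(\d+)/(\d+)\]", mdstat_text,
                         re.M | re.S):
        name, configured, active = m.group(1), int(m.group(2)), int(m.group(3))
        if active < configured:
            bad.append(f"{name} ({active}/{configured} members active)")
    return bad

if __name__ == "__main__":
    with open("/proc/mdstat") as f:
        broken = degraded_arrays(f.read())
    print("DEGRADED: " + ", ".join(broken) if broken else "all md arrays healthy")
```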
 

rogervn

New Member
Sep 22, 2024
on what board are you having so many issues with the a+e slot? on my p5 v3 i never had any of those issues, and i have the nvme disks in a mdadm array, and it would be really bad if any of them suddenly dropped out.
How do I tell the versions apart? In the BIOS it shows CW-ADLNTBX-1C2L
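One way to check without opening the case is to read the DMI strings the firmware exposes; a sketch for a Linux host (which fields are populated varies by BIOS vendor, and `sudo dmidecode -t baseboard` gives the same information):

```python
#!/usr/bin/env python3
# Sketch: print the board/BIOS identification strings the firmware exposes
# via sysfs. Which fields are populated depends on the BIOS vendor.
from pathlib import Path

DMI = Path("/sys/class/dmi/id")
for field in ("board_vendor", "board_name", "board_version",
              "product_name", "bios_version", "bios_date"):
    try:
        print(f"{field:14s} {(DMI / field).read_text().strip()}")
    except (FileNotFoundError, PermissionError):
        print(f"{field:14s} <unavailable>")
```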
 

oneplane

Well-Known Member
Jul 23, 2021

Drabon

New Member
May 25, 2024
And how does that link apply to the current problem? Ignoring the two-year age of the posting, they are talking about a PCIe-slot-to-NVMe splitter adapter on a platform with an N5105, and they even mention that the number and kind of drives is important. So I assume that this issue is not about clocks, else the drive would work if it were the only one connected.
 

son1s

New Member
Aug 30, 2024
1 mini-PCIe (I think it is 1x lane only and I plan to convert it into a regular M.2 NVMe slot with an adapter.) After fitting the adapter it is possible to fit a 2280-size NVMe drive, but it is a tight fit against the back of the RJ45 block. I plan to use a 2280 NVMe drive that has no components in the top section that might get damaged by any bending / flexing
Currently deciding on this particular unit. I want to use it as a Proxmox host with OpenWrt. Curious about the mini PCIe slot: I want to put a Wallys MT7915 in it. Does the slot have PCIe lanes, and how many of them?
Would that be enough for that particular module? (appreciate it if you could check)

Also I want to know your general experience with this mini PC. Is the fan loud? Any problems with overheating? Anything else to share, maybe?
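For the lane question: once a card is seated, a Linux host reports how many lanes actually trained. A sketch (the address 0000:03:00.0 is a placeholder; find the real one with `lspci`):

```python
#!/usr/bin/env python3
# Sketch: report negotiated vs. maximum PCIe link speed/width for a device.
# "0000:03:00.0" is a placeholder address; find yours with `lspci`.
from pathlib import Path

def link_info(bdf="0000:03:00.0"):
    dev = Path("/sys/bus/pci/devices") / bdf
    return {attr: (dev / attr).read_text().strip()
            for attr in ("current_link_speed", "current_link_width",
                         "max_link_speed", "max_link_width")}

if __name__ == "__main__":
    for key, value in link_info().items():
        print(f"{key:20s} {value}")
```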
 

pigr8

Member
Jul 13, 2017
I didn't see a sticker when I opened it up. My bios seems slightly different as it shows F2 and 06/26/2024 (damn American format).
it's probably different from mine then, i have an older bios date but with the F3 version.. here are all the bios versions to download.

in the chipset tab of the bios you can see all the independent lanes of the expansion slots, right?
 

oneplane

Well-Known Member
Jul 23, 2021
And how does that link apply to the current problem? Ignoring the two-year age of the posting, they are talking about a PCIe-slot-to-NVMe splitter adapter on a platform with an N5105, and they even mention that the number and kind of drives is important. So I assume that this issue is not about clocks, else the drive would work if it were the only one connected.
Roger wrote that he had never heard about it; this is a topic to learn about it from.

As for the current problem, this is a design thing, not a 'this specific board' thing. If there are not enough clock sources but there are devices that require clock sources, then you get race conditions after the source resets (like when you [re]boot a system). Just because some PCIe lanes are present and a connector physically fits doesn't mean all the other signals will happily work.

Granted, it might not be the case here, and maybe it's just a voltage dip causing a brownout, maybe the signal integrity is lacking and training the links doesn't work in every state (i.e. warm boots), or maybe it's something else. But Roger said he didn't know about there being a need for a non-shared clock, so there you go. It is often something that happens when you have a platform that wasn't designed for many NVMe drives and assumes one or more lanes are always going to be used for WLAN or WWAN.
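If anyone wants to check whether their dropouts look like failed training rather than a dead drive, a sketch that scans the kernel log (the patterns are heuristics I would grep for, not an exhaustive list; `dmesg` may need root):

```python
#!/usr/bin/env python3
# Sketch: scan the kernel log for symptoms of flaky PCIe links. The patterns
# are heuristics, not an exhaustive list; `dmesg` may require root.
import re, subprocess

PATTERNS = [
    r"AER:",                                # PCIe Advanced Error Reporting
    r"nvme.*(timeout|resetting|removing)",  # NVMe controller dropping out
    r"pcieport.*[Ll]ink",                   # link up/down/training messages
]
log = subprocess.run(["dmesg"], capture_output=True, text=True).stdout
for line in log.splitlines():
    if any(re.search(p, line) for p in PATTERNS):
        print(line)
```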
 

Drabon

New Member
May 25, 2024
Fair enough. Do you have any link for the number of clock sources for, e.g., the N100/N305, or do they depend on the motherboard too? Tbh I looked for both CPUs and SSDs regarding that topic and found basically nothing aside from this forum.
 

oneplane

Well-Known Member
Jul 23, 2021
Fair enough. Do you have any link for the number of clock sources for, e.g., the N100/N305, or do they depend on the motherboard too? Tbh I looked for both CPUs and SSDs regarding that topic and found basically nothing aside from this forum.
It is usually whatever the platform reference design has. With Intel, the issue is that they really want to segment the market, so for the mobile and ULV SKUs they tend to have a CPU/chipset or SoC design with some really annoying limits (like the RAM channels, PCIe root complex options, lanes, etc.).

There are some block diagrams with enough details, but usually a sign of trouble is having more slots than is 'normal' on a board. In some cases there will be a PCH doing some PCIe switching, or a dedicated PCIe switch, and in those scenarios the whole bus and endpoint configuration (including sideband and clock setup) is going to depend on those chips rather than the CPU.

You'll also sometimes (not on small SBCs) see chips like the 9FGL6251, an intelligent PCIe clock buffer/generator for NVMe, with configurations like the "Separate Reference Clock on PCIe NVMe SSD" thread on the Renesas Engineering Community forum, or there will be some exotic things like https://www.sanblaze.com/_files/ugd/9865d6_3b9bcfb6c6b34885a41d31a8380ce15b.pdf

Either way, an easy way to see what the platform "should" do is to check the Intel documentation.

Edit: like this one for Alder Lake: PCI Express* Port Support Feature Details - 001 - ID:759603 | Intel® Processor and Intel® Core™ i3 N-Series (for some reason Google has a German version but not an English one?). You can see that the total number of lanes doesn't mean the root complex can actually handle that many individual lanes. Note number 4 is especially important: they state that their PCIe design doesn't go beyond 1 storage device. But their wording shows that they didn't actively block it, they just didn't design or test for it. If there is some resource exhaustion (i.e. no more clocks configurable) you might get some sort of race condition.

PCIe note 9 on that page informs us a bit more about this:

The PCH PCIe* Root Ports can be configured to map to any of the SRCCLKREQ# PCIe* clock request signals and the CLKOUT_PCIE_P/N PCIe* differential clock signal pairs.

Say you have 2 storage devices and only one of them needs some extra stuff from the root port. If the drive that needs it gets init first (and gets it), both will work. But if the other one gets init first and grabs the resource (i.e. the clock) anyway, then the drive that actually needed it will never init, since the root complex already gave the clock away. This isn't exactly how it works, but it is the simplest way I can explain it at the moment.
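To make that concrete, a deliberately oversimplified toy model (this is not how the silicon actually arbitrates; it only shows how init order turns resource exhaustion into boot-to-boot flakiness):

```python
# Toy model only: one spare dedicated clock, two SSDs that both want it.
# Which drive fails to train depends purely on enumeration order, which is
# why this kind of exhaustion shows up as boot-to-boot flakiness.
class RootComplex:
    def __init__(self, spare_clocks=1):
        self.spare_clocks = spare_clocks

    def init_device(self, name, needs_dedicated_clock=True):
        if not needs_dedicated_clock:
            return f"{name}: up (shared clock)"
        if self.spare_clocks:
            self.spare_clocks -= 1
            return f"{name}: up (got the dedicated clock)"
        return f"{name}: never inits (clock already given away)"

for order in (["ssd1", "ssd2"], ["ssd2", "ssd1"]):
    rc = RootComplex(spare_clocks=1)
    print([rc.init_device(name) for name in order])
```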

If you have 9 lanes but only 4 root ports (or something like that) you can split them up as much as you like, but your root port isn't going to be very happy about 2 devices on the same port wanting the same clock to do different things.

On the other hand, the Chinese designs tend to expose all interfaces so you can make that choice yourself, with the downside that as a user you're now exposed to Intel x86 platform details you usually don't have to think about (boards from MiTAC, Asus, Gigabyte etc. simply don't give you ports that do such less-than-obvious things). Ever since mSATA slots were introduced, some boards have had notes printed on them warning that using the mSATA slot disables one of the classic SATA ports.

What is great is that with the non-standard configurations you can do interesting things like have 4 network controllers, 2 x1 M.2 slots, some SATA and some USB. It does however mean that suddenly not all ports that should behave the same will actually behave the same. But it's very functional and very cheap otherwise. And with NVMe SSDs getting most of the things they need on-board anyway (not entirely sure how that would work, perhaps PCIe clock recovery? IDK), this means you can get away with some pretty crazy setups.

Anyway, just because this is usually the sort of thing that causes weird stuff to happen on SBCs doesn't mean it is always the reason. But it is more likely than with a full price retail board you'd find for twice the price.
 

pigr8

Member
Jul 13, 2017
fyi when i rewired my a+e key into an m-key adapter i didn't even solder the clkreq pin, and i adapted that slot to an nvme disk but also to an i340-t4 pcie card, always with success and stability.

probably depends on the hardware connected? i don't know, but i do know that for a "simple" minimal connection the pins that need to be rewired to be operational are:

- the transmitter pair (positive and negative) and the receiver pair (positive and negative), multiplied for each lane used (in the a+e key only one lane is available)
- the reference clock pair (positive and negative), needed to sync
- the wake and reset pins
- and obviously gnd and 3v3

as said, i never wired the clkreq nor the susclk, nor the other specific pins that are used for wifi, usb or smbus.

on the converted adapter i used a samsung pcie3 nvme, a kingston pcie3 nvme and a lexar pcie4 nvme, and those all worked just fine, as did the same drives connected simultaneously on the other lanes.

my guess is that in a synchronous clock environment there are no issues between the end devices and the cpu and there is no such thing as "clockless"; it could be a different story in an asynchronous design, but i think not in the case of an nvme disk.
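For anyone wanting to see what their links actually negotiated without the clkreq wiring, a sketch that pulls the relevant flags out of `lspci -vv` (run as root for the full capability dump; output layout can vary slightly between pciutils versions):

```python
#!/usr/bin/env python3
# Sketch: list which devices advertise CLKREQ#-based clock power management
# (ClockPM) and which links negotiated a common reference clock (CommClk).
# Parses `lspci -vv`; run as root for the full capability dump.
import subprocess

out = subprocess.run(["lspci", "-vv"], capture_output=True, text=True).stdout
device = "?"
for line in out.splitlines():
    if line and not line[0].isspace():
        device = line.split()[0]        # new device header, e.g. "03:00.0"
    if "ClockPM" in line or "CommClk" in line:
        print(device, line.strip())
```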

Edit: like this one for Alder Lake: PCI Express* Port Support Feature Details - 001 - ID:759603 | Intel® Processor and Intel® Core™ i3 N-Series (for some reason Google has a German version but not an English one?). You can see that the total number of lanes doesn't mean the root complex can actually handle that many individual lanes. Note number 4 is especially important: they state that their PCIe design doesn't go beyond 1 storage device. But their wording shows that they didn't actively block it, they just didn't design or test for it. If there is some resource exhaustion (i.e. no more clocks configurable) you might get some sort of race condition.

PCIe note 9 on that page informs us a bit more about this:

The PCH PCIe* Root Ports can be configured to map to any of the SRCCLKREQ# PCIe* clock request signals and the CLKOUT_PCIE_P/N PCIe* differential clock signal pairs.
point 9 probably indicates the asynchronous configuration of the root ports.. in the case of the cwwk board:

- the m.2 m key used for the 4x disk hat uses lanes 1-2-3-4, so = rp1 (1 device)
- lanes 9 and 10 are for the two networks, so = rp9 (2 devices)
- lane 7 is the m.2 a+e key on rp7 (1 device)
- lanes 11 and 12 are configured for sata/usb on rp11 and rp12 (1 combo device)

point 4 in that document relates to a single x4 disk, and intel probably means that splitting is not validated (thus 9 lanes but only max 5 ports - no bifurcation).. since the bios update on my v3, rp1 can be split into 4 devices (which is exactly what intel is not validating), but even splitting rp1 into rp1/2/3/4 should not interfere with rp7 and its clkreq or refclk.
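A quick way to cross-check that rp mapping on a running system, assuming standard Linux sysfs paths (a sketch, nothing cwwk-specific): walk /sys/bus/pci/devices and group endpoints by the root port they sit under.

```python
#!/usr/bin/env python3
# Sketch: group PCIe endpoints by the root port they hang off, to cross-check
# a root-port mapping like the one above. Assumes standard Linux sysfs paths.
from collections import defaultdict
from pathlib import Path

by_port = defaultdict(set)
for dev in Path("/sys/bus/pci/devices").iterdir():
    parts = dev.resolve().parts
    # Resolved path: .../pci0000:00/<root port>/.../<endpoint>
    idx = next(i for i, p in enumerate(parts) if p.startswith("pci"))
    chain = parts[idx + 1:]
    if len(chain) >= 2:                 # skip devices directly on the root bus
        by_port[chain[0]].add(chain[-1])

for port, endpoints in sorted(by_port.items()):
    print(port, "->", ", ".join(sorted(endpoints)))
```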
 

oneplane

Well-Known Member
Jul 23, 2021
point 9 probably indicates the asynchronous configuration of the root ports.. in the case of the cwwk board:

- the m.2 m key used for the 4x disk hat uses lanes 1-2-3-4, so = rp1 (1 device)
- lanes 9 and 10 are for the two networks, so = rp9 (2 devices)
- lane 7 is the m.2 a+e key on rp7 (1 device)
- lanes 11 and 12 are configured for sata/usb on rp11 and rp12 (1 combo device)

point 4 in that document relates to a single x4 disk, and intel probably means that splitting is not validated (thus 9 lanes but only max 5 ports - no bifurcation).. since the bios update on my v3, rp1 can be split into 4 devices (which is exactly what intel is not validating), but even splitting rp1 into rp1/2/3/4 should not interfere with rp7 and its clkreq or refclk.
Yep, the configuration should technically work, especially if rp7 is not supposed to be hosting more than 1 device anyway. The main issue was (as far as I read) that some SSDs had PCIe IP in the controller that wouldn't train unless the optional pins were connected and configured by the firmware on the host. I wouldn't be surprised if the older boards used a different layout (with even more unvalidated configurations) where clkreq is shared between two slots and the SSDs both try to use it. Technically, that might even be fixed by just disconnecting that pin from one of the slots, as I don't expect modern PCIe blocks to be so dependent on it.

In the older thread there were some lists of SSD combinations that didn't work, so perhaps the controllers or SSD firmware might be informative as to which ones misbehave when only the minimal configuration is available.
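A sketch for collecting exactly that (assumes nvme-cli is installed and the script runs as root; the key names follow nvme-cli's JSON output):

```python
#!/usr/bin/env python3
# Sketch: dump model + firmware revision for every NVMe controller so that
# failing combinations can be compared. Needs nvme-cli; run as root.
import json, subprocess

out = subprocess.run(["nvme", "list", "-o", "json"],
                     capture_output=True, text=True, check=True).stdout
for dev in json.loads(out).get("Devices", []):
    print(dev.get("DevicePath"), dev.get("ModelNumber"), dev.get("Firmware"))
```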

I do know that this tends to be less problematic with I/O devices like HBAs and NICs, not sure if that is because their PCIe device implementations are better or if this is specifically something in older NVMe reference designs. But like the post I referenced, my most up-to-date information is getting about 1 to 2 years old by now.

Technically, it shouldn't even matter since the roots are preconfigured before the devices are reset IIRC. But in previous cases, it did.

Edit: come to think of it, has anyone found a block diagram, port map or even schematic for any of these boards? Reverse engineering or discovering what is wired up where is fun, but there are so many boards it becomes rather intense.
 

Drabon

New Member
May 25, 2024
Edit: come to think of it, has anyone found a block diagram, port map or even schematic for any of these boards? Reverse engineering or discovering what is wired up where is fun, but there are so many boards it becomes rather intense.
[Image: block_diagram_x86P5.png]
that is from the x86 P5 manual (google translated)
 

oneplane

Well-Known Member
Jul 23, 2021
[Image: block_diagram_x86P5.png]
that is from the x86 P5 manual (google translated)
That's the one where they also have the "Incoming call self starting jump cap" (auto power on jumper), isn't it :D But it looks like their designation as a WiFi-only slot means they might have done something on the board where the routing or placement isn't great for large/fast transfers. WiFi is of course much slower than an SSD. But the fact that it doesn't train at all is pretty odd (training is something it would have to do regardless of the device). You don't happen to have an older SSD lying around with on-board status LEDs?

The fact that Intel won't validate it, and that the product information from Topton suggests it's a "WiFi slot" (as if the PCI-SIG would come up with such a term), means that either they did that on purpose, or they messed up the design or firmware and didn't bother fixing it.

I wish they had something with a bit more detail, but the NDA with Intel usually prevents that.
 

Drabon

New Member
May 25, 2024
That's the one where they also have the "Incoming call self starting jump cap" (auto power on jumper), isn't it :D But it looks like their designation as
Full English manual x86 P5

But the fact that it doesn't train at all is pretty odd (training is something it would have to do regardless of the device). You don't happen to have an older SSD lying around with on-board status LEDs?
I tried the NVMe from CWWK, which has an LED. It does light up, and sometimes shows up in the BIOS, but then crashes after some time. I will test the slot again once the redesigned case is lasered. The 26062024 BIOS might be worth a shot too; I am using 1505 atm.
 

TrevorH

New Member
Oct 25, 2024
N100s only have 9 PCIe lanes, so those need to be divided up between the 4 or 6 2.5GbE ports, which use a lane each. On my 6-port Topton that only leaves a single lane for each of the 2 M.2 slots, and they perform accordingly. Given the waste of putting a PCIe 3.0 x4 SSD in an x1 slot, I just used an old SATA SSD I had lying around and get roughly 60% of the performance of an M.2 drive in a 1-lane slot. I have better uses for the speed of the other 3 lanes of my M.2 :-D
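That roughly-60% figure lines up with the theoretical line rates; a quick back-of-the-envelope check (theoretical ceilings only, real drives land a bit lower):

```python
# Back-of-the-envelope check of the ~60% figure (theoretical line rates):
pcie3_x1 = 8e9 * (128 / 130) / 8   # 8 GT/s, 128b/130b encoding -> ~985 MB/s
sata3    = 6e9 * (8 / 10) / 8      # 6 Gb/s, 8b/10b encoding    -> ~600 MB/s
print(f"PCIe 3.0 x1 ~{pcie3_x1 / 1e6:.0f} MB/s, "
      f"SATA III ~{sata3 / 1e6:.0f} MB/s ({sata3 / pcie3_x1:.0%})")
```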
 

rogervn

New Member
Sep 22, 2024
11
1
3
BTW, I've had to open it up and I've found the sticker and my version seems to be V3:


The problem is that I had to open it because suddenly the disks were all gone from the OS, except the disk in slot 1 on the daughter board. I checked in the BIOS and all disks were there, but 2 of them were failing the self-check.

I opened it up and reseated everything and it all came back, but one of the disks was plagued by I/O errors and was removed from the ZFS pool. Then I restarted and it was back.

It does seem that there's some flakiness in either the contacts or the boot process, which could have something to do with what oneplane was saying about clocks.

I'm using a KIOXIA 128GB NVMe for boot and 3x WD Black SN770 for data. It works most of the time, but I'm feeling wary of these errors on reboots.
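One cheap way to keep an eye on that flakiness, alongside `zpool status -x`, is to record which NVMe namespaces enumerated at each boot. A sketch (run it from, e.g., a cron @reboot entry; the log path is arbitrary):

```python
#!/usr/bin/env python3
# Sketch: log which NVMe namespaces are visible at boot (e.g. via a cron
# @reboot entry) to catch flaky enumeration. The log path is arbitrary.
import datetime, glob

seen = sorted(glob.glob("/dev/nvme*n1"))
with open("/var/log/nvme-presence.log", "a") as log:
    log.write(f"{datetime.datetime.now().isoformat()} "
              f"{' '.join(seen) or 'NONE'}\n")
```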