Topton Jasper Lake Quad i225V Mini PC Report


Stovar

Active Member
Dec 27, 2022
I have an N5105 with OpenWrt and I get 1Gb/s even over VPN via OpenVPN. So cool.
Hi, was this using a VPN provider like NordVPN etc.?

I am curious if anyone has been able to reach WireGuard or OpenVPN speeds of 1Gb/s with VPN providers like Nord or ExpressVPN, although it's probably limited by the VPN servers and by WireGuard vs. OpenVPN.
 

Kurátor

New Member
Dec 21, 2022
Hi, was this using a VPN provider like NordVPN etc.?

I am curious if anyone has been able to reach WireGuard or OpenVPN speeds of 1Gb/s with VPN providers like Nord or ExpressVPN, although it's probably limited by the VPN servers and by WireGuard vs. OpenVPN.
My ISP is only 1Gb/s so I can't try more. But yes, I use NordVPN over OpenVPN and get almost 1Gb/s. My ordinary router could not even manage 100Mbit via OpenVPN.
 

tusk9541

Member
Nov 23, 2022
Hey everyone, thanks for all the info here, especially those who've posted pics of the cooling parts, very helpful.

I finally got to order one of the N5105 boxes w/ 4x i226 NICs, after some shenanigans with "Kingnovy Computer Store", which gave me a "wrong" DHL tracking number that delivered to a city hundreds of miles from mine. I've since read that some AliExpress sellers give out fake tracking numbers to meet shipping deadlines. They took weeks to even start the shipping process, and when I messaged them after their delivery deadline, they claimed DHL had returned my package, which I doubt since it shows as delivered on DHL's own site. They said they'd ship again, and started the shipping process with another carrier even though I'd paid extra for DHL. I filed a dispute and got a refund without any resistance from them, luckily. They knew what they did.

So I got this one from "CWWK PC Store", which I guess is one of the CWWK stores, though it's relatively new (opened Oct 2022). They seem to be on the up and up: they promptly started the shipping process the next day, and I'm hoping to receive it soon. That tracks with other posts I've seen here where CWWK delivered earlier than expected.

As you'll notice, I got the "affordable edition", which is quite a bit cheaper but has a less capable heatsink case. Thanks to the pics posted here and on that page, I'm confident I can mod the case to fit a heatpipe heatsink and still run it fanless. I've got some ideas, but I can't be sure until I get the thing. I'm going to use it as a router/firewall with OPNsense, running a WireGuard server and an SSH tunnel and probably nothing else that would need much CPU power, to manage my home network.
 

Batmanzi

New Member
Dec 24, 2022
Did anyone figure out how to control the 3-pin fan speed on the J6413 i226-V box? I've updated to the latest firmware and my findings are:

- From the BIOS: I don't see any controls for the fan speed
- From Windows: this has been working great; any modern fan control software can read the fan speed and control it too
- From Linux (Proxmox and Ubuntu): running "sensors-detect" shows the error below:

Note: there is no driver for ITE IT8613E Super IO Sensors yet.
Check device_support_status [HWMon Wiki] for updates.


Since I'm planning on running this box with Proxmox, I was hoping someone could point me to a working solution. I'd like to avoid external USB-connected fans if possible.
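For what it's worth, mainline Linux has no IT8613E support yet (as the sensors-detect message says), but chips the it87 driver does handle expose the standard hwmon pwm sysfs interface. A rough sketch of what that looks like, assuming a driver build that claims the chip (e.g. one of the out-of-tree it87 forks); the hwmon index and pwm channel numbers here are placeholders and will differ per board:

```shell
# Assumption: an it87 driver build that supports the IT8613E is installed.
# ignore_resource_conflict is often needed when ACPI claims the Super I/O range.
modprobe it87 ignore_resource_conflict=1

# Locate the hwmon node the driver registered (name/index vary per system).
grep -H . /sys/class/hwmon/hwmon*/name

# Put pwm1 into manual mode and set roughly 50% duty (scale is 0-255).
echo 1   > /sys/class/hwmon/hwmon2/pwm1_enable
echo 128 > /sys/class/hwmon/hwmon2/pwm1
```

Values written to pwm1 don't persist across reboots; once the chip shows up in hwmon, people usually let fancontrol (from lm-sensors) manage the curve.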

Thanks,
 

Stovar

Active Member
Dec 27, 2022
My ISP is only 1Gb/s so I can't try more. But yes, I use NordVPN over OpenVPN and get almost 1Gb/s. My ordinary router could not even manage 100Mbit via OpenVPN.
Thanks, that is good news and very impressive nordvpn servers get such good speeds too.
 

Misery

New Member
Jul 28, 2022
Do you have C-states enabled in BIOS and "Use PowerD" enabled in OPNsense? If so, try disabling both of those. If I have C-States and PowerD both set to enabled, I can replicate failures in pfsense and opnsense (on N5105 and N6005). If both of those are off, the emulation failures go away. If you look in dmesg or the system logs, you should see some indicator of the problem, although the problem itself isn't clear.
I have crashes too. Could you paste your VM XML? Do you use the q35 chipset and PCI passthrough?
Did it ever crash again after you disabled the C-states? I've already disabled PowerD but it still crashes.
 

DomFel

Member
Sep 5, 2022
Has anybody had issues with RAM in dual-channel mode?

If I use 16GB in one slot it works fine; if I use both slots, the system hangs a few minutes after leaving the BIOS. If I stay in the BIOS, the system works fine and reports 3000MHz (the RAM is rated for 3200).

This is on a V5 6x i226 with N6005.
 

EasyRhino

Well-Known Member
Aug 6, 2019
Ah, I found the entire c-state disable. For some reason I glossed over that BIOS setting and was messing with individual c-states.

With c-states maxed out, Proxmox was idling as low as 8.5W. When limited to just C1, it idled around 10W. With c-states completely disabled, it idles around 12W. (P-states are still enabled.)

It will be interesting to see how it does for a few days.

Years ago I briefly experimented with nesting OPNsense in Proxmox on different router hardware. However, I decided it was too complicated and just went bare metal. I may do it again, but sometimes I like flying too close to the sun.
Well, operating with c-states disabled has been more reliable: I ran for 10+ days without hanging. But just yesterday the router had an unexplained hang. I may go to OPNsense bare metal now.
 

skimikes

Member
Jun 27, 2022
I have crashes, too. Could you paste your VM XML? Do you use q35 chipset and PCI passthrough?
Did it crash again someday after you disabled the C-States? I already disabled PowerD but it still crashes.
After setting C-states to Disabled and disabling PowerD, I no longer had crashes. Turn them back on, force the system to ramp up and down a few times, and the crashes resumed. Turn them back off, and the crashes went away. pfSense or OPNsense on bare metal with C-states and PowerD both enabled: no issues at all.
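If anyone wants to verify what the BIOS toggle actually changed on the Proxmox host, per-state idle residency is visible from userspace. A quick diagnostic sketch; the package name is an assumption (Debian ships turbostat in linux-cpupower):

```shell
# What idle states does the kernel see? With C-states disabled in the BIOS,
# the deeper states (C6/C8/C10) should disappear or show ~0% residency.
grep . /sys/devices/system/cpu/cpu0/cpuidle/state*/name

# Sample 5 seconds of C-state residency with turbostat.
apt install linux-cpupower
turbostat --quiet sleep 5
```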

Using q35 with pcie passthrough. XML:

Code:
balloon: 0
boot: order=virtio0
cores: 4
cpu: host
hostpci0: 0000:06:00,pcie=1
hostpci1: 0000:05:00,pcie=1
hotplug: 0
machine: q35
memory: 12288
meta: creation-qemu=7.1.0,ctime=1670869059
name: pfsense
numa: 1
onboot: 1
ostype: other
scsihw: virtio-scsi-single
smbios1: uuid=ed2fe4a7-c2bd-4561-a050-6eeac6eee248
sockets: 1
startup: up=15
tablet: 0
vga: qxl
virtio0: local-zfs:vm-100-disk-0,iothread=1,size=32G
vmgenid: dec19246-e310-4661-93a0-141808ffeca9

Are you using BIOS or UEFI? There used to be issues with UEFI. I don't know if that is still the case.

The crashes with C-states/PowerD enabled looked like this in syslog:

Code:
May 17 20:20:01 proxmox QEMU[294219]: KVM internal error. Suberror: 1
May 17 20:20:01 proxmox QEMU[294219]: emulation failure
May 17 20:20:01 proxmox QEMU[294219]: EAX=000f5128 EBX=000f5128 ECX=00000fc8 EDX=00000049
May 17 20:20:01 proxmox QEMU[294219]: ESI=00000000 EDI=000f3d8e EBP=00000fc8 ESP=00000fa0
May 17 20:20:01 proxmox QEMU[294219]: EIP=8e20070b EFL=00010006 [-----P-] CPL=0 II=0 A20=1 SMM=0 HLT=0
May 17 20:20:01 proxmox QEMU[294219]: ES =0010 00000000 ffffffff 00c09300 DPL=0 DS   [-WA]
May 17 20:20:01 proxmox QEMU[294219]: CS =0008 00000000 ffffffff 00c09b00 DPL=0 CS32 [-RA]
May 17 20:20:01 proxmox QEMU[294219]: SS =0010 00000000 ffffffff 00c09300 DPL=0 DS   [-WA]
May 17 20:20:01 proxmox QEMU[294219]: DS =0010 00000000 ffffffff 00c09300 DPL=0 DS   [-WA]
May 17 20:20:01 proxmox QEMU[294219]: FS =0010 00000000 ffffffff 00c09300 DPL=0 DS   [-WA]
May 17 20:20:01 proxmox QEMU[294219]: GS =0010 00000000 ffffffff 00c09300 DPL=0 DS   [-WA]
May 17 20:20:01 proxmox QEMU[294219]: LDT=0000 00000000 0000ffff 00008200 DPL=0 LDT
May 17 20:20:01 proxmox QEMU[294219]: TR =0000 00000000 0000ffff 00008b00 DPL=0 TSS32-busy
May 17 20:20:01 proxmox QEMU[294219]: GDT=     000f6180 00000037
May 17 20:20:01 proxmox QEMU[294219]: IDT=     000f61be 00000000
May 17 20:20:01 proxmox QEMU[294219]: CR0=00000011 CR2=00000000 CR3=00000000 CR4=00000000
May 17 20:20:01 proxmox QEMU[294219]: DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000
May 17 20:20:01 proxmox QEMU[294219]: DR6=00000000ffff0ff0 DR7=0000000000000400
May 17 20:20:01 proxmox QEMU[294219]: EFER=0000000000000000
May 17 20:20:01 proxmox QEMU[294219]: Code=?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? <??> ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ??
 

BobS

Member
Dec 2, 2022
Has anybody had issues with RAM in dual-channel mode?

If I use 16GB in one slot it works fine; if I use both slots, the system hangs a few minutes after leaving the BIOS. If I stay in the BIOS, the system works fine and reports 3000MHz (the RAM is rated for 3200).

This is on a V5 6x i226 with N6005.
I have the same specs as you state. Mine is a HUSN model RJ03L and I have 2x 8GB sticks of Crucial SODIMMs (CT2K8G4SFRA32A) that have been stress tested with MemTest86 as well as Prime95. I have installed and tested this box using:

1. Win11 Pro
2. Win10 Pro
3. pfsense - bare metal
4. OPNSense - bare metal
5. Proxmox with two versions of pfsense in VM's

I have not had a single problem, except for some higher-than-expected temps that I fixed and posted about:

What brand/model of memory are you using? The maker of the motherboard (CW) warns you to use name-brand memory but fails to qualify any particular vendors. Are both sticks from the same vendor, and are they the same model? The problems you're seeing have been commented on by others in this thread, but I can't give you a direct reference.

I tested the above configurations using default BIOS settings, except when I switched testing from an NVMe SSD to a 2.5" SATA SSD and changed to AHCI. There are a ton of BIOS settings, but it should run with defaults if it's the same as mine. If not, then look at trying some other memory.

BobS
 

skimikes

Member
Jun 27, 2022
Has anybody had issues with RAM in dual-channel mode?

If I use 16GB in one slot it works fine; if I use both slots, the system hangs a few minutes after leaving the BIOS. If I stay in the BIOS, the system works fine and reports 3000MHz (the RAM is rated for 3200).

This is on a V5 6x i226 with N6005.
Do you have 1x 16GB module and 2x 8GB modules that you are testing with, or are you testing with only 16GB modules? The N6005 is only rated for 16GB max (see the Intel Ark page for N6005 assuming you trust Intel over the Chinese ODMs/resellers). Anything beyond that is not guaranteed to work and will depend on how you are using the system. The general compute portion of the processor can run with >16GB but if the iGPU attempts to touch memory outside of 16GB, the system may crash. If using >16GB, try configuring the system with only 16GB installed, then install the 2nd module, make sure it boots, then shut it down, disconnect the display, then boot your system and only ever access it over the network. That was the only way I was able to get N5105/N6005 systems to run "stable" with >16GB memory configurations.

I know they sell them with 32GB configurations. I'm not sure how stable they are. I've tried Samsung, Mushkin, Crucial, TeamGroup, and Gskill DIMMs and any time there was >16GB of RAM, doing anything video related caused undefined behavior such as screen corruption and system reset. The easiest way to test this was to just install Win10 and grab the iGPU drivers off Intel's website and attempt to install them. As soon as the drivers initialize, the system typically resets itself. I'd love to hear from someone that actually ordered a system with 32GB from aliexpress.
 

DomFel

Member
Sep 5, 2022
What brand/model of memory are you using?
It's Crucial, same as the NVMe.

Do you have 1x 16GB module and 2x 8GB modules that you are testing with, or are you testing with only 16GB modules? The N6005 is only rated for 16GB max
You are spot on! I was still installing Proxmox when the screen got corrupted and the system rebooted. In fact the memory is not running at 3200 but at 2933MHz, exactly as per Intel's specs.
Well, I guess that's my answer then.
By the way, even without a display it doesn't work.

I will try DDR4 rated at 2666MHz and will give an update. Who knows!
 

Misery

New Member
Jul 28, 2022
Are you using BIOS or UEFI? There used to be issues with UEFI. I don't know if that is still the case.
Thanks! Seems the VM config isn't the problem. :-(
I used BIOS. I found this now: VM freezes irregularly
I updated the microcode again and updated to Linux 6.1.1. Hopefully that will help. Otherwise I'll need to disable C-states and try that.
 


s0x

New Member
Jul 8, 2022
Hello, I've posted the info below on the Proxmox forum (check the spoiler); thought it might be of use here as well.

On my end, still on kernel 5.19.17-1-pve, with 32 days of uptime: two VMs, OPNsense ( 3 (1 socket, 3 cores) [host,flags=-pcid;-spec-ctrl;-ssbd;+aes] [cpuunits=2048] ) with VirtIO NICs and Home Assistant, plus two LXC containers (Pi-hole and the TP-Link Omada Controller, based on Ubuntu 22.04).

root@pve:~# last reboot | head -n 1
reboot system boot 5.19.17-1-pve Sat Nov 26 20:14 still running
root@pve:~# uptime
11:48:25 up 32 days, 15:33, 1 user, load average: 0.39, 0.32, 0.29
root@pve:~# uname -a
Linux pve 5.19.17-1-pve #1 SMP PREEMPT_DYNAMIC PVE 5.19.17-1 (Mon, 14 Nov 2022 20:25:12) x86_64 GNU/Linux

The host is a Topton N5105 (CW-6000) with i225 B3 NICs, BIOS dated 29/09/2022, 2x 8GB RAM, and a WD SN530 NVMe SSD. An extra Noctua 40mm 12V fan (NF-A4x10 PWM) as exhaust is inaudible (as intake, the noise would be noticeable).

But I've applied several options to the kernel cmdline, see below.

Kernel cmdline options:

intel_idle.max_cstate=1 (disable C-states below 1 (such as C3))
intel_iommu=on iommu=pt (enable the IOMMU; at the beginning I was going to pass the NICs through to the OPNsense VM, but ended up using VirtIO NICs while testing for the crashes, and kept them)
mitigations=off (Self explanatory)
i915.enable_guc=2 ( Enable low-power H264 encoding, Firmware , Hardware Acceleration | Jellyfin )
initcall_blacklist=sysfb_init ( GPU passthrough , Proxmox GPU Passthrough | Wiki )
nvme_core.default_ps_max_latency_us=14900 ( WD NVME SSD Freezing on Linux )
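For anyone wanting to replicate this: on a stock Proxmox install booting via GRUB, options like these go on the GRUB_CMDLINE_LINUX_DEFAULT line. A sketch, assuming GRUB rather than systemd-boot (ZFS-on-root installs boot via systemd-boot, where the options go in /etc/kernel/cmdline and you run proxmox-boot-tool refresh instead):

```shell
# Edit /etc/default/grub so the default cmdline carries the options, e.g.:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_idle.max_cstate=1 intel_iommu=on iommu=pt mitigations=off i915.enable_guc=2 initcall_blacklist=sysfb_init nvme_core.default_ps_max_latency_us=14900"

# Then regenerate the boot config and reboot:
update-grub
reboot

# After reboot, confirm the options took effect:
cat /proc/cmdline
```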

Also, due to i2c-6 NAK errors ( [Sat Nov 26 20:14:37 2022] i2c i2c-6: sendbytes: NAK bailout. ) related to the iGPU, I've connected a dummy HDMI dongle after confirming that with a monitor plugged in the errors stopped, and so did the system crashes; by then, though, I had already applied the other kernel parameters.

Didn't test if those were related to the enabling of i915 GuC/HuC or not.

And due to errors related to the NVMe SSD (WD SN530 M.2 2242), I applied the nvme_core.default_ps_max_latency_us parameter as well.

[Tue Nov 29 11:46:52 2022] nvme 0000:01:00.0: PCIe Bus Error: severity=Corrected, type=Physical Layer, (Receiver ID)
[Tue Nov 29 11:46:52 2022] nvme 0000:01:00.0: device [15b7:5008] error status/mask=00000001/0000e000
[Tue Nov 29 11:46:52 2022] nvme 0000:01:00.0: [ 0] RxErr

- edit -
Also updated the intel microcode:

root@pve:~# dmesg -T | grep microcode
[Sat Nov 26 20:14:32 2022] microcode: microcode updated early to revision 0x24000023, date = 2022-02-19
[Sat Nov 26 20:14:32 2022] SRBDS: Vulnerable: No microcode
[Sat Nov 26 20:14:33 2022] microcode: sig=0x906c0, pf=0x1, revision=0x24000023
[Sat Nov 26 20:14:33 2022] microcode: Microcode Update Driver: v2.2.

- edit -
@ VM freezes irregularly
 

rfox

Member
Jun 10, 2022
Germany
I suspect a lot more people would be considering such a thing had the memory not been soldered. It's really a neatly designed little unit. I just wish they'd left space for a SODIMM or two.
I agree; there are many trade-offs for this device. This is my first foray into SFP+ devices and I've been doing a lot of research, yet it's extremely difficult to find out which SFP/SFP+ modules will or won't work with this device. I guess it's trial and error . . . Documentation is little to none, and I'm also concerned about the eMMC (long-term reliability as a boot device). I guess, no pain, no gain . . . :p
 

EasyRhino

Well-Known Member
Aug 6, 2019
Hello, I've posted the info below on the Proxmox forum (check spoiler), thought it might be of usefulness here as well.
[snip]
Yep, thanks for pointing me to that thread. I've updated Proxmox to the latest 6.1 kernel and also installed the intel-microcode update.
I already have c-states disabled.
If it's still flaky, I can just give up on Proxmox and go OPNsense bare metal.
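In case it saves anyone a search, the intel-microcode package lives in Debian's non-free component, which isn't enabled on a stock Proxmox install. A sketch for Proxmox 7 / Debian bullseye (on Debian 12 the component is called non-free-firmware):

```shell
# Add the non-free component to your Debian mirror line in
# /etc/apt/sources.list (edit by hand; this assumes the bullseye layout):
#   deb http://ftp.debian.org/debian bullseye main contrib non-free

apt update
apt install intel-microcode
reboot

# The updated revision is applied early during boot; confirm with:
dmesg | grep microcode
```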