Topton Jasper Lake Quad i225V Mini PC Report


dazagrt

Active Member
Mar 1, 2021
195
97
28
During their trundling around the webz I don't suppose anyone has seen an actual 4 port Intel 2.5Gb PCIe card, have they? I've seen Realtek 4 x 2.5Gb ones but not Intel.
 

prdtabim

Active Member
Jan 29, 2022
170
66
28
1G Lite is a mode implemented by Realtek so that Gigabit NICs can connect at 500 Mb/s using just two pairs in Cat 5 or newer cable.
Code:
7.12. Giga Lite (500M)
The RTL8111H/RTL8111HS supports Giga Lite (500M) mode that allows two link partners that both
support 1000Base-T and Giga Lite mode to transmit at 500Mbps data rate if only two pairs (AB pairs)
can be detected in the CAT.5 UTP cable. This feature is a Realtek proprietary feature and it conforms to
the 802.3az-2010(EEE) specification.
From the RTL8111 datasheet.

I wonder if any switch (even unmanaged) supports this ...
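Since Giga Lite is proprietary, about the only practical way to spot it is to check the negotiated speed after link-up. A minimal sketch (the interface name `eth0` and the sample output are assumptions) that pulls the speed out of `ethtool`-style output:

```shell
# parse_speed: extract the negotiated "Speed:" field from ethtool output
parse_speed() { awk -F': *' '/Speed:/ {print $2}'; }

# On real hardware you would pipe `ethtool eth0` into parse_speed;
# a Giga Lite link would report 500Mb/s instead of 1000Mb/s.
printf 'Settings for eth0:\n\tSpeed: 500Mb/s\n\tDuplex: Full\n' | parse_speed
```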
 
  • Like
Reactions: Ipse and dazagrt

dazagrt

Active Member
Mar 1, 2021
195
97
28
1G Lite is a mode implemented by Realtek so that Gigabit NICs can connect at 500 Mb/s using just two pairs in Cat 5 or newer cable.
Code:
7.12. Giga Lite (500M)
The RTL8111H/RTL8111HS supports Giga Lite (500M) mode that allows two link partners that both
support 1000Base-T and Giga Lite mode to transmit at 500Mbps data rate if only two pairs (AB pairs)
can be detected in the CAT.5 UTP cable. This feature is a Realtek proprietary feature and it conforms to
the 802.3az-2010(EEE) specification.
From the RTL8111 datasheet.

I wonder if any switch (even unmanaged) supports this ...
So how does this fit in with our “io-crest” NIC, which is meant to be using Intel i225 version 3 chips?
 

GreenAvacado

Active Member
Sep 25, 2022
121
48
28
I wanted to install an M.2 -> 2xSATA converter in my V3 box. It actually worked just fine but blocked the NVMe port, so I tried to use a similar ribbon cable from AliExpress:
And while the BIOS was still able to detect SATA disks connected to the SATA adapter, the box would no longer boot from the attached SATA disk :( So my experience with the ribbon M.2 extender was negative.
I also discovered that it would be difficult to cram an extra cable/adapter into the already limited space inside the box (especially if you're adding an extra fan). I think adding riser studs to a 3D-printed custom bottom lid would be the better option.
Wait, I want to make sure I understood you right.

With two M.2 NVMes, the second one installed via the ribbon cable extension, your third SATA SSD shows up in BIOS but you can't boot off of it?

Could you confirm all three drives were visible in BIOS?

From what I recall, on certain machines the second NVMe and SATA ports are often multiplexed, meaning they share the same high-speed lane. So this could be a BIOS-related issue, not the ribbon cable.
 

dums

Member
Aug 14, 2022
48
27
18
Wait, I want to make sure I understood you right.

With two M.2 NVMes, the second one installed via the ribbon cable extension, your third SATA SSD shows up in BIOS but you can't boot off of it?

Could you confirm all three drives were visible in BIOS?

From what I recall, on certain machines the second NVMe and SATA ports are often multiplexed, meaning they share the same high-speed lane. So this could be a BIOS-related issue, not the ribbon cable.
In my case I'm positive it was the ribbon cable:
Case 1:
I removed the NVMe drive and installed this card in the slot "under" the NVMe:
Connected a SATA SSD to one of the SATA ports on the adapter card.
Was able to install and boot Ubuntu from the SSD (NVMe still removed, since the SATA adapter is now blocking the second M.2 port).

Case 2:
NVMe drive still removed, but the SATA adapter card is connected via the ribbon cable. BIOS can see the SSD connected to the adapter, but Ubuntu crashes during boot.

Case 2 is identical to Case 1 with the exception of the ribbon cable, and Case 2 crashed during boot.
Going back to Case 1 (no reinstall), Ubuntu boots just fine.
I've had/tested just one cable, so it might just be bad luck with poor QC from our friends in China.
 
  • Like
Reactions: GreenAvacado

GreenAvacado

Active Member
Sep 25, 2022
121
48
28
In my case I'm positive it was the ribbon cable:
Case 1:
I removed the NVMe drive and installed this card in the slot "under" the NVMe:
Connected a SATA SSD to one of the SATA ports on the adapter card.
Was able to install and boot Ubuntu from the SSD (NVMe still removed, since the SATA adapter is now blocking the second M.2 port).

Case 2:
NVMe drive still removed, but the SATA adapter card is connected via the ribbon cable. BIOS can see the SSD connected to the adapter, but Ubuntu crashes during boot.

Case 2 is identical to Case 1 with the exception of the ribbon cable, and Case 2 crashed during boot.
Going back to Case 1 (no reinstall), Ubuntu boots just fine.
I've had/tested just one cable, so it might just be bad luck with poor QC from our friends in China.
Got it, that makes a lot more sense now.

Case 2 should definitely work, all else being equal. Sounds like it could be a signal-integrity issue running an 8 Gb/s signal over that ribbon cable. Oh well, thanks for chiming in.

Guess I'll have to get rid of the 40mm fan if I decide to go with the second NVMe, then.

Anybody here done experiments to quantify the temperature delta with and without the fan?
 

sko

Active Member
Jun 11, 2021
227
121
43
Anybody here done experiments to quantify the temperature delta with and without the fan?
I have installed Sunon MF40101V21000UA99 fans in 2 of the units that will run at places without AC.

I had to switch to FreeBSD for now, due to botched ACPI on the BIOS side that endlessly fires ~9000 interrupts per second on OpenBSD (GPE L6F - a problem relatively common with other cheap-ass mainboards/vendors like ASRock...). FreeBSD seems to have mitigated this already; otherwise it would have been very simple to just override the AML.
Another side effect of running FreeBSD (apart from not having 70-80% interrupt load on one core): thanks to powerd the CPU is running at a much lower average clock (I already reduced the graphics unit to 200 MHz in the BIOS; would have completely disabled it if possible...) and thus at lower temperatures compared to OpenBSD.

Unit without fan:
Code:
root@thu-gw1:~ # sysctl -a | grep temperat
hw.acpi.thermal.tz0.temperature: 27.9C
dev.cpu.3.temperature: 49.0C
dev.cpu.2.temperature: 49.0C
dev.cpu.1.temperature: 48.0C
dev.cpu.0.temperature: 48.0C
root@thu-gw1:~ # smartctl -A /dev/nvme0 | grep Celsius
Temperature: 67 Celsius
with fan:
Code:
root@a-vpn1:~ # sysctl -a | grep temperat
hw.acpi.thermal.tz0.temperature: 27.9C
dev.cpu.3.temperature: 43.0C
dev.cpu.2.temperature: 43.0C
dev.cpu.1.temperature: 41.0C
dev.cpu.0.temperature: 41.0C
root@a-vpn1:~ # smartctl -A /dev/nvme0 | grep Celsius
Temperature: 50 Celsius
Both are running at ~23°C room temperature.

So it's a ~7°C difference for the CPU and ~15-20°C for the NVMe.
I actually only cared about the NVMes - the CPU would be perfectly fine at 70°C or 80°C as there is still plenty of headroom to the actual TJmax of the SoC (105°C), so there is absolutely no need to add a fan just for the CPU.
If you cram some Samsung NVMe in there, which seem to run A LOT hotter than other vendors', you might *have* to use a fan, especially because Samsung starts thermal throttling relatively early and aggressively...
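For anyone wanting to reproduce the powerd setup above: it's just the stock FreeBSD daemon, nothing exotic. A minimal `/etc/rc.conf` fragment (the flag values are a suggestion, not taken from my actual config):

```shell
# /etc/rc.conf - let powerd scale the CPU clock with load (FreeBSD)
powerd_enable="YES"
# "adaptive" on both AC and battery lowers the average clock (and temps) at idle
powerd_flags="-a adaptive -b adaptive"
```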
 
  • Like
Reactions: GreenAvacado

dums

Member
Aug 14, 2022
48
27
18
If you cram some Samsung NVMe in there, which seem to run A LOT hotter than other vendors', you might *have* to use a fan, especially because Samsung starts thermal throttling relatively early and aggressively...
What kind of Samsung drive are you using?
 

sko

Active Member
Jun 11, 2021
227
121
43
What kind of Samsung drive are you using?
In those networking appliances? None. *Because* they run _much_ hotter and have much higher power consumption. I'm using WD SN530s in those Topton units.
 

GreenAvacado

Active Member
Sep 25, 2022
121
48
28
I have installed Sunon MF40101V21000UA99 fans in 2 of the units that will run at places without AC.

I had to switch to FreeBSD for now, due to botched ACPI on the BIOS side that endlessly fires ~9000 interrupts per second on OpenBSD (GPE L6F - a problem relatively common with other cheap-ass mainboards/vendors like ASRock...). FreeBSD seems to have mitigated this already; otherwise it would have been very simple to just override the AML.
Another side effect of running FreeBSD (apart from not having 70-80% interrupt load on one core): thanks to powerd the CPU is running at a much lower average clock (I already reduced the graphics unit to 200 MHz in the BIOS; would have completely disabled it if possible...) and thus at lower temperatures compared to OpenBSD.

Unit without fan:
Code:
root@thu-gw1:~ # sysctl -a | grep temperat
hw.acpi.thermal.tz0.temperature: 27.9C
dev.cpu.3.temperature: 49.0C
dev.cpu.2.temperature: 49.0C
dev.cpu.1.temperature: 48.0C
dev.cpu.0.temperature: 48.0C
root@thu-gw1:~ # smartctl -A /dev/nvme0 | grep Celsius
Temperature: 67 Celsius
with fan:
Code:
root@a-vpn1:~ # sysctl -a | grep temperat
hw.acpi.thermal.tz0.temperature: 27.9C
dev.cpu.3.temperature: 43.0C
dev.cpu.2.temperature: 43.0C
dev.cpu.1.temperature: 41.0C
dev.cpu.0.temperature: 41.0C
root@a-vpn1:~ # smartctl -A /dev/nvme0 | grep Celsius
Temperature: 50 Celsius
Both are running at ~23°C room temperature.

So it's a ~7°C difference for the CPU and ~15-20°C for the NVMe.
I actually only cared about the NVMes - the CPU would be perfectly fine at 70°C or 80°C as there is still plenty of headroom to the actual TJmax of the SoC (105°C), so there is absolutely no need to add a fan just for the CPU.
If you cram some Samsung NVMe in there, which seem to run A LOT hotter than other vendors', you might *have* to use a fan, especially because Samsung starts thermal throttling relatively early and aggressively...
Thanks, this is super useful info.

A couple of comments and questions:

1. About the NVMe temperature: was it a Samsung drive without a heatsink? From my personal experience comparing a Samsung 970 EVO vs. an Intel 660, the Intel QLC runs much cooler, so if anybody is concerned about temperature in these rather cramped boxes, go with an Intel NVMe.

2. About your comment on using a fan with NVMe: these tiny N5105 boxes only run the M.2 slot in a Gen3 x1 lane configuration. Do you really think that without a fan the drive will go into thermal-throttle mode, given it's running at only ~25% of its rated transfer capability?
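To put a number on that "25%": one Gen3 lane is 8 GT/s with 128b/130b encoding, roughly 985 MB/s usable. A quick back-of-the-envelope check (the 3500 MB/s drive rating is an assumed typical Gen3 x4 spec, not a figure from this thread):

```shell
# Usable bandwidth of a single PCIe Gen3 lane vs. a typical x4 drive rating
awk 'BEGIN {
  lane = 8e9 * 128 / 130 / 8 / 1e6   # 8 GT/s, 128b/130b encoding -> MB/s
  printf "Gen3 x1 ~= %.0f MB/s, ~%.0f%% of a 3500 MB/s drive\n", lane, lane / 3500 * 100
}'
```

So "25%" is about right; the link tops out near 985 MB/s, roughly 28% of what the drive could do in an x4 slot.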
 

cat2devnull

Member
Jun 30, 2022
29
32
13
What kind of Samsung drive are you using?
I am using a 970 Evo Plus and a Seagate BarraCuda 510.
Judging SSD heat/temp/power profiles is almost impossible.
According to the docs, the 970 peaks at 9W vs. 5.3W for the 510.
But if you look at the supported power modes in the SMART tables, the 970 at full power (mode 0) peaks at 7.8W and the 510 at 9.48W.

Then when it comes to temps, it depends where the vendor has located the temp sensor:
PCB vs. controller vs. DRAM cache vs. NAND.
And which of those is the most important to pay attention to, if you can see them at all. Some vendors will publish via SMART one, all, or a combined hybrid of multiple. It's really hard to tell what's what.

In my system there is consistently a 6-8°C difference in temps as reported via SMART, with the 970 being the hotter drive.

About your comment on using a fan with NVMe: these tiny N5105 boxes only run the M.2 slot in a Gen3 x1 lane configuration. Do you really think that without a fan the drive will go into thermal-throttle mode, given it's running at only ~25% of its rated transfer capability?
In traditional usage patterns I agree that the drives do not get overly hot, but in a system where, for example, you perform a parity check and read the drive from beginning to end in one go, even a Gen3 x1 connection can cause the drives to bake (although probably not throttle).
My 970 will hit >75°C during the 30-minute parity check and the 510 will tend to settle in at 65°C.
Both drives have heatsinks attached, but in a case with only passive cooling, heatsinks behave as the name suggests: they only act as a sink for the thermal load. The peak temp is within 1-2°C with/without, but without a heatsink you hit the peak in a few minutes. With a heatsink it takes 10-15 minutes.
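That "thermal mass, not cooler" behavior can be sketched with a lumped RC model: with no airflow the heatsink mainly adds heat capacity C, stretching the time constant tau = R*C without moving the steady-state temperature T_ambient + P*R. All the numbers below are illustrative assumptions, not measurements:

```shell
# Lumped thermal model of an NVMe drive: same steady-state temperature,
# but the heatsink's extra heat capacity delays when you reach it.
awk 'BEGIN {
  P = 5; R = 10; Tamb = 25            # watts, K/W, degC (assumed values)
  split("20 200", C)                  # J/K: bare drive vs. with heatsink
  for (i = 1; i <= 2; i++)
    printf "C=%3d J/K: steady-state %.0f degC, tau ~%.0f min\n",
           C[i], Tamb + P * R, R * C[i] / 60
}'
```

With the same thermal resistance R, both cases settle at the same temperature; the heatsink only changes how many minutes it takes to get there, which matches the "few minutes vs. 10-15 minutes" observation.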

Just my 2c. :)
 

GreenAvacado

Active Member
Sep 25, 2022
121
48
28
In traditional usage patterns I agree that the drives do not get overly hot, but in a system where, for example, you perform a parity check and read the drive from beginning to end in one go, even a Gen3 x1 connection can cause the drives to bake (although probably not throttle).
My 970 will hit >75°C during the 30-minute parity check and the 510 will tend to settle in at 65°C.
Both drives have heatsinks attached, but in a case with only passive cooling, heatsinks behave as the name suggests: they only act as a sink for the thermal load. The peak temp is within 1-2°C with/without, but without a heatsink you hit the peak in a few minutes. With a heatsink it takes 10-15 minutes.

Just my 2c. :)
Ouch on >75°C.

Just read a comment by a Crucial rep in an Amazon Q&A about their NVMe stating the operating temp is 0-70°C. Temperatures above 70°C will void the warranty o_O

I hope Samsung is more lax about this warranty clause.
 

cat2devnull

Member
Jun 30, 2022
29
32
13
Seagate;
Warning Comp. Temp. Threshold: 75 Celsius
Critical Comp. Temp. Threshold: 80 Celsius

Samsung;
Warning Comp. Temp. Threshold: 85 Celsius
Critical Comp. Temp. Threshold: 85 Celsius

Each drive keeps a log of hours in operation at each temp threshold.
I'm not sure if they throttle at their warning or critical temp, not that there is a difference for Samsung.

I work my drives pretty hard as cache drives for a heavily used NAS and also for a multi-camera Frigate DVR, so I am probably cranking more TBW than the average user. That's why I tend to stick with Samsung: the TBW warranty is significantly more generous than most other vendors' and they are pretty price-competitive in the high-end Gen3 market.
The only negative is they are slow to process a warranty claim, usually taking 2-3 months, so I normally buy a replacement drive and then claim the warranty as a refund rather than a replacement. They have never complained about a drive having been run hot; I doubt they even check.
 
  • Like
Reactions: GreenAvacado