Minisforum MS-01 Review The 10GbE with PCIe Slot Mini PC

  • Thread starter Patrick Kennedy
  • Start date

Fazio

New Member
Dec 20, 2022
28
21
3
I wanted to tweak the fan curves (the defaults are very peaky - the constant change of pitch was distracting me).

After playing around in BIOS, I found a Windows Fan Control app on GitHub which auto-detected everything and seems to have a beautiful amount of tweakability.

View attachment 33969

If anyone has an even better suggestion then let me know. It would also be great to know if there's something available for Linux, so I can implement the same in Proxmox.
Did you find any way to control the fans on Proxmox?
 

stich86

Member
May 24, 2023
40
12
8
I wanted to tweak the fan curves (the defaults are very peaky - the constant change of pitch was distracting me).

After playing around in BIOS, I found a Windows Fan Control app on GitHub which auto-detected everything and seems to have a beautiful amount of tweakability.

View attachment 33969

If anyone has an even better suggestion then let me know. It would also be great to know if there's something available for Linux, so I can implement the same in Proxmox.
We need to figure out which Super I/O chip is used for the fans.
 

nicoska

Member
Feb 8, 2023
55
10
8
Hi all, I just want to share my experience here as well (you can find the day-by-day saga in this thread). Sorry for the long post, but it has been a week full of tests.

First of all, I really love the machine. It's small, super fast, and easy to add storage, memory, and PCIe cards to. I ordered through Amazon DE (Germany) and it took one day to arrive.

The main reason I decided to buy it is to replace a really power-hungry dual EPYC 7551 box that has served as my main homelab hypervisor for the last two years, without any issue (apart from the energy bill).

So, I bought the "barebone" 13900H configuration and added the following myself:

  • Lexar NQ790 2 TB PCIe 4.0 SSD, M.2 2280, PCIe Gen4 x4, NVMe 1.4
  • Crucial RAM 96GB Kit (2x48GB) DDR5 5600MHz
  • 1/2.5/5/10Gb SFP+ RJ45 Transceiver
I did a fresh install of the latest Proxmox 8.1.4 (kernel 6.5.11-8-pve) with the latest microcode installed.

Everything was going well until I started doing some intensive jobs (e.g., starting a Windows 11 VM, but also running CPU-intensive tasks such as transcoding), and the system rebooted itself.

The error messages I was getting in the syslog were not consistent; here are a few of them:

Code:
Feb 19 12:31:42 nicoska2 kernel: Memory failure: 0x10e1b37: unhandlable page.
Feb 19 12:32:49 nicoska2 kernel: mce_notify_irq: 1 callbacks suppressed
Feb 19 12:32:49 nicoska2 kernel: mce: [Hardware Error]: Machine check events logged
Feb 19 12:32:51 nicoska2 kernel: mce: [Hardware Error]: Machine check events logged
-- Reboot --
Code:
Feb 19 23:45:58 nicoska2 kernel: vmbr0: port 7(veth106i0) entered disabled state
Feb 19 23:45:58 nicoska2 kernel: vmbr0: port 7(veth106i0) entered disabled state
Feb 19 23:45:58 nicoska2 kernel: veth106i0 (unregistering): left allmulticast mode
Feb 19 23:45:58 nicoska2 kernel: veth106i0 (unregistering): left promiscuous mode
Feb 19 23:45:58 nicoska2 kernel: vmbr0: port 7(veth106i0) entered disabled state
Feb 19 23:45:58 nicoska2 audit[29954]: AVC apparmor="STATUS" operation="profile_remove" profile="/usr/bin/lxc-start" name="lxc-106_</var/lib/lxc>" pid=29954 comm="apparmor_parser"
Feb 19 23:45:58 nicoska2 kernel: audit: type=1400 audit(1708382758.447:74): apparmor="STATUS" operation="profile_remove" profile="/usr/bin/lxc-start" name="lxc-106_</var/lib/lxc>" pid=29954 comm="apparmor_parser"
Feb 19 23:45:58 nicoska2 kernel: EXT4-fs (dm-11): unmounting filesystem fd237a8e-f4aa-49a7-97b0-c7123fb0c218.
Feb 19 23:45:58 nicoska2 systemd[1]: pve-container@106.service: Deactivated successfully.
Feb 19 23:45:58 nicoska2 systemd[1]: Stopped pve-container@106.service - PVE LXC Container: 106.
Feb 19 23:45:58 nicoska2 systemd[1]: Started pve-container@106.service - PVE LXC Container: 106.
Feb 19 23:45:59 nicoska2 kernel: EXT4-fs (dm-11): mounted filesystem fd237a8e-f4aa-49a7-97b0-c7123fb0c218 r/w with ordered data mode. Quota mode: none.
Feb 19 23:45:59 nicoska2 audit[29981]: AVC apparmor="STATUS" operation="profile_load" profile="/usr/bin/lxc-start" name="lxc-106_</var/lib/lxc>" pid=29981 comm="apparmor_parser"
Feb 19 23:45:59 nicoska2 kernel: audit: type=1400 audit(1708382759.415:75): apparmor="STATUS" operation="profile_load" profile="/usr/bin/lxc-start" name="lxc-106_</var/lib/lxc>" pid=29981 comm="apparmor_parser"
Feb 19 23:45:59 nicoska2 kernel: vmbr0: port 7(veth106i0) entered blocking state
Feb 19 23:45:59 nicoska2 kernel: vmbr0: port 7(veth106i0) entered disabled state
Feb 19 23:45:59 nicoska2 kernel: veth106i0: entered allmulticast mode
Feb 19 23:45:59 nicoska2 kernel: veth106i0: entered promiscuous mode
Feb 19 23:45:59 nicoska2 kernel: eth0: renamed from vethiNJGQT
Feb 19 23:45:59 nicoska2 kernel: vmbr0: port 7(veth106i0) entered blocking state
Feb 19 23:45:59 nicoska2 kernel: vmbr0: port 7(veth106i0) entered forwarding state
Feb 19 23:46:01 nicoska2 kernel: nfs: Deprecated parameter 'intr'
Feb 19 23:46:03 nicoska2 pvedaemon[1951]: <root@pam> successful auth for user 'root@pam'
Feb 19 23:46:06 nicoska2 pvestatd[1921]: modified cpu set for lxc/106: 1-10,12-13,15-16,18-19
Feb 19 23:53:01 nicoska2 pvedaemon[1953]: <root@pam> successful auth for user 'root@pam'
-- Reboot --
Code:
Feb 20 09:17:01 nicoska2 CRON[281696]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Feb 20 09:17:01 nicoska2 CRON[281697]: (root) CMD (cd / && run-parts --report /etc/cron.hourly)
Feb 20 09:17:01 nicoska2 CRON[281696]: pam_unix(cron:session): session closed for user root
Feb 20 09:18:58 nicoska2 kernel: mce: [Hardware Error]: Machine check events logged
Feb 20 09:21:37 nicoska2 pvedaemon[1935]: <root@pam> successful auth for user 'root@pam'
Feb 20 09:25:11 nicoska2 pveproxy[249943]: worker exit
Feb 20 09:25:11 nicoska2 pveproxy[1949]: worker 249943 finished
Feb 20 09:25:11 nicoska2 pveproxy[1949]: starting 1 worker(s)
Feb 20 09:25:11 nicoska2 pveproxy[1949]: worker 287436 started
Feb 20 09:31:54 nicoska2 pveproxy[266452]: worker exit
Feb 20 09:31:54 nicoska2 pveproxy[1949]: worker 266452 finished
Feb 20 09:31:54 nicoska2 pveproxy[1949]: starting 1 worker(s)
Feb 20 09:31:54 nicoska2 pveproxy[1949]: worker 291984 started
Feb 20 09:34:01 nicoska2 kernel: mce: [Hardware Error]: Machine check events logged
Feb 20 09:34:03 nicoska2 kernel: mce: [Hardware Error]: Machine check events logged
Feb 20 09:34:15 nicoska2 kernel: RAS: Soft-offlining pfn: 0x10b3e10
Feb 20 09:34:15 nicoska2 kernel: Memory failure: 0x10b3e10: unhandlable page.
-- Reboot --
Code:
Feb 20 17:44:16 nicoska2 pvedaemon[14727]: start VM 107: UPID:nicoska2:00003987:00007C2C:65D4D6E0:qmstart:107:root@pam:
Feb 20 17:44:16 nicoska2 pvedaemon[1948]: <root@pam> starting task UPID:nicoska2:00003987:00007C2C:65D4D6E0:qmstart:107:root@pam:
Feb 20 17:44:16 nicoska2 systemd[1]: Created slice qemu.slice - Slice /qemu.
Feb 20 17:44:16 nicoska2 systemd[1]: Started 107.scope.
Feb 20 17:44:17 nicoska2 kernel: tap107i0: entered promiscuous mode
Feb 20 17:44:17 nicoska2 kernel: vmbr0: port 17(fwpr107p0) entered blocking state
Feb 20 17:44:17 nicoska2 kernel: vmbr0: port 17(fwpr107p0) entered disabled state
Feb 20 17:44:17 nicoska2 kernel: fwpr107p0: entered allmulticast mode
Feb 20 17:44:17 nicoska2 kernel: fwpr107p0: entered promiscuous mode
Feb 20 17:44:17 nicoska2 kernel: vmbr0: port 17(fwpr107p0) entered blocking state
Feb 20 17:44:17 nicoska2 kernel: vmbr0: port 17(fwpr107p0) entered forwarding state
Feb 20 17:44:17 nicoska2 kernel: fwbr107i0: port 1(fwln107i0) entered blocking state
Feb 20 17:44:17 nicoska2 kernel: fwbr107i0: port 1(fwln107i0) entered disabled state
Feb 20 17:44:17 nicoska2 kernel: fwln107i0: entered allmulticast mode
Feb 20 17:44:17 nicoska2 kernel: fwln107i0: entered promiscuous mode
Feb 20 17:44:17 nicoska2 kernel: fwbr107i0: port 1(fwln107i0) entered blocking state
Feb 20 17:44:17 nicoska2 kernel: fwbr107i0: port 1(fwln107i0) entered forwarding state
Feb 20 17:44:17 nicoska2 kernel: fwbr107i0: port 2(tap107i0) entered blocking state
Feb 20 17:44:17 nicoska2 kernel: fwbr107i0: port 2(tap107i0) entered disabled state
Feb 20 17:44:17 nicoska2 kernel: tap107i0: entered allmulticast mode
Feb 20 17:44:17 nicoska2 kernel: fwbr107i0: port 2(tap107i0) entered blocking state
Feb 20 17:44:17 nicoska2 kernel: fwbr107i0: port 2(tap107i0) entered forwarding state
Feb 20 17:44:17 nicoska2 pvedaemon[1948]: <root@pam> end task UPID:nicoska2:00003987:00007C2C:65D4D6E0:qmstart:107:root@pam: OK
-- Reboot --
View attachment 34837

Code:
Feb 20 22:40:24 nicoska2 pvedaemon[1943]: <root@pam> end task UPID:nicoska2:00003AC3:00009039:65D51C47:qmstart:107:root@pam: OK
Feb 20 22:40:24 nicoska2 kernel: x86/split lock detection: #AC: CPU 2/KVM/15145 took a split_lock trap at address: 0x7eebd050
Feb 20 22:40:24 nicoska2 kernel: x86/split lock detection: #AC: CPU 7/KVM/15150 took a split_lock trap at address: 0x7eebd050
Feb 20 22:40:24 nicoska2 kernel: x86/split lock detection: #AC: CPU 5/KVM/15148 took a split_lock trap at address: 0x7eebd050
Feb 20 22:40:24 nicoska2 kernel: x86/split lock detection: #AC: CPU 1/KVM/15144 took a split_lock trap at address: 0x7eebd050
Feb 20 22:40:24 nicoska2 kernel: x86/split lock detection: #AC: CPU 4/KVM/15147 took a split_lock trap at address: 0x7eebd050
Feb 20 22:40:24 nicoska2 kernel: x86/split lock detection: #AC: CPU 6/KVM/15149 took a split_lock trap at address: 0x7eebd050
Feb 20 22:40:24 nicoska2 kernel: x86/split lock detection: #AC: CPU 3/KVM/15146 took a split_lock trap at address: 0x7eebd050
Feb 20 22:40:24 nicoska2 kernel: x86/split lock detection: #AC: CPU 9/KVM/15152 took a split_lock trap at address: 0x7eebd050
Feb 20 22:40:24 nicoska2 kernel: x86/split lock detection: #AC: CPU 8/KVM/15151 took a split_lock trap at address: 0x7eebd050
Feb 20 22:40:32 nicoska2 pvedaemon[15287]: starting vnc proxy UPID:nicoska2:00003BB7:000093BD:65D51C50:vncproxy:107:root@pam:
Feb 20 22:40:32 nicoska2 pvedaemon[1944]: <root@pam> starting task UPID:nicoska2:00003BB7:000093BD:65D51C50:vncproxy:107:root@pam:
Feb 20 22:40:33 nicoska2 pvedaemon[1945]: VM 107 qmp command failed - VM 107 qmp command 'guest-ping' failed - got timeout
Feb 20 22:40:37 nicoska2 pveproxy[1960]: detected empty handle
Feb 20 22:40:48 nicoska2 kernel: kvm: vcpu 4: requested 19791 ns lapic timer period limited to 200000 ns
Feb 20 22:40:48 nicoska2 kernel: kvm: vcpu 5: requested 19791 ns lapic timer period limited to 200000 ns
Feb 20 22:40:48 nicoska2 kernel: kvm: vcpu 9: requested 19791 ns lapic timer period limited to 200000 ns
Feb 20 22:40:48 nicoska2 kernel: kvm: vcpu 8: requested 19791 ns lapic timer period limited to 200000 ns
Feb 20 22:40:48 nicoska2 kernel: kvm: vcpu 2: requested 19791 ns lapic timer period limited to 200000 ns
Feb 20 22:40:48 nicoska2 kernel: kvm: vcpu 6: requested 19791 ns lapic timer period limited to 200000 ns
Feb 20 22:40:48 nicoska2 kernel: kvm: vcpu 1: requested 19791 ns lapic timer period limited to 200000 ns
Feb 20 22:40:48 nicoska2 kernel: kvm: vcpu 3: requested 19791 ns lapic timer period limited to 200000 ns
Feb 20 22:40:48 nicoska2 kernel: kvm: vcpu 7: requested 19791 ns lapic timer period limited to 200000 ns
-- Reboot --
What I tried so far:

  1. Swapped the RAM modules: system crash on heavy load
  2. BIOS factory reset: system crash on heavy load
  3. Ran Memtest86+: no errors, test passed, but system crash on heavy load
  4. Removed the SFP+ transceiver: system crash on heavy load
  5. Disabled C-states and Speed Shift (in BIOS): system crash on heavy load
  6. Lowered the TDP limits (in BIOS) from the defaults (PL1: 60000, PL2: 80000) to PL1: 40000 / PL2: 60000: system crash on heavy load
  7. Disabled the Efficiency cores (in BIOS), now running only 12 P cores: the system works perfectly, even under heavy load
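As one more data point before deciding on a return, the machine-check, split-lock, and memory-failure events can be counted straight from a saved kernel log. A minimal sketch; the `count_mce` helper and the log path are my own illustration, not something from the BIOS or Proxmox:

```shell
#!/bin/sh
# count_mce LOG: count machine-check, split-lock, and memory-failure lines
# in a saved kernel log dump. Helper name and paths are illustrative only.
count_mce() {
    grep -Ec 'mce: \[Hardware Error\]|split_lock trap|Memory failure' "$1"
}

# Typical use on the host (assumption):
#   dmesg > /tmp/kern.log && count_mce /tmp/kern.log
```

Comparing the count under the same heavy load with E-cores enabled versus P-cores only would make the "faulty E-core side" suspicion more concrete than just watching for reboots.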

So, my question now is: do you guys think I received a bad unit with a faulty CPU? I still have time to send it back to Amazon.
What would you do in my situation? Are there any additional tests I could perform?

Thanks for reading this long post.

Nico
 

Fazio

New Member
Dec 20, 2022
28
21
3
Hi all, I just want to share my experience here as well (you can find the day-by-day saga in this thread).

[…]

So, my question now is: do you guys think I received a bad unit with a faulty CPU? I still have time to send it back to Amazon.
What would you do in my situation? Are there any additional tests I could perform?

Thanks for reading this long post.

Nico
I would replace it immediately, since you bought the unit on Amazon.
 

FingerBlaster

Member
Feb 27, 2019
92
42
18
Just for the sake of argument, will it crash if you are running E-cores only?

Also, how many cores have you given your Windows VM?
 

ms264556

Well-Known Member
Sep 13, 2021
358
288
63
New Zealand
ms264556.net
We need to figure out which Super I/O chip is used for the fans.
On Windows (since I ended up running Proxmox nested in Hyper-V)...
FanControl has its own identifiers (/lpc/nct6796dr/fan/0 & /lpc/nct6796dr/control/0 to sense/control the CPU fan and /lpc/nct6796dr/fan/1 & /lpc/nct6796dr/control/1 for the storage fan).

HWiNFO64 shows the actual SuperIO as a Nuvoton NCT5585D.
 

stich86

Member
May 24, 2023
40
12
8
On Windows (since I ended up running Proxmox nested in Hyper-V)...
FanControl has its own identifiers (/lpc/nct6796dr/fan/0 & /lpc/nct6796dr/control/0 to sense/control the CPU fan and /lpc/nct6796dr/fan/1 & /lpc/nct6796dr/control/1 for the storage fan).

HWiNFO64 shows the actual SuperIO as a Nuvoton NCT5585D.
It looks like the fan speeds can be read in Proxmox after loading the nct6798 module:

Code:
nct6798-isa-0a20
Adapter: ISA adapter
in0:                   328.00 mV (min =  +0.00 V, max =  +1.74 V)
in1:                     1.04 V  (min =  +0.00 V, max =  +0.00 V)  ALARM
in2:                     3.34 V  (min =  +0.00 V, max =  +0.00 V)  ALARM
in3:                     3.34 V  (min =  +0.00 V, max =  +0.00 V)  ALARM
in4:                     1.10 V  (min =  +0.00 V, max =  +0.00 V)  ALARM
in5:                   136.00 mV (min =  +0.00 V, max =  +0.00 V)  ALARM
in6:                   120.00 mV (min =  +0.00 V, max =  +0.00 V)  ALARM
in7:                     3.34 V  (min =  +0.00 V, max =  +0.00 V)  ALARM
in8:                     3.14 V  (min =  +0.00 V, max =  +0.00 V)  ALARM
in9:                     1.02 V  (min =  +0.00 V, max =  +0.00 V)  ALARM
in10:                  144.00 mV (min =  +0.00 V, max =  +0.00 V)  ALARM
in11:                  112.00 mV (min =  +0.00 V, max =  +0.00 V)  ALARM
in12:                    1.01 V  (min =  +0.00 V, max =  +0.00 V)  ALARM
in13:                  144.00 mV (min =  +0.00 V, max =  +0.00 V)  ALARM
in14:                    1.28 V  (min =  +0.00 V, max =  +0.00 V)  ALARM
fan1:                  2073 RPM  (min =    0 RPM)
fan2:                  1419 RPM  (min =    0 RPM)
fan3:                     0 RPM  (min =    0 RPM)
fan4:                     0 RPM  (min =    0 RPM)
fan5:                     0 RPM  (min =    0 RPM)
fan7:                     0 RPM  (min =    0 RPM)
SYSTIN:                +121.0°C  (high = +80.0°C, hyst = +75.0°C)  sensor = thermistor
CPUTIN:                 +34.5°C  (high = +80.0°C, hyst = +75.0°C)  sensor = thermistor
AUXTIN0:               +112.0°C    sensor = thermistor
AUXTIN1:               +115.0°C    sensor = thermistor
AUXTIN2:               +117.0°C    sensor = thermistor
AUXTIN3:                -41.0°C    sensor = thermal diode
PECI Agent 0:           +37.0°C  (high = +80.0°C, hyst = +75.0°C)
                                 (crit = +100.0°C)
PCH_CHIP_CPU_MAX_TEMP:   +0.0°C
PCH_CHIP_TEMP:           +0.0°C
PCH_CPU_TEMP:            +0.0°C
intrusion0:            ALARM
intrusion1:            ALARM
beep_enable:           disabled

nvme-pci-0100
Adapter: PCI adapter
Composite:    +31.9°C  (low  =  -0.1°C, high = +69.8°C)
                       (crit = +84.8°C)
Not sure if it can be controlled by Linux
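A sketch for Linux/Proxmox (my assumption, not confirmed in the thread): the Nuvoton NCT679x-family SuperIO is handled by the in-kernel nct6775 driver. If loading it makes fan*_input files appear under hwmon, then pwmconfig/fancontrol from the lm-sensors tooling should be able to drive the fans:

```shell
# Assumption: the SuperIO is a Nuvoton NCT679x, driven by the nct6775 module.
# Load it, then look for fan tach inputs under hwmon.
if modprobe nct6775 2>/dev/null; then
    echo "nct6775 loaded"
else
    echo "nct6775 not loaded (may need acpi_enforce_resources=lax on the kernel cmdline)"
fi
ls /sys/class/hwmon/hwmon*/fan*_input 2>/dev/null || echo "no fan inputs visible"
```
On some boards the ACPI firmware claims the SuperIO's I/O range, in which case the driver only binds after booting with `acpi_enforce_resources=lax`.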
 

stich86

Member
May 24, 2023
40
12
8
OK, it looks like you can manage the fans on PVE with "fancontrol":

Code:
# pwmconfig version 3.6.0
This program will search your sensors for pulse width modulation (pwm)
controls, and test each one to see if it controls a fan on
your motherboard. Note that many motherboards do not have pwm
circuitry installed, even if your sensor chip supports pwm.

We will attempt to briefly stop each fan using the pwm controls.
The program will attempt to restore each fan to full speed
after testing. However, it is ** very important ** that you
physically verify that the fans have been set to full speed
after the program has completed.

Found the following devices:
   hwmon0 is acpitz
   hwmon1 is nvme
   hwmon2 is coretemp
   hwmon3 is nct6798

Found the following PWM controls:
   hwmon3/pwm1           current value: 70
hwmon3/pwm1 is currently setup for automatic speed control.
In general, automatic mode is preferred over manual mode, as
it is more efficient and it reacts faster. Are you sure that
you want to setup this output for manual control? (n) y
   hwmon3/pwm2           current value: 54
hwmon3/pwm2 is currently setup for automatic speed control.
In general, automatic mode is preferred over manual mode, as
it is more efficient and it reacts faster. Are you sure that
you want to setup this output for manual control? (n) y
   hwmon3/pwm3           current value: 153
   hwmon3/pwm4           current value: 153
   hwmon3/pwm5           current value: 153
   hwmon3/pwm7           current value: 70
hwmon3/pwm7 is currently setup for automatic speed control.
In general, automatic mode is preferred over manual mode, as
it is more efficient and it reacts faster. Are you sure that
you want to setup this output for manual control? (n) y

Giving the fans some time to reach full speed...
Found the following fan sensors:
   hwmon3/fan1_input     current speed: 4639 RPM
   hwmon3/fan2_input     current speed: 4041 RPM
   hwmon3/fan3_input     current speed: 0 ... skipping!
   hwmon3/fan4_input     current speed: 0 ... skipping!
   hwmon3/fan5_input     current speed: 0 ... skipping!
   hwmon3/fan7_input     current speed: 0 ... skipping!

Warning!!! This program will stop your fans, one at a time,
for approximately 5 seconds each!!!
This may cause your processor temperature to rise!!!
If you do not want to do this hit control-C now!!!
Hit return to continue:

Testing pwm control hwmon3/pwm1 ...
  hwmon3/fan1_input ... speed was 4639 now 0
    It appears that fan hwmon3/fan1_input
    is controlled by pwm hwmon3/pwm1
Would you like to generate a detailed correlation (y)? y
    PWM 255 FAN 4639
    PWM 240 FAN 4485
    PWM 225 FAN 4326
    PWM 210 FAN 4153
    PWM 195 FAN 3994
    PWM 180 FAN 3835
    PWM 165 FAN 3648
    PWM 150 FAN 3443
    PWM 135 FAN 3229
    PWM 120 FAN 3006
    PWM 105 FAN 2760
    PWM 90 FAN 2490
    PWM 75 FAN 2191
    PWM 60 FAN 1867
    PWM 45 FAN 1490
    PWM 30 FAN 1030
    PWM 28 FAN 944
    PWM 26 FAN 880
    PWM 24 FAN 802
    PWM 22 FAN 707
    PWM 20 FAN 622
    PWM 18 FAN 588
    PWM 16 FAN 524
    PWM 14 FAN 409
    PWM 12 FAN 0
    Fan Stopped at PWM = 12

  hwmon3/fan2_input ... speed was 4041 now 4179
    no correlation

Testing pwm control hwmon3/pwm2 ...
  hwmon3/fan1_input ... speed was 4639 now 4639
    no correlation
  hwmon3/fan2_input ... speed was 4041 now 542
    It appears that fan hwmon3/fan2_input
    is controlled by pwm hwmon3/pwm2
Would you like to generate a detailed correlation (y)? y
    PWM 255 FAN 3760
    PWM 240 FAN 3739
    PWM 225 FAN 3770
    PWM 210 FAN 3358
    PWM 195 FAN 3221
    PWM 180 FAN 3075
    PWM 165 FAN 3169
    PWM 150 FAN 2878
    PWM 135 FAN 2800
    PWM 120 FAN 2631
    PWM 105 FAN 2335
    PWM 90 FAN 2112
    PWM 75 FAN 1928
    PWM 60 FAN 1578
    PWM 45 FAN 1248
    PWM 30 FAN 794
    PWM 28 FAN 730
    PWM 26 FAN 791
    PWM 24 FAN 731
    PWM 22 FAN 791
    PWM 20 FAN 734
    PWM 18 FAN 215
    PWM 16 FAN 0
    Fan Stopped at PWM = 16


Testing pwm control hwmon3/pwm3 ...
  hwmon3/fan1_input ... speed was 4639 now 4623
    no correlation
  hwmon3/fan2_input ... speed was 4041 now 3835
    no correlation
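The pairings pwmconfig found above (on the nct6798 at hwmon3: pwm1 drives fan1, pwm2 drives fan2, stall points at PWM 12 and 16) translate into an /etc/fancontrol file roughly like the sketch below. The temperature sources, thresholds, and MINSTOP values are my assumptions, and hwmon indices can change between boots, so regenerate with pwmconfig on your own unit rather than copying this verbatim:

```
# /etc/fancontrol -- sketch, not a drop-in config
# DEVPATH is board-specific; use the value pwmconfig writes for you.
INTERVAL=10
DEVNAME=hwmon1=nvme hwmon2=coretemp hwmon3=nct6798
# CPU fan follows the CPU package temp, second fan follows the NVMe temp
FCTEMPS=hwmon3/pwm1=hwmon2/temp1_input hwmon3/pwm2=hwmon1/temp1_input
FCFANS=hwmon3/pwm1=hwmon3/fan1_input hwmon3/pwm2=hwmon3/fan2_input
MINTEMP=hwmon3/pwm1=40 hwmon3/pwm2=35
MAXTEMP=hwmon3/pwm1=75 hwmon3/pwm2=65
MINSTART=hwmon3/pwm1=30 hwmon3/pwm2=30
# just above the stall points found above (12 and 16)
MINSTOP=hwmon3/pwm1=14 hwmon3/pwm2=18
```
On Debian/Proxmox the fancontrol package ships a systemd unit, so once the file validates you can enable it with `systemctl enable --now fancontrol`.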
 

stich86

Member
May 24, 2023
40
12
8
Just need to work out which fans FAN1 and FAN2 actually are.. currently I'm connected remotely, so the tests aren't very useful.
I'll try to figure out the best setup for my needs :)

looks like:

FAN1 = CPU
FAN2 = SSD

FAN1@3400rpm keeps CPU at 30/33°C
FAN2@1500rpm keeps SSD at 29°C

Currently my unit isn't in a good spot for proper airflow, so I need to reposition it before tuning these values further.
 
Last edited:

DaveLTX

Active Member
Dec 5, 2021
170
40
28
More likely 4 lanes. (which is more than sufficient for full 2-port bandwidth)
I don't believe that this chipset (or, in fact, any in last 5+ years) allows more than x4 for any root port. [i'm very interested to hear of an exception]

@mach3.2 , you're probably thinking: If x4 (Gen3) lanes is more than enough, why did Intel design the chip for x8 lanes? Well, Intel doesn't make simple mistakes--so think a little more. :)
=====
[of course, INTC has made a few BIG mistakes.]
Because the X710 shares its silicon design with the XL710... and what if someone hooks it up to a Gen2 slot? x8 is still more than enough there, and it doesn't hurt to have it.
So it shares silicon with the XL710, and they found it pointless to disable the extra lanes?
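The lane-count reasoning above can be sanity-checked with rough numbers (a sketch; the line rates and encodings are the standard PCIe figures, not values from the thread). Gen3 x4 comfortably exceeds 2x10GbE line rate, but a Gen2 x4 link would fall short, so the extra lanes aren't pointless in an older slot:

```python
# Back-of-the-envelope PCIe bandwidth vs. 2x10GbE line rate.
def usable_gbytes_per_s(gigatransfers, lanes, encoding):
    """Usable PCIe bandwidth in GB/s: GT/s * lanes * encoding efficiency / 8 bits."""
    return gigatransfers * lanes * encoding / 8

two_port_10gbe = 20 / 8                             # 2x10GbE = 2.5 GB/s
gen3_x4 = usable_gbytes_per_s(8.0, 4, 128 / 130)    # ~3.94 GB/s, 128b/130b encoding
gen2_x8 = usable_gbytes_per_s(5.0, 8, 8 / 10)       # ~4.00 GB/s, 8b/10b encoding
gen2_x4 = usable_gbytes_per_s(5.0, 4, 8 / 10)       # ~2.00 GB/s -- below line rate

print(f"Gen3 x4: {gen3_x4:.2f} GB/s")
print(f"Gen2 x8: {gen2_x8:.2f} GB/s")
print(f"Gen2 x4: {gen2_x4:.2f} GB/s (below the {two_port_10gbe:.1f} GB/s needed)")
```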
 
  • Like
Reactions: mach3.2

DaveLTX

Active Member
Dec 5, 2021
170
40
28
I ordered on the 10th. If I remember right I saw a ship date of Jan 20th, nearly had a heart attack looking at the site now and it saying March (I'm moving in March). Googled a bit and looks like pre Jan 15th are not affected by the new date. Anyways looking forward to playing with it.

My setup will be 64gigs linked by STH youtube. I didn't see anything on if it supported bifurcation, but if it does, great 2 extra m.2 slots, if not, ok 1 extra m.2 slot. I went with GLOTRENDS PA41 Quad M.2 card. Also purchased MZQL215THBLA-00A07 15tb U.2 :cool: biggest & cheapest 7mm u.2 drive I could find.

Software wise, I'll be running proxmox with plex (on hardware, don't want to bother with passthrough for quicksync) and samba, and a bunch of VMs for anything else, pfsense, unifi, whatever else I need. Primary functions, Router, NAS, and plex server.
Intel has never supported bifurcation on anything below server grade (or even Xeon E), and it absolutely won't work here.
 

nicoska

Member
Feb 8, 2023
55
10
8
No, didn't try with only ecores, but I will and let you know.
So I was able to test with all E-cores enabled and just 1 P-core enabled (it's not possible to disable all P-cores), and no issue here.
Really strange.

Now I'm going back to test with all cores enabled and see what happens.
 
  • Like
Reactions: SBMe

stich86

Member
May 24, 2023
40
12
8
In the meantime, I've fixed the poweroff issue by updating the microcode and moving to the latest kernel on PVE 8.1.4.
Poweroff now works as expected.

I want to put the machine under some heavy load before swapping my custom build out for production. Any suggestions for what to run under PVE/Linux?

Thanks!
 

Fazio

New Member
Dec 20, 2022
28
21
3
OK, it looks like you can manage the fans on PVE with "fancontrol":

Code:
# pwmconfig version 3.6.0
...
Testing pwm control hwmon3/pwm1 ...
  hwmon3/fan1_input ... speed was 4639 now 0
    It appears that fan hwmon3/fan1_input
    is controlled by pwm hwmon3/pwm1
...
Testing pwm control hwmon3/pwm2 ...
  hwmon3/fan2_input ... speed was 4041 now 542
    It appears that fan hwmon3/fan2_input
    is controlled by pwm hwmon3/pwm2
...
Great! How did you load the module?
 

nicoska

Member
Feb 8, 2023
55
10
8
In the meantime, I've fixed the poweroff issue by updating the microcode and moving to the latest kernel on PVE 8.1.4.
Poweroff now works as expected.

I want to put the machine under some heavy load before swapping my custom build out for production. Any suggestions for what to run under PVE/Linux?

Thanks!
I updated to the latest kernel and latest microcode. To test, right now I'm using this script:

Code:
wget -qO- yabs.sh | bash
You can skip the network tests by running this:
Code:
wget -qO- yabs.sh | bash -s -- -n -i
My MS-01 crashed on:

Running GB6 benchmark test... *cue elevator music*
client_loop: send disconnect: Broken pipe
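That "client_loop: send disconnect: Broken pipe" line is the SSH session dropping during the long Geekbench stage. Whether or not the machine itself also went down, running the benchmark detached (a generic workaround, not something from the thread) keeps it alive across a disconnect and leaves a log to inspect afterwards:

```shell
# Run yabs detached with output logged, so an SSH "Broken pipe"
# doesn't kill the benchmark mid-run.
nohup bash -c 'wget -qO- yabs.sh | bash -s -- -n -i' > yabs.log 2>&1 &
echo "benchmark started; follow progress with: tail -f yabs.log"
```
Alternatively, run it inside tmux or screen and reattach after reconnecting. If the log just stops mid-benchmark with no SSH error, that points at the machine rather than the session.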