26/02/26 - Updated with more information on the NIC upgrades.
Thought I would write a post on my efforts to optimise the CWWK CW-NAS-ADLN-K (the purple board). I pulled most of this data from various posts in the preceding 61 pages. Credit to the numerous posters who pointed me in the right direction. The info below is generally applicable to many CWWK and similar boards.
Hardware Heatsink:
I started this process because my server was suffering from terrible thermals and poor power efficiency. It would idle at nearly 30w and >45degC. Under any load the CPU would hit 90degC.
First I removed the copper heatsink (4 screws) and cleaned off the large amount of thermal paste. I replaced it with 0.2mm thick PTM7950. I reattached the heatsink, then removed it again to check the contact and found an incomplete impression of the GPU die. I had to add a second layer just over the GPU die to make good contact. I also used PTM7950 between the copper heatsink and my passive radiator (the board is in a 1U rack case).
BIOS Upgrade:
Then I upgraded the BIOS to the latest version. Here are the two most recent versions with their release notes:
- CWRKA02 2025-04-22 - Release Note: Fix xHCI & Optimize power management
- CWRKA03-T1 2025-04-25 - Release Note: Fix xHCI USB / Power MGT / Beep
The installation was easy: I just made a bootable USB with Ventoy and dropped the .iso onto it. After flashing, the system booted much faster and the boot screen now displays in a high-res mode.
BIOS Config to Enable ASPM:
Then I booted into the BIOS and made the following changes:
- Advanced -> Power & Performance -> CPU -> Boot Performance Mode: Max Battery
- Advanced -> Power & Performance -> CPU -> Package C State Limit: C10
- Advanced -> ACPI Settings -> Enable ACPI Auto Configuration: Enabled
- Chipset -> System Agent -> Graphics Configuration -> RC1p Support: Enabled
- Chipset -> System Agent -> DMI/OPI Configuration -> DMI Gen3 ASPM: ASPM L1
- Chipset -> System Agent -> DMI/OPI Configuration -> DMI ASPM: ASPM L1
- Chipset -> PCH-IO Configuration -> PCI Express Configuration -> DMI Link ASPM Control: L1
(Repeat the following for Root Port 1, 2, 3, 4, 7, 9)
- Chipset -> PCH-IO Configuration -> PCI Express Configuration -> PCI Express Root Port 1 -> ASPM: L1
- Chipset -> PCH-IO Configuration -> PCI Express Configuration -> PCI Express Root Port 1 -> L1 Substates: L1.1 & L1.2
- Chipset -> PCH-IO Configuration -> PCI Express Configuration -> PCI Express Root Port 1 -> L1 Low: Enabled
Note: There is a bug in this BIOS: every time you enter Setup it forgets the three PCI Express Root Port settings (ASPM, L1 Substates and L1 Low) on every port. To work around this, save the configuration as the User Defaults before rebooting. Then whenever you enter the BIOS again, first Restore User Defaults (not Restore Defaults) and the root port settings will come back; make any changes you need and again save to User Defaults before exiting.
- Save & Exit -> Save as User Defaults
Optional: To enable software/OS fan control
- Advanced -> Hardware Monitor -> CPU Smart Fan Mode -> Software Mode
- Advanced -> Hardware Monitor -> Sys Fan Mode -> Software Mode
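Once the fans are in Software Mode the OS can drive them through the sysfs hwmon PWM nodes. A minimal sketch, assuming a hwmon path you would need to confirm on your own board first:

```shell
#!/bin/sh
# Sketch: drive a fan from the OS once the BIOS fan mode is "Software Mode".
# The hwmon index and pwm node below are assumptions - find yours with:
#   grep . /sys/class/hwmon/hwmon*/name

# Convert a fan duty percentage (0-100) to the 0-255 range the pwm node expects.
pct_to_pwm() { echo $(( $1 * 255 / 100 )); }

PWM=/sys/class/hwmon/hwmon2/pwm1      # assumed CPU fan node - check yours
if [ -w "$PWM" ]; then
    echo 1 > "${PWM}_enable"          # 1 = manual PWM control
    pct_to_pwm 50 > "$PWM"            # run the fan at ~50% duty
fi
```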
The i226 has issues when ASPM is enabled. Because I run Unraid with an OPNsense VM bound to the second NIC, I had to disable ASPM for the NIC used by OPNsense; the Linux igc driver works around the issue but the FreeBSD driver does not. The PCIe layout is as follows:
- Root Port 1: 01:00.0 NVMe 1
- Root Port 2: 02:00.0 NVMe 2
- Root Port 3: 03:00.0 ASM1166 SATA Controller
- Root Port 4: 04:00.0 I226-V
- Root Port 7: 05:00.0 I226-V
- Root Port 9: 06:00.0 PCIe x4 slot
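If you only need ASPM off for one device (the OPNsense NIC in my case) rather than globally, it can also be done from Linux with setpci. A hedged sketch - the address 05:00.0 matches my layout above, and you should double-check the register math before pointing it at your own hardware:

```shell
#!/bin/sh
# Sketch: clear the ASPM control bits for a single PCIe device, leaving
# ASPM enabled everywhere else. 05:00.0 is the second i226 in my layout.

# Clear bits 1:0 of a hex byte (the ASPM Control field: 00 = disabled).
clear_aspm_bits() { printf '%02x' $(( 0x$1 & 0xfc )); }

DEV=05:00.0
if command -v setpci >/dev/null 2>&1 && [ -e "/sys/bus/pci/devices/0000:$DEV" ]; then
    # CAP_EXP+0x10 is the Link Control register in the PCIe capability.
    cur=$(setpci -s "$DEV" CAP_EXP+10.b)
    setpci -s "$DEV" CAP_EXP+10.b="$(clear_aspm_bits "$cur")"
fi
```

Note this does not survive a reboot; it would need to go in a boot script.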
Upgrade Intel NIC Firmware:
Then I upgraded the i226 NIC firmware (hoping it would fix FreeBSD - spoiler... it didn't). To do this, download the latest Intel Ethernet Driver Pack (which contains the flash tool) and the latest i226 firmware. I used the FXVL_125C_V_2MB_2.32.bin file as my board shipped with the 2MB v2.14 (FXVL_125C_V_2MB_2.14.bin) from the factory.
In the Intel Driver Pack there is a file /Release_31.0/NVMUpdatePackage/I225_NVMUpdatePackage_v1_00_Linux.tar.gz
In that archive you will find the flash tool 'nvmupdate64e'.
Before you can do the update you must create a valid nvmupdate.cfg. I was going from the 2MB v2.14 (EtrackID 8000028D) to the 2MB v2.32 FXVL_125C_V_2MB_2.32.bin (80000422). I basically followed the Hung Vu guide, but my nvmupdate.cfg looked like this:
Code:
CURRENT FAMILY: 1.0.0
CONFIG VERSION: 1.20.0
; NIC device
BEGIN DEVICE
DEVICENAME: Intel(R) Ethernet Controller I226-V
VENDOR: 8086
DEVICE: 125C
SUBVENDOR: 8086
SUBDEVICE: 0000
NVM IMAGE: FXVL_125C_V_2MB_2.32.bin
EEPID: 80000422
RESET TYPE: REBOOT
REPLACES: 8000028D
END DEVICE
If you want to update from/to a different version, download the appropriate firmware and change NVM IMAGE to the correct image filename. Also update the EtrackIDs of the old and new firmware on the REPLACES and EEPID lines respectively. The EtrackIDs can be found on the i226 firmware repository.
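Since only those three values change between firmware pairs, one way to avoid hand-editing mistakes is to generate the cfg from variables. A small sketch using the values from my own upgrade - substitute your image name and EtrackIDs:

```shell
#!/bin/sh
# Sketch: generate nvmupdate.cfg from the three values that change between
# firmware versions. The IDs below are from my v2.14 -> v2.32 upgrade.
OLD_ID=8000028D                     # EtrackID of the firmware currently flashed
NEW_ID=80000422                     # EtrackID of the new image
IMAGE=FXVL_125C_V_2MB_2.32.bin      # new firmware image filename

cat > nvmupdate.cfg <<EOF
CURRENT FAMILY: 1.0.0
CONFIG VERSION: 1.20.0
; NIC device
BEGIN DEVICE
DEVICENAME: Intel(R) Ethernet Controller I226-V
VENDOR: 8086
DEVICE: 125C
SUBVENDOR: 8086
SUBDEVICE: 0000
NVM IMAGE: $IMAGE
EEPID: $NEW_ID
RESET TYPE: REBOOT
REPLACES: $OLD_ID
END DEVICE
EOF
```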
Once you have created your nvmupdate.cfg you can run the firmware update tool (the executable, the firmware .bin file and nvmupdate.cfg must all be in the same directory):
To see the current firmware version:
nvmupdate64e -i -l
To update the firmware:
nvmupdate64e -l
The upgrade seems to fail in a variety of ways if the interface is in use or bound via VFIO to a VM, even if the VM isn't running. When I upgraded my board it failed for both NICs on the first attempt: Unraid was using the first NIC and my OPNsense VM was bound to the second (even though the VM was shut down). So I removed the VFIO binding, rebooted, and could then upgrade the second NIC. Then I moved the Unraid OS onto the second NIC, rebooted, and could upgrade the first.
Configure the OS:
I had to add pcie_aspm=force to my Linux boot arguments because the motherboard advertises that it does not support ASPM, so the OS needs to override that.
To check what your board is advertising, use this command:
dmesg | grep -i aspm
To check that ASPM is enabled for each device:
lspci -vvv | awk '/ASPM/{print $0}' RS= | grep --color -P '(^[a-z0-9:.]{7}|ASPM)'
To check what ASPM mode the system is using:
cat /sys/module/pcie_aspm/parameters/policy
To set the ASPM mode:
echo "powersave" | sudo tee /sys/module/pcie_aspm/parameters/policy
I also installed powertop to tune everything, and added powertop --auto-tune to my boot up script.
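For reference, the relevant part of my boot script looks roughly like this (on Unraid that's /boot/config/go; the guards are my addition so it silently no-ops on a box where a path or tool is missing):

```shell
#!/bin/sh
# Sketch of the boot-time power tuning (Unraid: appended to /boot/config/go).
# The ASPM policy resets at boot, so it has to be re-applied here.
POLICY=/sys/module/pcie_aspm/parameters/policy
[ -w "$POLICY" ] && echo powersave > "$POLICY"
command -v powertop >/dev/null 2>&1 && powertop --auto-tune
true
```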
Results:
With all of this done my N305 now idles at 14w (35degC) and draws 18w under normal real-world load (65degC). Power measurements are at the wall via an Athom power meter. My CPU package goes down to C3; it used to reach C6 before I disabled ASPM for the NIC. I have read that people got the board to C8 after upgrading the BIOS, but I never tested the combination of the new BIOS with ASPM on all root ports. I could go lower by disabling the onboard ASM1166 controller (I use dual NVMe drives and no SATA), turning off the audio chip, switching to a PICO PSU, etc., but that's heading into the realm of diminishing returns. Also, because my server runs a lot of services 24x7 it almost never gets a chance to reach the lower C states anyway.
I hope this helps anyone who wants to follow suit.