CWWK/Topton/... Nxxx quad NIC router


alkersan

New Member
Jun 15, 2025
18
4
3
Hello everyone,
I have a CWWK P86 N310 with 32GB RAM. I installed Windows on it and it seems to work OK. However, the Ethernet ports seem to be disabled by the BIOS. Is there any way to fix this?
Do you see unknown devices in Windows Device Manager? Or, even better, open HWiNFO and see what's on the PCIe bus.
And could you also double-check which CWWK model you have, or better yet, post a link?
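If you can boot a Linux live USB, another quick cross-check is whether the NICs appear on the PCIe bus at all, since BIOS-disabled devices usually vanish entirely. A sketch (the Intel i226-V device ID 8086:125c is an assumption for this board; check against your unit):

```shell
# List any Ethernet controllers visible on the PCIe bus.
# If the BIOS has disabled the ports, they may not show up here at all.
OUT=$(lspci -nn 2>/dev/null | grep -i 'ethernet' || echo "no ethernet controllers visible")
echo "$OUT"
```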
 

Liam 89

New Member
Oct 13, 2025
2
0
1
Do you see unknown devices in Windows Device Manager? Or, even better, open HWiNFO and see what's on the PCIe bus.
And could you also double-check which CWWK model you have, or better yet, post a link?
Model: MINIPC-P1
PCIe bus: CW-9X9CARD2C1105013998
I bought it from AliExpress; I have the N355 version with 36GB RAM. I upgraded the storage from 1TB to a 4TB Western Digital NVMe. https://a.aliexpress.com/_mqRRuLX

Let me know if you need anything else
 

ocny153

New Member
Apr 28, 2023
13
6
3
Hi all,

First time poster here. Been thinking of changing my home network for a while. Initially it was when thinking about installing an IP CCTV system on its own network (which I haven't done yet), then when we had FTTP internet installed (500Mbps), but deep down I think I just want to tinker and learn about this stuff. I'm a software engineer, but the network side of things can be a mystery sometimes!



Anyway, I've just placed an order for a barebones N100 directly from cwwk.net; we'll see how long it takes to reach the UK. Judging by the pictures, it should be a variation B from the first post in this thread. I've ordered 16GB of DDR5 and a 512GB NVMe SSD (both Crucial) and will take it from there. The plan is to install Proxmox and then run OPNsense in a VM. Once I'm familiar with that, I'm going to get a switch and access point from TP-Link Omada (looking at the TP-SL2008P and EAP 650) and will run the Omada controller in a Proxmox container. I want to set up some VLANs and probably a WireGuard VPN. I'm sure I'll find other things to do once it's up and running, but first I need to get the box and make sure it's stable, following the excellent information in this thread and elsewhere on the forum.
I thought I would give an update on my experience with the CWWK N100 4-LAN that I purchased back in May 2023 to act as a home router/firewall and general home lab. I followed the great advice in this thread regarding BIOS settings (my original post is on page 5 of 150!!) and also applied new thermal paste and a thermal pad between the copper plate and the case. I have been running Proxmox on it with an OPNsense VM, a Home Assistant VM, an MQTT LXC, a Zigbee2MQTT LXC, a MariaDB LXC (used for recording Home Assistant sensor data), a Docker LXC running Frigate, a TP-Link Omada LXC and a UniFi controller LXC.

In terms of hardware, I started with 16GB of RAM and a 500GB NVMe, but I have since upgraded to 32GB of RAM. I also added a Coral M.2 Accelerator (A+E key) in the "wifi" M.2 slot for Frigate, plus a Sonoff USB Zigbee dongle. I am still using the power adapter supplied with the unit. I have also added a USB-powered 120mm fan that just sits on top blowing air at the fins. The fan has 3 speeds and I run it at the lowest; it keeps the case nice and cool, whereas without the fan the case would be warm to the touch.

For OPNsense I passed through 3 of the 4 NICs. The original idea was to have one for the WAN and two for the LAN in a LAGG; I have since removed the LAGG, and the spare NIC is now a backup WAN using a 4G LTE modem (it was actually the primary WAN for a couple of months while I had issues with my fibre connection). The LAN is VLAN-aware and I have several VLANs for various things. The 4th NIC is used for the Proxmox virtual bridge that the VMs and LXCs use. It is also VLAN-aware, so I can specify the VLAN for each VM/LXC; in fact the Frigate LXC has two virtual NICs, one in the camera VLAN and the other in an NVR VLAN. I also run WireGuard on OPNsense to connect to my network remotely.
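For anyone replicating this, the VLAN-aware bridge part might look something like the sketch below in /etc/network/interfaces on the Proxmox host (the interface name enp4s0, the address and the VLAN range are all assumptions; the other three NICs are passed through to the OPNsense VM, so they don't appear in any bridge):

```
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports enp4s0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
```

Each VM/LXC then gets its VLAN via the tag field on its virtual NIC in the Proxmox GUI, which is how a single LXC like Frigate can sit in two VLANs with two virtual NICs.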

So what have I learned? The uptime is excellent: I've had maybe a couple of unexplained shutdowns in two and a half years (I didn't find anything obvious in the logs). Checking the temps just now (running in its steady state, though the Frigate, UniFi and Omada LXCs are shut down):

Code:
proxmox:~# sensors
coretemp-isa-0000
Adapter: ISA adapter
Package id 0:  +26.0°C  (high = +105.0°C, crit = +105.0°C)
Core 0:        +23.0°C  (high = +105.0°C, crit = +105.0°C)
Core 1:        +23.0°C  (high = +105.0°C, crit = +105.0°C)
Core 2:        +24.0°C  (high = +105.0°C, crit = +105.0°C)
Core 3:        +24.0°C  (high = +105.0°C, crit = +105.0°C)

acpitz-acpi-0
Adapter: ACPI interface
temp1:        +27.8°C 

nvme-pci-0100
Adapter: PCI adapter

Composite:    +41.9°C  (low  =  -0.1°C, high = +84.8°C)
                       (crit = +94.8°C)
Sensor 1:     +41.9°C  (low  = -273.1°C, high = +65261.8°C)
Sensor 2:     +53.9°C  (low  = -273.1°C, high = +65261.8°C)
Sensor 8:     +41.9°C  (low  = -273.1°C, high = +65261.8°C)
The one alarming thing, which I only discovered recently, is that my SSD is wearing out fast:

Code:
=== START OF SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

SMART/Health Information (NVMe Log 0x02)
Critical Warning:                   0x00
Temperature:                        42 Celsius
Available Spare:                    100%
Available Spare Threshold:          5%
Percentage Used:                    66%
Data Units Read:                    3,366,490 [1.72 TB]
Data Units Written:                 122,309,744 [62.6 TB]
Host Read Commands:                 39,195,222
Host Write Commands:                4,389,310,297
Controller Busy Time:               1,565
Power Cycles:                       44
Power On Hours:                     21,211
Unsafe Shutdowns:                   13
Media and Data Integrity Errors:    0
Error Information Log Entries:      146
Warning  Comp. Temperature Time:    0
Critical Comp. Temperature Time:    0
Temperature Sensor 1:               42 Celsius
Temperature Sensor 2:               54 Celsius
Temperature Sensor 8:               42 Celsius
66% wear. Now, this is partly because this drive (a Crucial P3) has a very low TBW rating, but also because I set up Proxmox using ZFS, and VMs doing any logging really increase the write activity. When I installed Proxmox I turned off the cluster services that are often blamed for SSD wear on ZFS, but my VMs were still logging to disk. I have taken several actions:
  • I changed the journald config on all my LXCs and VMs to use volatile storage (i.e. log to RAM) with a max size of 32MB, and not to forward to syslog (on some LXCs this was enabled by default). I know I'll lose the logs when rebooting, but that's OK for me.
  • I shut down the Omada and Unifi controllers that store statistics in a mongodb database. I never looked at the statistics anyway and I can easily fire up the LXCs if I need to make a config change to the network.
  • Set atime=off on the zfs pool
  • Set the recordsize=16K on the zfs volume/pool for MariaDB
This has significantly reduced the disk writes, so the SSD should still last a while. The Home Assistant VM is still responsible for a lot of writes; unfortunately I haven't been able to change the journald config there, because the config is write-protected in HAOS (/etc is mounted read-only). It would also have been good not to get an SSD with such a low TBW rating.
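The journald change described above can be sketched as a drop-in file (a sketch only: the ROOT staging variable exists just so you can dry-run it somewhere other than the real /etc, and the ZFS pool/dataset names are assumptions):

```shell
# Stage the journald drop-in; set ROOT="" to write to the real /etc.
ROOT="${ROOT:-/tmp/journald-staging}"
mkdir -p "$ROOT/etc/systemd/journald.conf.d"
cat > "$ROOT/etc/systemd/journald.conf.d/volatile.conf" <<'EOF'
[Journal]
Storage=volatile
RuntimeMaxUse=32M
ForwardToSyslog=no
EOF
# On a real system, follow up with: systemctl restart systemd-journald
# And on the Proxmox host, the ZFS tweaks (pool/dataset names assumed):
#   zfs set atime=off rpool
#   zfs set recordsize=16K rpool/data/mariadb
```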

Since my SSD wear discovery I have been thinking about what would happen if this little box were to die. Obviously it could be replaced, but that could take quite a while, and losing OPNsense and all the VLANs would cause quite a headache. Losing Home Assistant would be annoying, but everything should still be usable, so that is less of an issue.

So that's the long story of why I'm really posting. I have ordered another box direct from CWWK. This time it will be an N150, and I think the motherboard has changed (it should be the CW-AL-4L-v2.0 now) to include some more USB ports. I also asked them to include the adapter board that splits the NVMe x4 slot into four NVMe x1 slots. I'm not sure if I'll use it or not, but better to have it just in case. The plan for this box is again to run Proxmox, with a second OPNsense VM in HA mode using CARP, and also a backup solution, probably PBS (Proxmox Backup Server). It's unlikely I will run the other containers here, unless the existing node dies, in which case I would restore them from PBS.

I still have the 16GB DDR5 stick that I started with, but I would like some advice on how to lay out my storage. Options/questions in my mind are:
  • Get a single NVMe drive, say 1-2TB, and put everything on there.
  • Get two drives: one for the main Proxmox OS and any VMs/LXCs, and the second (using the second NVMe x1 slot) exclusively for the PBS storage.
  • Go big and use the adapter board to have up to 5 drives: probably one for the main OS (not on the adapter board), then maybe a 2-drive mirror for PBS storage and 1 or 2 drives for VM storage. I would worry about the stability of such a setup using the adapter board, especially if all 4 slots are filled.
  • Then there is the question of whether to use ZFS again or just stick to ext4. I don't know if I'd lose snapshot functionality, for example, and how this would affect PBS.
  • For ZFS on Proxmox the recommendation is usually to get used enterprise drives, but other than some Micron M.2 drives, that's not really an option in these mini PCs. Regardless of ZFS or not, I will get SSDs with higher TBW ratings, probably a Lexar NM790 or WD Red SN700.
  • Another option would be a 2.5-inch SATA SSD or HDD, but I don't see the point.

Any thoughts on the choice of ZFS vs ext4 and the general drive layout would be greatly appreciated.
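For the two-drive-mirror option, the provisioning might look roughly like this (a sketch only: the device names, the pool name pbspool and the datastore name backups are all assumptions, and it assumes PBS is installed alongside Proxmox on the same host):

```shell
# Mirror the two adapter-board drives for PBS storage.
zpool create -o ashift=12 pbspool mirror /dev/disk/by-id/nvme-DRIVE1 /dev/disk/by-id/nvme-DRIVE2
zfs set compression=lz4 pbspool

# Register the pool's mountpoint as a PBS datastore.
proxmox-backup-manager datastore create backups /pbspool
```

On the ZFS-vs-ext4 question, one point worth noting: Proxmox can only snapshot raw VM disks on storage that supports it (ZFS, LVM-thin); on a plain ext4 directory you would need qcow2 images to keep snapshot support.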
 

TrevorH

Member
Oct 25, 2024
70
33
18
The 66% figure is a vendor estimate of drive life used, but the TBW math says you're not quite that far gone: a Crucial P3 500GB is rated for 110 TBW before its warranty expires, and yours has written 62.6 TB, which is ~57%. Still pretty high, but not as bad as you thought it was.
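The TBW arithmetic can be checked directly from the smartctl output above (NVMe "Data Units" are 1000 × 512-byte units; the 110 TBW rating is from Crucial's P3 500GB spec sheet):

```shell
# 122,309,744 data units written, taken from the SMART output above.
DATA_UNITS_WRITTEN=122309744
RATED_TBW=110
awk -v u="$DATA_UNITS_WRITTEN" -v tbw="$RATED_TBW" \
    'BEGIN { tb = u * 512000 / 1e12;
             printf "%.1f TB written = %.0f%% of %d TBW\n", tb, 100 * tb / tbw, tbw }'
# prints: 62.6 TB written = 57% of 110 TBW
```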
 