NEW! Topton 10Gb 2xSFP+ 4x2.5Gb i5-1240P

The cards, even though they're Intel, are actually not that compatible... I ran into problems with pfSense not wanting to recognize the card/Dell EMC transceiver combination, and ended up getting a DAC, which worked.
On a 2nd Topton, same story... I ended up getting an SFP+ transceiver from a local supplier, which worked perfectly. So I'm using the Dell EMC transceivers I got on my UniFi switch side and having to get transceivers for the mini PC side.
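For what it's worth, before buying new optics it may be worth trying the "unsupported SFP" loader tunables that get posted on the Netgate forum for the Intel ix driver. I can't vouch for the exact tunable names on every pfSense version, so verify them before relying on this:
Code:
# /boot/loader.conf.local  (tunable names as commonly posted for the ix driver; verify for your pfSense version)
hw.ix.unsupported_sfp="1"
hw.ix.allow_unsupported_sfp="1"

Loader tunables only take effect after a reboot.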
G
 
  • Like
Reactions: Stovar

athurdent

Member
Jul 6, 2023
57
53
18
Got a U300E yesterday. May need to send it back. It's not capable of sustaining 10G iperf3 traffic. CPU always slows down significantly after a few seconds.
Starts fine:
Screenshot 2024-11-12 at 14.07.21.png

And shortly afterwards degrades:
Screenshot 2024-11-12 at 14.07.35.png

HUNSN support recommended turning off C-states, which did not help.
I tried pretty much every BIOS option I could think of that might be related. Set the fan to full speed and put a high-speed fan outside the case. Made sure Proxmox was using the correct governor (performance).
Tried pure Debian Bookworm, and tried the 10G adapter directly on the Proxmox host, but it all looks like the screenshots above, same as with the 10G NIC passed through to a VM.
Guess the CPU was a bad choice?
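In case anyone wants to compare notes, this is roughly what I was watching while iperf3 ran. The sysfs paths are the standard Linux ones; the exact throttle counters may differ by kernel:
Code:
# governor should read "performance" on every core
cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

# watch the actual clocks while the iperf3 test runs
watch -n1 "grep 'cpu MHz' /proc/cpuinfo"

# Intel thermal-throttle event counters (non-zero and rising = thermal throttling)
grep . /sys/devices/system/cpu/cpu*/thermal_throttle/*throttle_count

If the clocks drop while the throttle counters stay at zero, that might point at a power limit (PL1/PL2) rather than cooling.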
 

Muetze

New Member
Dec 9, 2024
1
1
3
Hi,

TLDR: The device can push 20 Gbps (10 in and 10 out) with packet inspection enabled.
This is not entirely true! It should say that the device can push 10G simplex, not duplex!

link x4(x8) speed 5.0(5.0) ASPM disabled(L0s)
I've also marked it visually in this quote from your post: cwwk-quote.png

As you can see here, the device's link is capable of x8 but was negotiated down to x4; the (x8) indicates this. I also verified that on my Linux machine:

cwwk2.png
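For anyone who wants to run the same check on Linux, this is roughly the command behind that screenshot; the PCI address is just a placeholder, look yours up first:
Code:
# find the NIC's address
lspci | grep -i ethernet

# compare the card's maximum (LnkCap) against the negotiated (LnkSta) link; run as root
lspci -s 02:00.0 -vv | grep -i "LnkSta\|LnkCap"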

And the pfSense Traffic graphs look like this:
1709313282251.png
As mentioned, you're pushing 10G in one direction (simplex); duplex (10G up, 10G down) at the same time is not possible. To prove this, I started iperf3 clients and servers on two machines in different networks, so the packets have to be routed.

1) I start the client process on PC1 and ~10G is fully utilized (simplex).
2) After 10 seconds, I start the client process on PC2. We now have full-duplex traffic and the rates drop to something like 6.25G on both tests.
3) The test on PC1 is then terminated, so we have simplex traffic from PC2 only, and the rate increases again to ~10G.

cwwk.png
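If anyone wants to reproduce the pattern, the setup is just two iperf3 pairs crossing the router in opposite directions; the addresses below are placeholders for the two test machines:
Code:
# each test machine also runs a server:  iperf3 -s

# step 1: PC1 (network A) pushes to a host in network B -> simplex through the router
iperf3 -c 192.168.2.10 -t 60

# step 2: ~10 s later, PC2 (network B) pushes back towards network A -> duplex
iperf3 -c 192.168.1.10 -t 60

# step 3: stop the PC1 test and watch PC2's rate climb back to ~10G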
 
  • Like
Reactions: Stovar

bugacha

Active Member
Sep 21, 2024
395
107
43
I've had the i5-1240 model with 2 SFP+ for about a year and two months and it had been working flawlessly.

Sadly, it stopped powering on after about a month of being switched off.

I already checked and tried a different PSU, no luck.
 
Last edited:

EncryptedUsername

New Member
Feb 1, 2024
22
21
3
This is not entirely true! It should say that the device can push 10G simplex, not duplex!
Thanks for the clarification; my wording was perhaps not the clearest. My goal was to achieve the full 10 Gbps through the device from NIC to NIC, which is met. You are correct that a transfer in both directions simultaneously does not achieve 10 Gbps on both transfers; the PCIe link is the limit.
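Rough numbers, assuming both SFP+ ports hang off that same downgraded x4 Gen2 link:
Code:
5.0 GT/s x 4 lanes, after 8b/10b encoding  = 16 Gbit/s of data per PCIe direction
minus TLP/DLLP framing overhead            ~ 13 Gbit/s usable per direction
routed duplex puts ~2x10G on each direction, so each flow lands around 6-7 Gbit/s

Which lines up with the ~6.25G per direction shown in the graphs above.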
 

EncryptedUsername

New Member
Feb 1, 2024
22
21
3
Question for any owners of this Topton box who are using it for Proxmox. It's been working quite well for me with 5 guests for a year now. I've only recently been exploring Proxmox's snapshotting, backup, and restore features. I am finding that when I perform snapshots, backups, and restores, all of my guests have intermittent service outages while the operations are ongoing. I suppose this problem has existed all along, I just didn't know. I've been reading the Proxmox forums and have not had much luck finding an answer. Most posts point to I/O limitations on the SSD. So ....

My NVMe drive seems to be running at PCIe 3.0 x4, which should yield 3.938 GB/s of bandwidth and is pretty much full speed for my drive's specs:

Code:
>nvme list
Node                  Generic               SN                   Model                                    Namespace Usage                      Format           FW Rev
--------------------- --------------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1          /dev/ng0n1            TPBF2401170060203315 TEAM TM8FP6002T                          1           2.05  TB /   2.05  TB    512   B +  0 B   VC2S0390

> cat /sys/class/nvme/nvme0/device/current_link_speed
8.0 GT/s PCIe

>lspci | grep -i nvme
01:00.0 Non-Volatile memory controller: Realtek Semiconductor Co., Ltd. RTS5765DL NVMe SSD Controller (DRAM-less) (rev 01)

>lspci -s 01:00.0 -vv | grep -i "LnkSta\|LnkCap"
                LnkCap: Port #0, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
                LnkSta: Speed 8GT/s, Width x4
                LnkCap2: Supported Link Speeds: 2.5-8GT/s, Crosslink- Retimer- 2Retimers- DRS-
                LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ EqualizationPhase1+
Within Proxmox, though, the bandwidth on this drive is terrible. I have tried many variations of the test, changing numjobs and iodepth; it does not help. It should be in the GiB/s range, not MiB/s.

Code:
READ: bw=91.9MiB/s (96.4MB/s), 22.3MiB/s-23.9MiB/s (23.4MB/s-25.1MB/s), io=5514MiB (5782MB), run=60005-60009msec
Some posters are saying that switching the drive controller from AHCI to RAID in the BIOS unlocks the performance, but I haven't tried that yet - I'm not even sure that's an option on this BIOS. It requires a reboot, and I am also worried about data loss in creating a fake RAID. I have no idea how to back up the Proxmox install itself, other than a full drive image using disk imaging software. [EDIT - The only option in the BIOS is AHCI.] It's also worth noting that I am not getting any thermal warnings from the drive based on the output of:
Code:
>smartctl -a /dev/nvme0
>nvme smart-log /dev/nvme0
So getting to my actual question: what are other owners getting for their benchmarks in Proxmox? The command I am using is below; your "filename" value may differ, which you can check with lsblk.
Code:
> fio --ioengine=libaio --direct=1 --sync=1 --rw=read --bs=4K --numjobs=4 --iodepth=64 --runtime=60 --time_based --name seq_read --filename=/dev/nvme0n1
You may need to install fio:
Code:
>apt install fio
It would be helpful to determine if I have just done something wrong somehow in my build, or if this is a limitation of the box. Your benchmarks would be appreciated.
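Separately, on the intermittent outages during backups: one mitigation that keeps coming up in the Proxmox threads is throttling vzdump itself. I haven't tried it on this box yet, so treat the values below as examples only and check man vzdump; the knobs live in /etc/vzdump.conf:
Code:
# /etc/vzdump.conf  (example values, untested here)
# cap backup I/O bandwidth, in KiB/s (~200 MiB/s)
bwlimit: 204800
# lower the backup process's I/O priority (documented range 0-8, higher = lower priority)
ionice: 8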
 
Last edited:

EncryptedUsername

New Member
Feb 1, 2024
22
21
3
After much debugging and swapping of drives, I have determined that TeamGroup MP33 drives are bad performers in this system. I had two of them; they both benchmarked at sub-200 MB/s. I am now getting 3500 MB/s on the SK Hynix Gold P31 that I put in as a replacement. Who knew.

On a separate note: I find the thermals in the case to be dangerously hot for my use case, so I 3D printed a new case that has a push/pull dual 140 mm flow-through fan setup. It's bigger and louder, but that doesn't matter to me. I also outfitted the SSD with a be quiet! MC1 Pro SSD cooler. Be aware the stock case may bake your NVMe drives.
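If anyone sticks with the stock case, it's worth keeping an eye on the drive temperature during sustained writes; something like this does the job (the exact field names vary a bit between drives):
Code:
watch -n5 "nvme smart-log /dev/nvme0 | grep -i temperature"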
 

bugacha

Active Member
Sep 21, 2024
395
107
43
I've had the i5-1240 model with 2 SFP+ for about a year and two months and it had been working flawlessly.

Sadly, it stopped powering on after about a month of being switched off.

I already checked and tried a different PSU, no luck.

My Topton works again after removing the original CMOS battery. That was the reason it didn't power up. No idea how or why, but it's working now with a new battery.
 
  • Like
Reactions: Stovar

casulo

Member
Nov 30, 2022
62
22
8
CMOS batteries and power supplies are almost always crap on these Chinese mini PCs. Those are two things I always swap.
 
  • Like
Reactions: Stovar