NEW! Topton 10Gb 2xSFP+ 4x2.5Gb i5-1240P


EncryptedUsername

New Member
Feb 1, 2024
15
18
3
The "x8 seat (x4 signal)" refers to the physical slot being an x8 one, but with only the pins for 4 lanes + power actually connected. Basically it's a x4 slot that physically fits x8 cards.
It was almost perfect, but then they halved the number of lanes to the network card. Shame!

All said and done, this will still do the trick for me. With my upstream and downstream links being the limiting factor, the most I need to push through this box as a firewall is 2 x 3 = 6 Gbps. (I misspoke earlier when I mentioned I only needed one port; I do in fact need two.) I also plan to benchmark Proxmox on it, because if the testing works out I'll likely get a second one to run as a hypervisor with a 10 Gbps VLAN trunk. Virtualization will no doubt drop the speed down a notch.

Based on everything that came to light, if you need 20 Gbps or higher throughput, this isn't the device for you. It sounds like it will be locked to the sub-14 Gbps cap mentioned in Dual 10Gbit network using PCI 2.0 (5GT/s) x4 – what is the maximum bandwidth? | Any IT here? Help Me!. The theoretical PCIe 2.0 x4 limit is only 500 MB/s per lane = 2000 MB/s total, or 16 Gbps. Come on Topton. Do better.
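For anyone who wants to sanity-check that number, a quick bit of shell arithmetic (ignoring PCIe protocol overhead) gets you there:
Code:
> echo "$((500 * 4 * 8)) Mbps"
16000 Mbps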
 
  • Like
Reactions: Stovar

blunden

Active Member
Nov 29, 2019
500
161
43
It sounds like we are still a good year or two away from getting a mini router that can fully handle routing, NAS and 10G duties, but that's just a guess.

Would still be interesting to see some iperf benchmarks, though.

Do you think it's better to get a micro-ATX motherboard with a few x16 and x4 PCIe slots and add-on NIC cards, maybe something like an Intel Core T-series processor, and just build a more powerful router?
I'd say we are fairly close already. If this wasn't limited by PCI-E lanes it could probably manage the routing part of it, at least with 1500 byte packets, as long as you don't do a bunch of traffic inspection/blocking (IDS/IPS) etc.

Building a small PC yourself is always an option, but you'll likely need active cooling and see higher power consumption.

I'm putting my hopes in my Qotom box based on the Atom C3758 being fast enough for my multi-gig routing needs later this year. I don't expect that I'll actually get full 10G internet though since I'm basically just allowed to use whatever uplink capacity is left over at any given time above my usual 1G, so I expect it to fluctuate quite a bit.

It was almost perfect, but then they halved the number of lanes to the network card. Shame!

All said and done, this will still do the trick for me. With my upstream and downstream links being the limiting factor, the most I need to push through this box as a firewall is 2 x 3 = 6 Gbps. (I misspoke earlier when I mentioned I only needed one port; I do in fact need two.) I also plan to benchmark Proxmox on it, because if the testing works out I'll likely get a second one to run as a hypervisor with a 10 Gbps VLAN trunk. Virtualization will no doubt drop the speed down a notch.

Based on everything that came to light, if you need 20 Gbps or higher throughput, this isn't the device for you. It sounds like it will be locked to the sub-14 Gbps cap mentioned in Dual 10Gbit network using PCI 2.0 (5GT/s) x4 – what is the maximum bandwidth? | Any IT here? Help Me!. The theoretical PCIe 2.0 x4 limit is only 500 MB/s per lane = 2000 MB/s total, or 16 Gbps. Come on Topton. Do better.
They probably wanted to catch the ESXi crowd as well, given VMware's limited hardware support. That meant using an Intel NIC, and on the Intel side you need relatively "new" NICs to get one with PCI-E 3.0, compared to Mellanox etc. Otherwise they could've used the same Mellanox ConnectX-3 OCP cards that GoWin used in their initial R86S boxes. Those are supposedly really solid and also widely supported by everything except recent ESXi versions.
 
Last edited:

flipper203

New Member
Feb 6, 2024
22
12
3
For the moment, is there no box that can manage two 10G SFP+ ports correctly? Can only a custom build do that?
 

blunden

Active Member
Nov 29, 2019
500
161
43
For the moment, is there no box that can manage two 10G SFP+ ports correctly? Can only a custom build do that?
Depends on what you mean by that? Routing or just not having PCI-E bottlenecked NICs?

The Qotom Q20332G9-S10/Q20331G9-S10 get pretty close, and the GoWin R86S (only the ones with Mellanox NICs) should be fine too, I think.
 

zer0sum

Well-Known Member
Mar 8, 2013
850
475
63
If you're just using it as a small server (or as a 2.5G router with a 10G connection to a NAS or something to improve multi-client NAS throughput) and therefore only need one of the ports, sure. :)

Most people are likely to run these as 10G routers though, which is also what they tend to be marketed as. Then both ports need to run at full speed, which this is far from managing at PCI-E 2.0. It also remains a design flaw on an Alder Lake-N platform, where you can't afford to waste PCI-E bandwidth by running something at PCI-E 2.0 speeds.
See my reply above. I just assumed you'd use it as a router, which is what they market it as, and also what most people here presumably want to use it as. :)

The "x8 seat (x4 signal)" refers to the physical slot being an x8 one, but with only the pins for 4 lanes + power actually connected. Basically it's a x4 slot that physically fits x8 cards.
It still depends: is this x8/x4 slot they mention actually a PCIe 3.0 or 4.0 slot? If so, then you're fine, because those have enough bandwidth even at x4.
It's really only an issue if it's PCIe 2.0 x4.
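Whoever ends up with one of these could confirm that easily, since the negotiated PCIe generation and lane count show up in the NIC's PCIe capability. A rough sketch, assuming the 82599 shows up as ix0 on pfSense/FreeBSD (the Linux PCI address is a placeholder):
Code:
# pfSense/FreeBSD: look for the "link xN speed X.X" line
> pciconf -lcv ix0 | grep -i link
# Linux: compare LnkCap (what the card supports) with LnkSta (what was negotiated)
> lspci -vv -s <pci address> | grep -iE "lnkcap|lnksta"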
 

flipper203

New Member
Feb 6, 2024
22
12
3
OK, but the MS-01 would maybe be overkill! My goal is to run OPNsense (with IPS/IDS), with the WAN being a 1Gb connection for the moment, but I want to be future-proof: 10Gb to drive a switch with some equipment, plus a Proxmox server on a 10Gb connection.
The main objective is to have a computer that can later manage a faster internet connection like 5Gb. Any hints?
 

blunden

Active Member
Nov 29, 2019
500
161
43
It still depends: is this x8/x4 slot they mention actually a PCIe 3.0 or 4.0 slot? If so, then you're fine, because those have enough bandwidth even at x4.
It's really only an issue if it's PCIe 2.0 x4.
We were always talking about the built-in Intel-based NIC here, where the chip itself only supports PCI-E 2.0. That means any lanes you allocate to it will provide at most PCI-E 2.0 bandwidth per lane, regardless of whether the CPU or motherboard chipset supports later PCI-E versions. The platform could provide 4 PCI-E 5.0 lanes and the NIC would still be bottlenecked.

If you are talking about the extra PCI-E slot, then sure, you could put in a different NIC using a more modern PCI-E generation and be totally fine. I don't know if you can actually access the ports on that NIC in this chassis though, since I was under the impression that there is no cutout in the case for it.

If you were to replace the built-in NIC, you'd probably find that the SFP+ ports don't line up with the holes in the chassis, since those are not standardized between different brands.

NOTE: My understanding is that not all of the models assign the full 8 lanes the 10G NIC wants. The N305 certainly can't, since it only has 9 lanes in total and needs lanes for the SSDs, the other NICs and any other I/O that isn't already built into the chipset. I don't know whether that applies to all models, or whether they designed completely different motherboards with different lane counts; in that case, some of them might allocate the full 8 lanes.
 
Last edited:
  • Like
Reactions: Stovar

EncryptedUsername

New Member
Feb 1, 2024
15
18
3
Interesting PCIe nuances being discussed here. The built-in NICs on this box are Intel i226-V, which are part of the main board. They should have PCIe 3.1 interfaces according to Intel's spec page. The Topton spec page says the NVMe slot is PCIe 3.0.
So the mainboard is likely PCIe 3.0/3.1 at a minimum.

Depending on which add-on board you get for the extra ports, you either get more i226-V ports (PCIe 3.1) or SFP+ 82599 ports (PCIe 2.0).

So when you plug a PCIe 2.0 x8 card into a PCIe 3.0 x4 slot, how much bandwidth does the card actually get? My gut tells me the add-on card's own interface sets the limit, no matter what kind of PCIe interface it plugs into. So because there are only 4 signal lanes on the x8 slot, the add-on card can only get PCIe 2.0 x4, capping bandwidth at 16 Gbps (500 MB/s * 4 lanes * 8 bits).

Ironically, if you go with the i226-V option, the four extra 2.5G ports theoretically get 31.52 Gbps of PCIe bandwidth (985 MB/s * 4 lanes * 8 bits), more than the SFP+ option gets.

It's also not known if the PCIe lanes are shared between the built-in/add-on ports, so who knows how much you really get. I suppose that's what we need benchmarks for o_O

It always bugs me that in their reviews, the STH guys don't try to fully max every port simultaneously (as well as individual ports) on one of these devices to see how well it actually performs, i.e. its max throughput. Seems like it would be a very helpful standardized test.
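A rough version of that test is easy enough to script with iperf3: one flow per port under test, all routed through the device at the same time. Something like the sketch below, where the addresses and ports are made up for illustration:
Code:
# on the hosts behind the device under test: one listener per flow
> iperf3 -s -p 5201 &
> iperf3 -s -p 5202 &
# from hosts on the other side: push traffic through every port at once,
# then add up the [SUM] lines for the aggregate throughput
> iperf3 -c 10.0.10.2 -p 5201 -P 4 -t 60 &
> iperf3 -c 10.0.20.2 -p 5202 -P 4 -t 60 &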
 

blunden

Active Member
Nov 29, 2019
500
161
43
So when you plug a PCIe 2.0 x8 card into a PCIe 3.0 x4 slot, how much bandwidth does the card actually get? My gut tells me the add-on card's own interface sets the limit, no matter what kind of PCIe interface it plugs into. So because there are only 4 signal lanes on the x8 slot, the add-on card can only get PCIe 2.0 x4, capping bandwidth at 16 Gbps (500 MB/s * 4 lanes * 8 bits).

It always bugs me that in their reviews, the STH guys don't try to fully max every port simultaneously (as well as individual ports) on one of these devices to see how well it actually performs, i.e. its max throughput. Seems like it would be a very helpful standardized test.
Yes, the number of lanes on the card is the (physical) limit. Those lanes then run at the lower of the speeds supported by the card and the slot.

Agreed.
 

adman_c

Active Member
Feb 14, 2016
271
144
43
Chicago
We were always talking about the built-in Intel-based NIC here, where the chip itself only supports PCI-E 2.0. That means any lanes you allocate to it will provide at most PCI-E 2.0 bandwidth per lane, regardless of whether the CPU or motherboard chipset supports later PCI-E versions. The platform could provide 4 PCI-E 5.0 lanes and the NIC would still be bottlenecked.

If you are talking about the extra PCI-E slot, then sure, you could put in a different NIC using a more modern PCI-E generation and be totally fine. I don't know if you can actually access the ports on that NIC in this chassis though, since I was under the impression that there is no cutout in the case for it.

If you were to replace the built-in NIC, you'd probably find that the SFP+ ports don't line up with the holes in the chassis, since those are not standardized between different brands.

NOTE: My understanding is that not all of the models assign the full 8 lanes the 10G NIC wants. The N305 certainly can't, since it only has 9 lanes in total and needs lanes for the SSDs, the other NICs and any other I/O that isn't already built into the chipset. I don't know whether that applies to all models, or whether they designed completely different motherboards with different lane counts; in that case, some of them might allocate the full 8 lanes.
It is too bad that the PCIE slot on these boards is limited to 4 lanes, since the CPUs they actually put in the machines (8508, U300E, and 1240P) all have 20 PCIE lanes, rather than the 9 in the N-series CPUs. I wonder if these boards were originally designed for N-series CPUs and they later switched it up. Maybe a future revision will fix this, as that would make these pretty incredible 10 gig routers.
 
  • Like
Reactions: Stovar

blunden

Active Member
Nov 29, 2019
500
161
43
It is too bad that the PCIE slot on these boards is limited to 4 lanes, since the CPUs they actually put in the machines (8508, U300E, and 1240P) all have 20 PCIE lanes, rather than the 9 in the N-series CPUs. I wonder if these boards were originally designed for N-series CPUs and they later switched it up. Maybe a future revision will fix this, as that would make these pretty incredible 10 gig routers.
Like I said, I'm not absolutely certain that they don't assign more lanes on those other models. It's possible that they do so after all. The N-series definitely won't have them though.

The best thing would probably be to ask the seller directly. I'd love to be proven wrong on this if it means people get a better product. :)
 

EncryptedUsername

New Member
Feb 1, 2024
15
18
3
Well I am quite happy to post this update. My Topton box with the 1240P processor and 10G SFP+ ports arrived today.

TLDR: The device can push 20 Gbps (10 in and 10 out) with packet inspection enabled.

My setup: two PCs, both with 82599 (X520) cards, connected by fiber to the Topton running pfSense 2.7.2 bare metal. The WAN interface is one of the 2.5G ports.

pciconf shows the PCIe link speeds for ix0 and ix1 as follows:
Code:
> pciconf -lcv ix0
ix0@pci0:4:0:0: class=0x020000 rev=0x01 hdr=0x00 vendor=0x8086 device=0x10fb subvendor=0xffff subdevice=0xffff
    vendor     = 'Intel Corporation'
    device     = '82599ES 10-Gigabit SFI/SFP+ Network Connection'
    class      = network
    subclass   = ethernet
    cap 01[40] = powerspec 3  supports D0 D3  current D0
    cap 05[50] = MSI supports 1 message, 64 bit, vector masks
    cap 11[70] = MSI-X supports 64 messages, enabled
                 Table in map 0x20[0x0], PBA in map 0x20[0x2000]
    cap 10[a0] = PCI-Express 2 endpoint max data 256(512) FLR RO NS
                 max read 512
                 link x4(x8) speed 5.0(5.0) ASPM disabled(L0s)
    cap 03[e0] = VPD
    ecap 0001[100] = AER 1 0 fatal 0 non-fatal 1 corrected
    ecap 0003[140] = Serial 1 d42000ffffb1d8b3
    ecap 000e[150] = ARI 1
    ecap 0010[160] = SR-IOV 1 IOV disabled, Memory Space disabled, ARI disabled
                     0 VFs configured out of 64 supported
                     First VF RID Offset 0x0180, VF RID Stride 0x0002
                     VF Device ID 0x10ed
                     Page Sizes: 4096 (enabled), 8192, 65536, 262144, 1048576, 4194304
> pciconf -lcv ix1
ix1@pci0:4:0:1: class=0x020000 rev=0x01 hdr=0x00 vendor=0x8086 device=0x10fb subvendor=0xffff subdevice=0xffff
    vendor     = 'Intel Corporation'
    device     = '82599ES 10-Gigabit SFI/SFP+ Network Connection'
    class      = network
    subclass   = ethernet
    cap 01[40] = powerspec 3  supports D0 D3  current D0
    cap 05[50] = MSI supports 1 message, 64 bit, vector masks
    cap 11[70] = MSI-X supports 64 messages, enabled
                 Table in map 0x20[0x0], PBA in map 0x20[0x2000]
    cap 10[a0] = PCI-Express 2 endpoint max data 256(512) FLR RO NS
                 max read 512
                 link x4(x8) speed 5.0(5.0) ASPM disabled(L0s)
    cap 03[e0] = VPD
    ecap 0001[100] = AER 1 0 fatal 0 non-fatal 1 corrected
    ecap 0003[140] = Serial 1 d42000ffffb1d8b3
    ecap 000e[150] = ARI 1
    ecap 0010[160] = SR-IOV 1 IOV disabled, Memory Space disabled, ARI disabled
                     0 VFs configured out of 64 supported
                     First VF RID Offset 0x0180, VF RID Stride 0x0002
                     VF Device ID 0x10ed
                     Page Sizes: 4096 (enabled), 8192, 65536, 262144, 1048576, 4194304
So they both report a PCIe 2.0 x4 link.

iperf3 shows this directly to the firewall (I used 10 parallel streams: -P 10):
Code:
[SUM]   0.00-10.00  sec  11.0 GBytes  9.43 Gbits/sec                  sender
[SUM]   0.00-10.00  sec  11.0 GBytes  9.43 Gbits/sec                  receiver
iperf3 shows this between PC1 and PC2 (routed through the firewall and inspected by Suricata; Suricata is detecting things, so I know it's inspecting):
Code:
[SUM]   0.00-90.59  sec   100 GBytes  9.48 Gbits/sec                  sender
[SUM]   0.00-90.59  sec   100 GBytes  9.48 Gbits/sec                  receiver
And the pfSense Traffic graphs look like this:
[Attached screenshot: pfSense traffic graphs]

I will include this in my eventual review on the Ali-Express page so that other folks will know. So that is that! Very happy with these results. On the weekend I will try virtualizing pfSense on top of Proxmox and see how that goes.

P.S. I used these SFP+ modules coded for Intel, and they just worked with 10Gtek fiber patch cables (LC to LC, OM3).
 

blunden

Active Member
Nov 29, 2019
500
161
43
Well I am quite happy to post this update. My Topton box with the 1240P processor and 10G SFP+ ports arrived today.

TLDR: The device can push 20 Gbps (10 in and 10 out) with packet inspection enabled.
Awesome! So they properly designed the motherboard to make use of the extra lanes those CPUs have over the Alder Lake-N models. I'm very happy to be wrong on that point. :D
 
  • Like
Reactions: thepsyborg

EncryptedUsername

New Member
Feb 1, 2024
15
18
3
On the weekend I will try virtualizing pfSense on top of Proxmox and see how that goes.
With a virtualized copy of pfSense 2.7.2 running in Proxmox 8.1.4, the performance testing results were:
  • Virtualized NICs (virtio) linked to the 10G ports were all over the map. The best I could do was around 8 Gbps, but it was inconsistent as heck, though always above 6 Gbps. I only had one other VM running on the hypervisor, which was just a copy of Ubuntu doing nothing. Depending on your use case, that bandwidth might be enough.
  • Passthrough: The device supports IOMMU passthrough. You do have to pass the full SFP+ PCIe card through (it's not possible to pass the 10G ports through individually). In this configuration, I got 9.45 Gbps consistently from one port to the other, with Suricata inspection. You are also able to pass through each of the i226-V 2.5G ports individually, so that is flexible. So the virtual pfSense instance is up to the task with raw access to the NICs (a rough sketch of that kind of passthrough config is below).
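For anyone who wants to replicate the passthrough setup, the Proxmox side is roughly the following; the VM ID and PCI address are placeholders that will differ on your box, and IOMMU/VT-d has to be enabled in the BIOS:
Code:
# find the 82599 SFP+ card's address on the Proxmox host
> lspci | grep -i 82599
# pass the whole card (both ports) through to the pfSense VM
# (VM 100 is a placeholder; pcie=1 needs the q35 machine type)
> qm set 100 --hostpci0 04:00,pcie=1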
 
  • Like
Reactions: Stovar

PigLover

Moderator
Jan 26, 2011
3,188
1,548
113
With a virtualized copy of pfSense 2.7.2 running in Proxmox 8.1.4, the performance testing results were:
  • Virtualized NICs (virtio) linked to the 10G ports were all over the map. The best I could do was around 8 Gbps, but it was inconsistent as heck, though always above 6 Gbps. I only had one other VM running on the hypervisor, which was just a copy of Ubuntu doing nothing. Depending on your use case, that bandwidth might be enough.
  • Passthrough: The device supports IOMMU passthrough. You do have to pass the full SFP+ PCIe card through (it's not possible to pass the 10G ports through individually). In this configuration, I got 9.45 Gbps consistently from one port to the other, with Suricata inspection. You are also able to pass through each of the i226-V 2.5G ports individually, so that is flexible. So the virtual pfSense instance is up to the task with raw access to the NICs.
That's not a really surprising result. Virtio has a LOT of overhead and makes a lot of calls that cross the kernel/userspace boundary (read: slow). Even though recent optimizations have made it fairly stable and reasonably capable of sustaining 1+ Gbps speeds, it still really breaks down on lower-end modern processors when trying to do >2.5 Gbps. It gets even worse if your traffic mix skews towards small packets, e.g. in VoIP networks. Even with a more capable CPU that might handle virtio at 10 Gbps, it really is a waste of CPU cycles (and power/heat) to do so.

For 10 Gbps you really need to do passthrough, either of the whole PCIe card as you describe here, or by setting up SR-IOV and passing through VFs off of the card. The SR-IOV approach lets you pass VFs through to multiple VMs using the same NIC, including the option of configuring a VF on the hypervisor host so that you can share the same NIC between your router instance (pfSense) and the host. The NICs in this box should handle SR-IOV without problems unless they crippled the BIOS. It just takes a bit of configuration.
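For reference, creating the VFs on the Proxmox host is usually just a sysfs write; the interface name below is only an example, and the pciconf output earlier shows the 82599 supporting up to 64 VFs per port:
Code:
# create 4 VFs on the first SFP+ port (interface name will differ)
> echo 4 > /sys/class/net/enp4s0f0/device/sriov_numvfs
# the VFs appear as extra PCI functions that can be passed through to VMs
> lspci | grep -i "virtual function"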
 
Last edited:
  • Like
Reactions: EncryptedUsername

EncryptedUsername

New Member
Feb 1, 2024
15
18
3
Sadly, it appears that pfSense (a.k.a. FreeBSD) does not support SR-IOV VF interfaces; they fail to show up in the guest OS.
There are more than a few posts out there on it. I found one that claimed adding a line to the /boot/loader.conf.local file ( hw.pci.honor_msi_blacklist=0 ) would magically make it work, but it seems to no longer work in FreeBSD 14/pfSense 2.7.2. At least not with my hardware.
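For anyone who wants to try that tunable anyway, it is a boot-time setting that goes into the file as a single line and only takes effect after a reboot (it made no difference on my hardware):
Code:
# /boot/loader.conf.local (inside the pfSense VM)
hw.pci.honor_msi_blacklist="0"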