Qotom Denverton fanless system with 4 SFP+


VivienM

New Member
Jul 7, 2024
22
6
3
Toronto, ON
I currently use IGC0 as LAN and IGC1 as VLAN/WAN. Will these two interfaces be the same on the Qotom system's I225/I226 interfaces? I'm hoping to just copy over my existing OPNsense config.
On OPNsense, the RJ45 copper ports are igc0-4 and the SFP+ ports are ix0-3.
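If the device names end up different from what your old config expects, the assignments can also be adjusted in a config backup before restoring. Very roughly, the interface-to-device mapping lives in config.xml like this (an illustrative sketch only - the exact fields and names will differ on a real install):
Code:
<!-- excerpt of an OPNsense config.xml backup (illustrative sketch) -->
<interfaces>
  <lan>
    <if>igc0</if>   <!-- change to ix0 here if LAN should move to the first SFP+ -->
    <descr>LAN</descr>
  </lan>
  <wan>
    <if>igc1</if>   <!-- WAN / VLAN parent device -->
    <descr>WAN</descr>
  </wan>
</interfaces>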
 

farmerj

New Member
Oct 24, 2023
14
5
3
This is what I had in mind for that SYS_FAN2 header on the CPU side, but I don't think it will fit. For the 1U unit, losing the PWM function isn't a big problem, so I'm also planning on the simpler USB or direct 12 V solution. I won't know what I'll do until I have the unit. But you could mount the fan directly on the side between the side cage and the PSU with a zip tie; at least that's my plan. The problem may be getting the airflow past the SFP wall. If there's no HDD involved, one 40 mm fan could be enough (mounted closer to the rear to avoid the SFP wall). Then again, adding two 40 mm fans could help cool the SFPs if using BASE-T SFPs (which I don't plan on using).
Hi - "mount the fan directly on the side between the side cage and the PSU" <-- I did this, and it didn't seem to help. I used the included foam and mounting tape for baffles.
 

koifish59

Member
Sep 30, 2020
86
24
8
Is anyone else crazy enough to do this? It took some mods, but it fits:

[Image: 18 TB 3.5" drive fitted into the 2.5" drive bay]

That's an 18 TB 3.5" drive, but it's not detected. That slot is meant for 2.5" drives, so I think only 5 V power is provided from the motherboard. I can feed it 12 V from the PSU, but any ideas on how to give it both 12 V and 5 V?
 

Arjestin

Member
Feb 26, 2024
33
5
8
Is anyone else crazy enough to do this? It took some mods, but it fits:

View attachment 40935

That's an 18 TB 3.5" drive, but it's not detected. That slot is meant for 2.5" drives, so I think only 5 V power is provided from the motherboard. I can feed it 12 V from the PSU, but any ideas on how to give it both 12 V and 5 V?
Here is the pinout diagram from the user manual:

1735324024934.png
 
  • Like
Reactions: koifish59

crusader998

New Member
Dec 27, 2024
1
0
1
Has anyone been able to get 'decent' full-duplex throughput on the 10G (SFP+) NICs on these?
I'm also seeing the same behaviour running OPNsense with 10G SFP+ modules.

It seems there is some sort of weird receive issue. For example, the Qotom can push 10G outbound with no problem using a single iperf stream.

However, the reverse direction is significantly worse with a single stream; with multiple streams I'm able to hit ~7 Gbit/s.

From my PC to the Qotom on the 10G network (using iperf3 in bidirectional mode):

Code:
[  7][RX-C]   4.00-5.00   sec  1.09 GBytes  9.33 Gbits/sec                
[  5][TX-C]   5.00-6.00   sec   119 MBytes  1.00 Gbits/sec    0   1.12 MBytes
Code:
ix0: flags=1008843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST,LOWER_UP> metric 0 mtu 1500
    description: LAN (lan)
    options=4e03f2b<RXCSUM,TXCSUM,VLAN_MTU,JUMBO_MTU,TSO4,TSO6,LRO,WOL_UCAST,WOL_MCAST,WOL_MAGIC,RXCSUM_IPV6,TXCSUM_IPV6,HWSTATS,MEXTPG>
    media: Ethernet autoselect (10Gbase-SR <full-duplex>)
    status: active
    nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
When the Qotom is receiving, I see netisr hit 100% CPU (and throughput only hits around 1 Gbit/s):

Code:
[  5]   7.00-8.00   sec   132 MBytes  1.10 Gbits/sec    0   1.30 MBytes
Code:
last pid: 88544;  load averages:  0.73,  0.55,  0.49                                                                             up 2+23:41:35  15:27:10
504 threads:   11 running, 418 sleeping, 75 waiting
CPU:  1.2% user,  0.0% nice, 10.8% system, 12.5% interrupt, 75.5% idle
Mem: 104M Active, 2933M Inact, 2490M Wired, 56K Buf, 26G Free
ARC: 1214M Total, 803M MFU, 226M MRU, 5865K Anon, 23M Header, 153M Other
     915M Compressed, 2231M Uncompressed, 2.44:1 Ratio
Swap: 16G Total, 16G Free

  PID USERNAME    PRI NICE   SIZE    RES STATE    C   TIME    WCPU COMMAND
   12 root        -56    -     0B  1088K CPU2     2  16:37 100.00% intr{swi1: netisr 2}
When I use the -R option on iperf3 (the Qotom sending to my desktop), I can hit 9.4 Gbit/s:
Code:
[  5]   9.00-10.00  sec  1.09 GBytes  9.34 Gbits/sec
Code:
last pid: 92314;  load averages:  0.63,  0.54,  0.49                                                                             up 2+23:42:11  15:27:46
504 threads:   9 running, 419 sleeping, 76 waiting
CPU:  0.1% user,  0.0% nice,  9.5% system,  6.3% interrupt, 84.0% idle
Mem: 104M Active, 2933M Inact, 2487M Wired, 56K Buf, 26G Free
ARC: 1215M Total, 803M MFU, 232M MRU, 425K Anon, 23M Header, 153M Other
     916M Compressed, 2232M Uncompressed, 2.44:1 Ratio
Swap: 16G Total, 16G Free

  PID USERNAME    PRI NICE   SIZE    RES STATE    C   TIME    WCPU COMMAND
   11 root        187 ki31     0B   128K CPU1     1  70.5H  92.82% idle{idle: cpu1}
   11 root        187 ki31     0B   128K CPU3     3  70.6H  92.04% idle{idle: cpu3}
   11 root        187 ki31     0B   128K CPU2     2  70.5H  91.56% idle{idle: cpu2}
   11 root        187 ki31     0B   128K CPU4     4  70.4H  89.28% idle{idle: cpu4}
   11 root        187 ki31     0B   128K CPU7     7  70.5H  88.73% idle{idle: cpu7}
   11 root        187 ki31     0B   128K RUN      6  70.5H  86.72% idle{idle: cpu6}
   11 root        187 ki31     0B   128K RUN      5  70.5H  85.06% idle{idle: cpu5}
90746 root         68    0    19M  7856K sbwait   5   0:05  63.03% iperf3{iperf3}
   11 root        187 ki31     0B   128K CPU0     0  69.7H  52.48% idle{idle: cpu0}
   12 root        -60    -     0B  1088K WAIT     0  15:40  47.11% intr{swi1: netisr 0}
To me it seems like there is an issue with offload / interrupts when receiving.
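For anyone who wants to reproduce or poke at this, these are roughly the iperf3 invocations behind the numbers above, plus the FreeBSD netisr loader tunables that are the usual first things to experiment with - treat the tunables as a starting point to test, not a confirmed fix for this box:
Code:
# from the desktop; <qotom-ip> is a placeholder for the router's address
iperf3 -c <qotom-ip> --bidir        # bidirectional, single stream
iperf3 -c <qotom-ip> -P 4           # multiple parallel streams toward the Qotom
iperf3 -c <qotom-ip> -R             # reverse: Qotom sends, desktop receives

# /boot/loader.conf.local (or OPNsense System > Settings > Tunables), reboot required;
# example values to experiment with, not a guaranteed fix
net.isr.maxthreads="-1"             # one netisr thread per core instead of a single thread
net.isr.bindthreads="1"             # pin netisr threads to their cores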
 

TechUnsupport

Member
Sep 29, 2024
32
11
8
My 1U 3758R just got delivered. I ran a quick test with the SODIMMs I had lying around, borrowed from various other equipment. Here are the results.
[Image: SODIMM compatibility test results]
Given the small sample pool, the only conclusion I can draw is that double-sided RAM is more likely to work. The Crucial/Micron stick that failed works fine in my other equipment (retested after it failed in the Qotom).
With this test as a baseline, I'm more willing to try a pair of TimeTec ECC RAM sticks that appear to be based on SK Hynix chips. I'll find out in a few days whether this pair of 2x16GB sticks works.

PS: if somebody feels like starting a Google spreadsheet on this, you're welcome to take this data. Considering all the equipment STH readers are running tests on, maybe somebody from STH should start it: just one file, with each tab covering a series of equipment that shares the same board design.
 

Arjestin

Member
Feb 26, 2024
33
5
8
I noticed in the manual that 2 of the SFP ports are SFP+ (10 Gb) and 2 are plain SFP (1 Gb). Is this correct?
No, that's a typo. All 4 SFP+ ports on the C3758R SoC are 10GbE Intel X553.
On Qotom's website you can find a BIOS flashing utility that allows you to configure 2 of these ports as 2.5GbE.
 

TechUnsupport

Member
Sep 29, 2024
32
11
8
Well, it's not a typo. The manual is specifically for the Q20321G9, which uses the C3558 CPU. The C3558/C3558R units only have two 10G SFP+ ports plus two 1G SFP ports that support up to 2.5G. All of this comes directly from what the CPU supports. [link]
[Image: SoC Ethernet port capabilities]
 

TechUnsupport

Member
Sep 29, 2024
32
11
8
I just wanted to see what the actual total bandwidth looks like, so I set this up in Proxmox.
I set up 8 more bridges, each with only one interface on it (excluding the one I log in through). Then I set up 8 LXC containers, each a minimal Debian 12 build with only iperf3 installed. Then I ran a direct test from each port to another, e.g. ix0 to ix1, ix2 to ix3, igc0 to igc1 and igc2 to igc3 (or the equivalent). Here is the result.
[Image: iperf3 results for each port pair]
As you can see, the total throughput is probably a little over 10 Gbit/s.
I tried setting auto-negotiation to manual; the result is pretty much the same. The CPU utilization is pretty high though, around 80%+.
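For anyone who wants to replicate the setup, this is roughly what the per-port test bridges look like in /etc/network/interfaces on Proxmox - a sketch only, with illustrative interface names and addresses (check ip link for the actual names of the X553/I226 ports on your box):
Code:
# /etc/network/interfaces - one bridge per physical port, no IP needed on the test bridges
auto vmbr1
iface vmbr1 inet manual
        bridge-ports enp2s0f0       # illustrative name for the first SFP+ (ix0 equivalent)
        bridge-stp off
        bridge-fd 0

auto vmbr2
iface vmbr2 inet manual
        bridge-ports enp2s0f1       # illustrative name for the second SFP+ (ix1 equivalent)
        bridge-stp off
        bridge-fd 0

# inside the paired containers (addresses are examples):
iperf3 -s                           # container attached to vmbr1
iperf3 -c 10.0.12.1 -t 30           # container attached to vmbr2, pointed at the first one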
 

TechUnsupport

Member
Sep 29, 2024
32
11
8
The system is C3758R based, with one 8GB stick of non-ECC RAM and a Gen 3 NVMe drive. Two of the SFP+ ports have DAC cables and the other two have SFP+ LR modules.
Total power usage with idle CPU and all 9 ports occupied (no traffic): 28 W
Total power usage with all 9 ports occupied and full traffic (previous test): 38 W
Total power usage with max CPU load (sysbench) and all 9 ports occupied (no traffic): 34 W

I suppose this could go above 50 W with dual ECC RAM sticks, a Gen 4 NVMe drive, a spinning HDD, and optics in all four SFP+ ports. The PSU on this unit only goes up to 50 W, so that could be a problem.
 

blunden

Well-Known Member
Nov 29, 2019
875
292
63
I just wanted to see what the actual total bandwidth looks like, so I set this up in Proxmox.
I set up 8 more bridges, each with only one interface on it (excluding the one I log in through). Then I set up 8 LXC containers, each a minimal Debian 12 build with only iperf3 installed. Then I ran a direct test from each port to another, e.g. ix0 to ix1, ix2 to ix3, igc0 to igc1 and igc2 to igc3 (or the equivalent). Here is the result.
View attachment 40973
As you can see, the total throughput is probably a little over 10 Gbit/s.
I tried setting auto-negotiation to manual; the result is pretty much the same. The CPU utilization is pretty high though, around 80%+.
If I read my iperf3 bidirectional results correctly, I get roughly 15 Gbit/s in a quick test against an internet speedtest server (with multiple streams), so it seems like there might be some overhead in your setup if the idea is to test total throughput. :)

This was with a 10 Gbit/s computer as the client to let the C3758 (non-R) based Qotom box act solely as a router. A local server would obviously also be a more accurate test, but I don't feel like messing with it too much now that it's my main router. :)
 

TechUnsupport

Member
Sep 29, 2024
32
11
8
Good news to report: the TimeTec 32GB (2x16GB) ECC RAM I ordered arrived. For under $80 for ECC RAM, I guess it's alright. It's going through Memtest86 right now and just passed the first round. The RAM training before the BIOS shows up takes about 2 minutes - more than one, but not more than two; I didn't measure it exactly. So if anyone wants two sticks of 16GB ECC, this should be good. The price is just a bit more than a single stick of Crucial 16GB ECC. Of course, if you want to leave yourself the option of going to 64GB and only want to use one 32GB stick for now, then going Crucial/Micron may be better.
[Images: DIMM label photos and Memtest86 results]
 

TechUnsupport

Member
Sep 29, 2024
32
11
8
So, here are the results of the tests on all the RAM I put through the process.
[Image: RAM compatibility test results]
There is a datasheet for the SK Hynix HMA82GS7DJR8N-VK if you use your Google-fu.
 
  • Like
Reactions: blunden

JDE1000

New Member
Dec 23, 2024
6
0
1
I use OPNsense. I want to change my LAN interface from igc0 to ix0. How can I do this without losing access to the web interface? I have both ports connected to the same switch. If I just change the assignment in an SSH shell and then disconnect the igc0 cable, will that work? I'm really worried about losing web access.

Thank you
 

VivienM

New Member
Jul 7, 2024
22
6
3
Toronto, ON
I use OPNsense. I want to change my LAN interface from igc0 to ix0. How can I do this without losing access to the web interface? I have both ports connected to the same switch. If I just change the assignment in an SSH shell and then disconnect the igc0 cable, will that work? I'm really worried about losing web access.

Thank you
Why not do it from the local console? There's a CLI option right when you log in to change the interface assignments...

Realistically, I suspect you could do it in the web interface and it would just work fine, but like you, I'd like to have a backup plan first.
 

TechUnsupport

Member
Sep 29, 2024
32
11
8
I use OPNsense. I want to change my LAN interface from igc0 to ix0. How can I do this without losing access to the web interface? I have both ports connected to the same switch. If I just change the assignment in an SSH shell and then disconnect the igc0 cable, will that work? I'm really worried about losing web access.

Thank you
I think you can do it with a bridge. Make a bridge and assign the IP to the bridge. You can have one IP on the physical interface and a different IP on the bridge.

In fact, it's better to use a bridge instead of the actual physical interface. It's more flexible for future changes.
 

sko

Active Member
Jun 11, 2021
379
234
43
I think you can do it with a bridge. Make a bridge and assign the IP to the bridge. You can have one IP on the physical interface and a different IP on the bridge.

In fact, it's better to use a bridge instead of the actual physical interface. It's more flexible for future changes.
NO, don't do that! That's not what bridges are for - you will create a loop and the switch will most likely disable both interfaces.

Just use a lagg interface configured as failover, with the ix interface as primary and igc as fallback.
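For reference, roughly what that looks like from a FreeBSD-style shell (in OPNsense you'd normally create it in the GUI under Interfaces > Other Types > LAGG and then assign LAN to lagg0, since changes made by hand in a shell won't persist) - a sketch only, with an example address:
Code:
# failover lagg: the first laggport listed is the primary; traffic moves to the next one if it drops
ifconfig lagg0 create
ifconfig lagg0 laggproto failover laggport ix0 laggport igc0
ifconfig lagg0 inet 192.168.1.1/24 up      # example LAN address - use your own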
 

JDE1000

New Member
Dec 23, 2024
6
0
1
NO, don't do that! That's not what bridges are for - you will create a loop and the switch will most likely disable both interfaces.

Just use a lagg interface configured as failover, with the ix interface as primary and igc as fallback.
This is interesting. Mine is an unmanaged switch, though - will that work?