However, I'm not sure if their new cooling solution is an upgrade or a downgrade.

I might have missed it if it was written on one of these 40 pages. Where can I find information comparing the new cooling solution with the old one?
...Why did this have to get so complicated? I do so love multiple revisions of the same product with an unclear changelog. I always get turned around buying from vendors on AliExpress.

If you want the latest revision, Qotom sells the Q20331G9 V1.2 C3758R on their AliExpress store. However, I'm not sure if their new cooling solution is an upgrade or a downgrade. I will be able to tell once the units I bought a few hours ago arrive.

Bizarre. I'm showing a price of over $530 for one C3758 unit. And tariffs are going to put ~$50 on top of that.
View attachment 44773

I bought it directly on the Qotom website, not AliExpress. Also, it's the C3758R in my case.

Thanks! Are you testing the desktop vs. rackmount, or deploying both of them?

One's getting colocated in a datacenter; the passively cooled one stays at home, but I'll be testing them both at home first. I was going to play around with MLVPN on FreeBSD, to bond 5G, cable, and landline connections.
Very cool. Now that I know I can buy directly from them at good prices, I'm seriously considering the desktop/passively cooled one. That's ... probably good enough for bare-metal OPNSense. If I got that one, I'd be strongly tempted to stash it in the closet and finally run fiber from the office closet to the network rack in my office. OTOH, it would be nice to finally have my core network devices all in the same rack. Decisions, decisions.

I've had my experiences with servers in my living room. It wasn't pleasant, to say the least.
My requirements were the following:
- Sub-50 watts for the datacenter (because it's dirt cheap at that usage: $29 for 10 TB traffic/month and 1 Gbit in 1U @ 50 VA, plus extras if you need them)
- Passively cooled for the home
- If possible, the same design for both, to keep maintenance low
- Good support for FreeBSD
- 10 Gbit

The N150/N355/laptop-based models would probably satisfy most of those, but the Denverton Atoms have the X553 10 GbE controller built into the CPU, while the X710 and other Ethernet controllers typically need some form of active cooling.

I'm about sold on trying the fanless one. <deletia> Do you have any real performance concerns for putting the fanless unit in a home/home office?

I doubt that I'll be capable of stressing the device, to be honest. Hardware acceleration is doing the heavy lifting. I'm not sure if I'd use it for virtualization workloads or as file storage with internal drives; temperatures in a passively cooled, closed enclosure are typically hostile to drives. The rest should be fine. I will be running network functions on it, though.
Awesome. I'd definitely just be using it for baremetal OPNSense. I'm not opposed to virtualizing OPNSense, but it's a whole vector of potential problems that I don't want in the firewall I use to work from home. Honestly, I'd love to try it at some point, but I can't afford to take my entire home/work setup offline while I get it working. Maybe in the future.

I'd say OPNSense as a virtual appliance is fairly standard now; I'd be surprised if there are still issues. When I talk about virtualized workloads, I'm thinking about high-load virtualized services like transcoding media, build bots/CI, or even VPNs, if the box didn't have Intel QuickAssist. Since the one that goes into the datacenter is actively cooled, that one will also be an off-site backup, with 1-2 large SATA drives. I'm typically running a virtualized network, with DNS/DHCP/CAPWAP controller/... services that are cheap to run in jails. I'm still thinking about running OpenWRT as an AP on bhyve with a Wi-Fi card passthrough. I'm very sceptical about this, though, since we're talking about passively cooling a Wi-Fi AP card in a closed enclosure. Also, a BananaPi R3 mini costs like $130, and an OpenWRT One is even cheaper (but larger/uglier). Many choices, and not enough answers :<
If you use direct-attached storage via that SFF-8087 port, that shouldn't be an issue; I'm just sceptical of a drive's lifetime if you put it inside the passively cooled enclosure without ventilation. Careful with compression, though: that can increase the load significantly. It should be fine to run some cheap compression algorithm. Any reason why you want to use SSDs? HDDs should be a lot cheaper, and for backups you probably wouldn't care too much about IOPS.
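To put a rough number on that "cheap compression algorithm" point, here is a minimal Python sketch; the payload and the zlib settings are stand-ins chosen for illustration, not anything from the setups in this thread:

```python
import random
import time
import zlib

# Dummy, semi-compressible payload standing in for backup data.
random.seed(0)
vocab = [b"alpha", b"beta", b"gamma", b"delta", b"epsilon"]
data = b" ".join(random.choice(vocab) for _ in range(200_000))

for level in (1, 9):  # 1 = a "cheap" setting, 9 = heaviest zlib effort
    start = time.perf_counter()
    out = zlib.compress(data, level)
    ms = (time.perf_counter() - start) * 1000
    print(f"zlib level {level}: {ms:6.1f} ms, {len(out)} bytes "
          f"({len(out) / len(data):.1%} of original)")
```

On a passively cooled, power-capped box the time column is the interesting part: the heavy setting typically buys a slightly smaller output for several times the CPU work, which is exactly the load increase being warned about.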
If I could run my firewall/DNS/VPN and Proxmox Backup Server on the same box with a couple of SATA SSDs, that'd be ideal. I'm a bit surprised that the C3758R is enough for all that; from the way some people write about it, it's getting a bit too dated. I've already got an 8-bay HDD NAS (4x ZFS mirrors, RAID1) with ~52 TiB of space and a 2x10 GbE link. My current PBS server is an Intel N6005 machine with 64 GiB of RAM, a 2x2.5 GbE connection, and 2x 1.92 TB enterprise SATA SSDs. If I moved Proxmox Backup Server onto this machine, I'd still want it backing up onto those SSDs for maximum speed, since it's suspending and backing up running VMs and LXC containers. (Due to how PBS works, including deduplication and breaking backups into chunks, backing up directly to an HDD array is not recommended.) That Proxmox Backup Server then sends a copy of those backups over to the HDD array on the NAS, with no performance impact on the VMs, since it's a backup of a backup.

In that case, let me compare the N6005 and the C3758R for you: from a pure core-for-core perspective, the C3758R should be about half as powerful, but it's got twice the cores. What really moves the needle is hardware acceleration: Intel QuickAssist accelerates encryption and compression, so in those workload categories the cores punch way above their weight compared to the N6005. I don't know that much about Proxmox, but from a pure backup perspective you typically have some write cache in front of the HDDs, and with 7200 RPM drives you can write at 300-400+ MB/s to a ZFS RAID10. That's why I was surprised.
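The core math in that comparison is easy to sanity-check. A back-of-the-envelope sketch in Python: the core counts are the CPUs' actual specs, but the per-core halving and the QuickAssist uplift (the ~20-25% figure that comes up just below) are the posters' rough estimates, not benchmark data:

```python
# Back-of-the-envelope only: relative throughput from the thread's own
# multipliers, not measured numbers.
n6005_cores = 4        # Pentium Silver N6005: 4 cores
c3758r_cores = 8       # Atom C3758R: 8 cores
per_core_ratio = 0.5   # estimate: one C3758R core ~ half an N6005 core
qat_low, qat_high = 1.20, 1.25  # claimed QuickAssist gain on crypto/compression

n6005_total = n6005_cores * 1.0  # one N6005 core = 1.0 unit
c3758r_total = c3758r_cores * per_core_ratio

print(f"All-core, no acceleration: {c3758r_total / n6005_total:.2f}x the N6005")
print(f"Crypto/compression with QuickAssist: "
      f"{c3758r_total * qat_low / n6005_total:.2f}x to "
      f"{c3758r_total * qat_high / n6005_total:.2f}x")
```

Under those assumptions the two chips land at rough parity for general multithreaded work, and the C3758R only pulls ahead where QuickAssist applies.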
That is indeed instructive. Wow. I knew the C3758R was old, but the N6005 isn't exactly new either. These sorts of comparisons are a lot more useful to me than raw benchmarks; I just don't look at enough of those to be able to absorb them well. Proxmox Backup Server supports encrypting backups as they're made, and it primarily works by breaking the data on a filesystem into chunks and storing those, which allows for deduplication without using ZFS deduplication, along with other performance enhancements versus a plain file-based backup.
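A minimal Python sketch of that chunk-plus-index idea (illustration only: the fixed 4 MiB chunks, SHA-256 keys, and in-memory dict are assumptions made for brevity, not PBS's actual on-disk format):

```python
import hashlib

CHUNK_SIZE = 4 * 1024 * 1024   # assumed chunk size for the sketch
store: dict[str, bytes] = {}   # digest -> chunk; stands in for on-disk storage

def backup(data: bytes) -> list[str]:
    """Split data into chunks, store new ones, return the digest list (the index)."""
    index = []
    for off in range(0, len(data), CHUNK_SIZE):
        chunk = data[off:off + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # dedup: identical chunks stored once
        index.append(digest)
    return index

def restore(index: list[str]) -> bytes:
    """Rebuild the original data from its digest list."""
    return b"".join(store[d] for d in index)

# Two "backups" of a mostly unchanged image share almost all chunks.
image_v1 = bytes(16 * 1024 * 1024)      # 16 MiB of zeros -> 4 identical chunks
image_v2 = image_v1[:-1] + b"\x01"      # one byte changed in the last chunk
idx1, idx2 = backup(image_v1), backup(image_v2)
assert restore(idx2) == image_v2
print(len(store))  # 2: one shared all-zero chunk plus the one modified chunk
```

Deduplication falls out of the content addressing: the second backup only adds the chunks that actually changed, which is also why the filesystem underneath doesn't need its own dedup layer.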
It's a 10 nm part, at least. But really, QuickAssist pushes the C3758R's performance to about 20-25% higher than the N6005 in encryption and compression.