Hmmm... Recently been looking at finally putting together a low-power Docker cluster for home. Thinking ~3 of these might be a good setup. Certainly better than any ARM solution I can find, and should perform better than the Intel J4105/5005 setup I'd been considering (not that such systems can currently be found anyways). Anyone using them in such a way or similar (e.g., Proxmox + LXC containers)?
I have two running Proxmox and LXC + Docker. I know it's redundant but the LXC ecosystem isn't really there compared to Docker, so I end up using LXC for interactive containers and Docker to just pull containers and run them.
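For anyone wanting to replicate the LXC + Docker setup above on Proxmox, the key is enabling nesting on the container. A rough sketch (the container ID, storage name, and template version are placeholders, adjust for your install):

```shell
# Create an unprivileged LXC container with nesting enabled so Docker
# can run inside it (template name/version here is just an example)
pct create 200 local:vztmpl/debian-11-standard_11.7-1_amd64.tar.zst \
  --hostname docker-host \
  --cores 2 --memory 2048 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --unprivileged 1 \
  --features nesting=1,keyctl=1

pct start 200

# Then install Docker inside the container as usual
pct exec 200 -- sh -c "apt-get update && apt-get install -y docker.io"
```

`keyctl=1` is needed by some Docker images in unprivileged containers; drop it if you don't hit keyring errors.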
Any reports on just how effective these are? Full IPMI would be preferred, of course, but at ~$100/ea. I can certainly live with less. From what I gather after a quick search, DASH has some basic web functionality for power cycling and such. Correct? Does the serial console offer full BIOS config access? I'd rather not have to hook these up to a keyboard+display for setup and maintenance if possible.
I've been considering putting together some kind of RPi serial console server anyways...
I looked into this a couple of months ago and didn't make a lot of progress. DASH requires a Windows desktop to run the client as far as I can tell, and I basically live on my Chromebook these days, so I'd need to make a Windows VM and control it remotely to test and ... I just haven't had time. I have both of mine hooked up to a standard USB/4K HDMI KVM under my desk so I haven't done much with serial control either.
Side note, if you end up putting together an RPi serial console server let me know how it goes, I've been wanting to do the same thing.
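If it helps anyone get started on the RPi console server idea, ser2net is probably the simplest route: it exposes a serial port over TCP. A sketch, assuming a USB serial adapter showing up as /dev/ttyUSB0 (device path and port number are assumptions):

```shell
# Install ser2net on the Pi
sudo apt-get install -y ser2net

# ser2net 4.x uses a YAML config; add an entry like this to
# /etc/ser2net.yaml to expose the adapter on TCP port 2000:
#
#   connection: &con01
#     accepter: tcp,2000
#     connector: serialdev,/dev/ttyUSB0,115200n81,local

sudo systemctl restart ser2net

# Then from any machine on the LAN:
telnet raspberrypi.local 2000
```

One port entry per USB adapter gets you a multi-machine console server out of a single Pi and a USB hub.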
Also interested in this, if anyone has some numbers. I may use one as a VPN box or for pfSense.
...
Any more info on 10 Gb NICs in this box? I'd like to go Intel, if only for wide OS compatibility (IIRC Mellanox support isn't quite there under the BSD-based systems).
Also, in general how good at actually utilizing a >1 Gb NIC are these? General routing performance numbers?
Thanks all.
I can definitely answer these. The first thing I did with mine was run iperf over the 10Gb link, and saturated it, so no worries there. If you can point me at a simple recipe to test OpenVPN or OpenSSL performance between machines I'd be happy to give you those numbers too. With the radeon.bapm driver option running you'll get boost clocks up to 3.6 GHz, so performance should be "Pretty Good"(tm).
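On the "simple recipe" question: something like the following should give comparable numbers (this is a sketch, not authoritative; the IPs are placeholders and iperf3 needs to be installed on both boxes):

```shell
# 1) Raw TCP throughput between two machines
# On the server box:
iperf3 -s
# On the client box (substitute the server's LAN address):
iperf3 -c 192.168.1.10 -t 30 -P 4

# 2) Single-core crypto throughput, a rough proxy for VPN potential
# (OpenVPN's data channel commonly uses AES-GCM)
openssl speed -evp aes-256-gcm

# 3) Actual tunnel throughput: bring up the OpenVPN link, then run
# iperf3 again against the peer's tunnel address (placeholder below)
iperf3 -c 10.8.0.1 -t 30
```

Comparing (1) against (3) shows how much the VPN is costing you; (2) tells you whether you're crypto-bound on a single core.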
Mellanox support for CX2/CX3 seems fine to me under *BSD, I think I might've had to build the mlx4.ko module manually for pfSense and ssh it over to the machine when I tested last year but it was functional beyond that IIRC. If you're going to run bare metal pfSense with a Mellanox NIC I'd plug one of the onboard NICs into your management network so you can remote in after upgrades if new drivers are required. You might not even need to build the module on pfSense 2.4.x, I haven't tested since 2.3.
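For reference, getting a locally built Mellanox module onto pfSense and loading it at boot looks roughly like this (a sketch; the hostname is a placeholder and the module names assume FreeBSD's in-tree mlx4/mlx4en driver for ConnectX-3):

```shell
# Copy the modules built on another FreeBSD box over to the firewall
scp mlx4.ko mlx4en.ko root@pfsense:/boot/modules/

# Load at boot; on pfSense, loader.conf.local survives upgrades
# better than editing loader.conf directly
echo 'mlx4en_load="YES"' >> /boot/loader.conf.local

# Or load immediately without a reboot:
kldload /boot/modules/mlx4en.ko
```

The fallback-NIC advice above still applies: if an upgrade rebuilds the kernel, the module may need rebuilding before the Mellanox link comes back.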
The only real gotcha I've had with NICs in these is boot issues. I had an Intel X520 in one and it caused all sorts of problems cold booting. Once the machine was up and running everything worked perfectly, but sometimes it'd take 3-4 attempts to get to that point. I'm having the same issue with a 40Gb CX3 right now, I'm hoping to sort that out this week so I can switch my main router over to pfSense on Proxmox on a DT122.
I've had zero issues with the Broadcom card in my other box though, FWIW:
Code:
01:00.0 Ethernet controller: Broadcom Limited NetXtreme II BCM57810 10 Gigabit Ethernet (rev 10)
Subsystem: Hewlett-Packard Company Ethernet 10Gb 2-port 530SFP+ Adapter
Also, to anyone else reading who has a DT122 in hand already: I'm moving in about 3.5 weeks and my HackerMan(tm) time is super limited right now, if anyone else can help answer some of these questions that would be awesome. I'll have plenty of time again starting at the beginning of June.