It's not the bandwidth or speed of 1 GbE; it's the fact that there is only one NIC per NUC. Proper operation of Ceph calls for two NICs per node (if I'm going to use Ceph, which I'd like to if possible).
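For reference, the two-NIC expectation comes from Ceph's split between a public (client-facing) network and a cluster (replication/heartbeat) network, each typically on its own interface. In `ceph.conf` that split looks something like this (the subnets here are made up for illustration):

```ini
[global]
# client and monitor traffic on the first NIC's subnet
public_network = 192.168.1.0/24
# OSD replication and heartbeat traffic on the second NIC's subnet
cluster_network = 10.10.10.0/24
```

With a single NIC you can still run Ceph by putting both networks on the same subnet; you just lose the isolation of replication traffic, which is the whole point of the second interface.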
I could use a USB3-to-Ethernet adapter for the second NIC, but this gives me a bad feeling. I'd rather install another real Ethernet adapter in each NUC if possible.
They don't need to share storage, but if they could, that would be fantastic. I'll forgo Ceph if necessary, but I'd rather not if there's a way to implement it on this cluster.
Correct. I wouldn't get these NUCs if I were purchasing them today. But I'm on a mission to make do with the hardware I have and to minimize spending. This is just one of several options I'm currently investigating using existing hardware. And using NUCs to experiment with OpenStack and Ceph is very convenient (aside from the single-NIC issue).
Well, there are several ways of dealing with the single-NIC NUC issue, and all of them have their drawbacks.
a) Managed switch with VLAN tagging
That may or may not work: if you saturate the port, all the VLANs won't help you much. And it's still a single point of failure.
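If you go the VLAN route, the Linux side is simple enough: one tagged sub-interface per network on the single NIC. A minimal sketch (the interface name, VLAN IDs, and addresses are all assumptions; match them to your switch config):

```sh
# create tagged sub-interfaces on the single NIC (eno1 assumed)
ip link add link eno1 name eno1.10 type vlan id 10   # public network
ip link add link eno1 name eno1.20 type vlan id 20   # cluster network
ip addr add 192.168.1.11/24 dev eno1.10
ip addr add 10.10.10.11/24 dev eno1.20
ip link set eno1.10 up
ip link set eno1.20 up
```

The corresponding switch ports have to be configured as trunks carrying both VLAN tags, which is why this option requires a managed switch in the first place.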
b) Try an Allied Telesis AT29M2-SC Fiber NIC.
This is probably the cheapest M.2 Key-E (PCIe x1) NIC you can buy, as there is an eBay seller who sells them for about 10 USD each. The guy has been vetted by forum members here and is fairly legit (at least one of us has been to the vendor's warehouse out on Long Island). The card is a volume item, as it was designed for the HP t730 thin client (which is quite popular here as a cheap super-NUC - I should know, I wrote the guide for it here). It's based on the old Broadcom Tigon (tg3), so it's a mature and well-understood design with rock-solid driver support for everything out there.
The problem?
- It might not fit your chassis
- The connector on the card is multimode SC (which has been superseded by LC connectors), and buying OM1/OM2 fibers (with SC on one side and LC on the other) for them is a bit of a sunk investment (low probability of reuse - everyone wants singlemode optical these days)
- You need media converters (with appropriate connectors) to connect to standard network equipment, which negates the savings. There are multimode SC based media converters, but they run about 40 bucks a pop, so it's an extra layer of complication that you don't need.
c) M.2 to PCIe extender
Something like this. It's not a volume item, and frankly, you still need a NIC for it. And running those cables is a bit of a hardware hack.
d) Something like this
It's cheap, and it'll probably work, but it's Realtek based, and until recently, Realtek NICs were radioactive dumpster fires in Linux: kernel panics, pegged CPUs, way too many interrupts - not great. They're better nowadays, but I still don't like Realtek NICs for any heavy lifting. I'm also not sure whether it'll fit in the chassis.
There might be other ways, like using an M.2 to MiniPCIe adapter and then a MiniPCIe based NIC. The problem is that both the adapters and the NICs are typically low-volume items meant for embedded/industrial applications, so they're expensive.