MiniKnight

The NUC-based home lab

Evan

Well-Known Member
Jan 6, 2016
3,123
522
113
Ok, no detail in this post, but I used to run a NUC6i3SYH with Hyper-V, a Samsung 860 Evo 512GB M.2, and a 2TB PM893 2.5", and it idled at close enough to 10W.
 

ullbeking

Active Member
Jul 28, 2017
395
33
28
41
London
Hello all, I have about 3-4 5th generation NUC's (2x NUC5i5RYH and 2x NUC5i7RYH) and I bought them with the specific intention of building a quiet, small Ceph cluster. I haven't read the links in this thread in detail yet, and I'm still trying to make friends and learn all about my NUC's, but a guideline would be super useful, please...

My primary issue is that these NUC's have only one NIC each. Is this going to make it more trouble than it would be worth to set up a NUC-based homelab?
 

WANg

Well-Known Member
Jun 10, 2018
870
496
63
Hello all, I have about 3-4 5th generation NUC's (2x NUC5i5RYH and 2x NUC5i7RYH) and I bought them with the specific intention of building a quiet, small Ceph cluster. I haven't read the links in this thread in detail yet, and I'm still trying to make friends and learn all about my NUC's, but a guideline would be super useful, please...

My primary issue is that these NUC's have only one NIC each. Is this going to make it more trouble than it would be worth to set up a NUC-based homelab?
Depends on your use. What are you planning to do on them?
 

ullbeking

Active Member
Jul 28, 2017
395
33
28
41
London
Depends on your use. What are you planning to do on them?
I want to create a quiet home server that will be put in my living room. The #1 requirement is quietness. I know I can fit 32 GB in each of these NUC's, which is great.

But the CPU's are only 2c/4t. In addition to joining them via Ceph for storage, I'd like to run a clustering system so that I effectively have a system equivalent to about 6 cores. I'm currently considering OpenStack, and possibly oVirt.

The clustering via OpenStack is more important than sharing storage via Ceph.

I will be running lightweight loads, nothing special. For example, web servers, photo albums, personal music streaming services. I also need an environment to develop software on from a remote client. For example, I would log in using SSH from a laptop to a node in the cluster to develop and build software.
 

WANg

Well-Known Member
Jun 10, 2018
870
496
63
I want to create a quiet home server that will be put in my living room. The #1 requirement is quietness. I know I can fit 32 GB in each of these NUC's, which is great.

But the CPU's are only 2c/4t. In addition to joining them via Ceph for storage, I'd like to run a clustering system so that I effectively have a system equivalent to about 6 cores. I'm currently considering OpenStack, and possibly oVirt.

The clustering via OpenStack is more important than sharing storage via Ceph.

I will be running lightweight loads, nothing special. For example, web servers, photo albums, personal music streaming services. I also need an environment to develop software on from a remote client. For example, I would log in using SSH from a laptop to a node in the cluster to develop and build software.
Okay, so why is the 1Gbit Ethernet port such an issue, then? Most of them are Intel i217s or equivalent, which should not hog too many CPU cycles flipping bits. The NUC-based nodes don't need to share storage or write back terabytes of data, and won't need multi-gigabyte/sec NVMe storage, right? Besides, if you want more powerful CPUs you probably wouldn't want Core i5/i7 U-series based Intel NUCs anyway.
 

ullbeking

Active Member
Jul 28, 2017
395
33
28
41
London
Okay, so why is the 1Gbit Ethernet port such an issue, then? Most of them are Intel i217s or equivalent, which should not hog too many CPU cycles flipping bits.
It's not the bandwidth or speed of 1 GbE, it's the fact that there is only one NIC per NUC. Proper operation of Ceph requires two NIC's per node (if I'm going to use Ceph, which I would like to if possible).

I could use a USB3-to-Ethernet adapter for the second NIC, but this gives me a bad feeling. I'd rather install another real Ethernet adapter in each NUC if possible.

The NUC-based nodes don't need to share storage or write back terabytes of data, and won't need multi-gigabyte/sec NVMe storage, right?
They don't need to share storage, but if they could then that would be fantastic. I will forego the use of Ceph if necessary, but I'd rather not if there's a way of implementing it on this cluster.

Besides, if you want more powerful CPUs you would probably not want Core i5/i7 U-series based Intel NUCs anyways.
Correct. I wouldn't get these NUC's if I were purchasing them today. But I'm on a mission to make do with what hardware I have and to minimize spending money. This is just one option of several that I'm currently investigating using existing hardware. And using NUC's to experiment with OpenStack and Ceph is very convenient (aside from the single-NIC issue).
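For what it's worth, Ceph's separate cluster network is a recommendation for production, not a hard requirement: both networks can live on one NIC, or on two VLAN subinterfaces over that NIC. A minimal sketch of the relevant ceph.conf lines, with placeholder subnets:

```ini
[global]
# Client and monitor traffic (placeholder subnet)
public_network = 192.168.10.0/24
# OSD replication/heartbeat traffic; point this at a second
# VLAN subinterface to split traffic over the single NIC
cluster_network = 192.168.20.0/24
```

If cluster_network is omitted entirely, Ceph simply carries replication traffic over the public network as well.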
 

Evan

Well-Known Member
Jan 6, 2016
3,123
522
113
If you have a good L3 switch then just create what looks like 2 ports by using vlan’s.
Have done that before. 1G kind of limits performance but really it should work just fine for what you want. I used to use NUC at home in a similar way, but not ceph.
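A rough sketch of that trick on the NUC side, assuming Linux with iproute2; eth0, the VLAN IDs 10/20, and the addresses are placeholders to match whatever the switch ports are tagged with:

```shell
# Create two tagged subinterfaces on the single physical NIC,
# e.g. one for public traffic and one for cluster traffic.
ip link add link eth0 name eth0.10 type vlan id 10
ip link add link eth0 name eth0.20 type vlan id 20
ip addr add 192.168.10.5/24 dev eth0.10   # public (placeholder)
ip addr add 192.168.20.5/24 dev eth0.20   # cluster (placeholder)
ip link set eth0.10 up
ip link set eth0.20 up
```

Both subinterfaces still share the same 1 Gbit of physical bandwidth, which is the performance limit mentioned above.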
 

WANg

Well-Known Member
Jun 10, 2018
870
496
63
It's not the bandwidth or speed of 1 GbE, it's the fact that there is only one NIC per NUC. Proper operation of Ceph requires two NIC's per node (if I'm going to use Ceph, which I would like to if possible).

I could use a USB3-to-Ethernet adapter for the second NIC, but this gives me a bad feeling. I'd rather install another real Ethernet adapter in each NUC if possible.

They don't need to share storage, but if they could then that would be fantastic. I will forego the use of Ceph if necessary, but I'd rather not if there's a way of implementing it on this cluster.

Correct. I wouldn't get these NUC's if I were purchasing them today. But I'm on a mission to make do with what hardware I have and to minimize spending money. This is just one option of several that I'm currently investigating using existing hardware. And using NUC's to experiment with OpenStack and Ceph is very convenient (aside from the single-NIC issue).
Well, there are several ways of dealing with the single NIC NUC issue, and all of them have their drawbacks.

a) Managed switch with VLAN tagging
That might or might not work, since if you saturate the port, the VLANs won't help you much. And it's still a single point of failure.

b) Try an Allied Telesis AT29M2-SC Fiber NIC.

This is probably the cheapest M.2 Key-E (PCIe x1) NIC you can buy, as there is an eBay seller who sells them for about 10 USD each. The guy has been vetted by forum members here and is fairly legit (at least one of us has been to the vendor's warehouse location out on Long Island). The card is a volume item, as it was designed to work with the HP t730 thin clients (which are quite popular here as a cheap super-NUC; I should know, I wrote the guide for them here). It's based on the old Broadcom Tigon (tg3), so it's a mature and well-understood design with rock-solid driver support for everything out there.

The problem?
- It might not fit your chassis
- The connector on the card is multimode SC (which has been superseded by LC connectors), and buying OM1/OM2 fibers (with SC on one side and LC on the other) for them is a bit of a sunk investment (low probability of reuse; everyone wants singlemode optical these days)
- You need media converters (with appropriate connectors) to connect to standard network equipment, which negates the savings. There are multimode SC based media converters, but they run about 40 bucks a pop, so it's an extra layer of complication you don't need.

c) M.2 to PCIe extender
Something like this. It's not a volume item, and frankly, you still need a NIC for it. And running those cables is a bit of a hardware hack.

d) Something like this
It's cheap and it'll probably work, but it's Realtek-based, and until recently Realtek NICs were radioactive dumpster fires in Linux: kernel panics, pegged CPUs, way too many interrupts, not great. It's better nowadays, but I still don't like Realtek NICs for any heavy lifting. I am also not sure if it'll fit in the chassis.

There might be other ways, like using an M.2 to MiniPCIe adapter and then a MiniPCIe based NIC. The problem is that those adapters are typically low-volume items meant for embedded/industrial applications, so they are expensive.
 

Marsh

Moderator
May 12, 2013
2,292
1,106
113
Run OVS with VLANs; it won't go any faster than 1Gb.
The benefit is that you do not need a managed switch; any dumb switch will work.
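A hedged sketch of the OVS variant (bridge, port names, and VLAN IDs are placeholders): VMs or host interfaces attach to an OVS bridge with per-port VLAN tags, and most unmanaged switches will pass the 802.1Q-tagged frames between hosts untouched:

```shell
# Create an OVS bridge and enslave the physical NIC as the uplink.
ovs-vsctl add-br br0
ovs-vsctl add-port br0 eth0

# Give the host two VLAN-tagged internal ports on that bridge,
# e.g. for public vs cluster traffic (VLAN IDs are placeholders).
ovs-vsctl add-port br0 pub0  tag=10 -- set interface pub0  type=internal
ovs-vsctl add-port br0 clus0 tag=20 -- set interface clus0 type=internal
```

OVS then handles the tagging and separation in software, which is why a managed switch isn't strictly needed.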
 

ullbeking

Active Member
Jul 28, 2017
395
33
28
41
London
If you have a good L3 switch then just create what looks like 2 ports by using vlan’s.
Why does it need to be a layer-3 switch? (Or router..?)

I am reconfiguring my home server, which lives in our living room. I am researching L2 switches in any case, and am already in the market.

So far my favorite choice is the Ubiquiti ES-24-LITE EdgeSwitch Lite. It does layer-2 switching and also provides layer-3 routing. I don't need the layer-3 routing capability, and I intend to leave layer-3 disabled in the switch's configuration.

Routing will be implemented with a Supermicro A1SRi-2758F 1U server. I've used these many times and they work well for this kind of thing. (I know about the Erratum AVR.54 defect.)

Have done that before. 1G kind of limits performance but really it should work just fine for what you want. I used to use NUC at home in a similar way, but not ceph.
Yes, I was pondering whether and how performance will be limited. The ES-24-LITE specifies "Total Non-Blocking Throughput" of 26 Gbps. Do you think this will allow for enough bandwidth to tag packets for two virtual NIC's on a single 1 GbE physical NIC, in each NUC?

I plan on using 3x - 5x 5th generation NUC's (NUC5i5RYH or NUC5i7RYH) each with 32 GB RAM.
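A quick back-of-the-envelope check on the throughput question, assuming the worst case of every NUC sending and receiving at full line rate:

```shell
# 5 NUCs, each with one full-duplex 1 GbE port:
# worst case is 1 Gbit/s in + 1 Gbit/s out per node.
nucs=5
per_node_gbps=2   # full duplex: 1 in + 1 out
echo "$((nucs * per_node_gbps)) Gbit/s aggregate vs 26 Gbit/s switch fabric"
```

So even with VLAN tagging, the switch fabric is nowhere near the bottleneck; the single 1 GbE port per NUC is.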
 

Evan

Well-Known Member
Jan 6, 2016
3,123
522
113
Does not need to be an L3 switch. For my specific use case I needed some traffic to traverse VLANs at decent speed; if I let my router handle that, it became a bottleneck, with more back and forth than required.

But you're totally right, you can do this with a smart L2 switch. (Especially if you're just using private VLANs for separation of traffic.)