Harvester HCI platform


Tearfirma

New Member
Feb 23, 2022
1
1
1
Hi All,

Has anyone here used Harvester from Rancher? I'm considering it for my new home lab setup, which is currently built on the full VMware stack with vSAN, NSX, etc.

I know, about time: long-time reader, first time actually posting.

Would be awesome if STH did a comparison article on Harvester, as it is maturing into a full product and appears to be getting close to something that could be considered a 1.0 release.

The reason I am looking is that I want overlay networking, and Proxmox doesn't do that. TrueNAS SCALE is out, but I have no idea if they will add this at any point. VMware consumes a lot of resources just to get it up, but it does offer me the advantage of learning I can take to work for our cloud provider setup.
 
  • Like
Reactions: AveryFreeman

globstarr

New Member
Feb 19, 2022
6
1
3
I just found out about this project tonight. I watched a video with Chen, who seems to be one of the primary leaders of the project.

It seems like it solves the issue of dealing with multiple physical nodes while wanting mostly k8s workloads. The attractive thing for me is the k8s api compatibility. I don't have the hardware for it though.

I agree that someone like STH should do a review, since it takes 3+ machines with 8-16 cores and not everyone has that kind of hardware. The docs seem to suggest it scales to "the edge" but system requirements for etcd seem to demand fancier machines for the first three nodes you add.
 

Sogndal94

Senior IT Operations Engineer
Nov 7, 2016
114
72
28
Norway
I am also looking into SUSE Harvester, and it would be fun to know more about it. Has anyone tried HA, failover clustering, and vSAN(ish) features?
 
  • Like
Reactions: AveryFreeman

AveryFreeman

consummate homelabber
Mar 17, 2017
413
54
28
42
Near Seattle
averyfreeman.com
It seems like it solves the issue of dealing with multiple physical nodes while wanting mostly k8s workloads. The attractive thing for me is the k8s api compatibility. I don't have the hardware for it though . . . it takes 3+ machines with 8-16 cores and not everyone has that kind of hardware. The docs seem to suggest it scales to "the edge" but system requirements for etcd seem to demand fancier machines for the first three nodes you add.
I set it up in VMs on my vCenter setup. I have 3x E5 v3/v4 nodes w/ 10+ cores ea, but usually only run 2 because I just don't need to waste the power.

Anyway, I didn't see any issue setting up 2 nodes; it didn't complain at me or anything. This was with Harvester 1.0 stable right when it was released. I allocated 4 cores, 24GB RAM, and ~60GB storage just for the OS of each node, which felt like the bare minimum, but it seemed to run fine.

The OS installer is *really* easy for how complicated the stack is. The cluster is set up at the same time you install the OS. I know basically nothing about cloud orchestration; I only use podman for toolbox, the occasional app, and deploying web apps to a 1-node VPS.

I haven't gotten to transferring over any VMs yet though. I should try it out and write up something for the OP.
 

AveryFreeman

consummate homelabber
Mar 17, 2017
413
54
28
42
Near Seattle
averyfreeman.com
I've tested on 1 node.
I think it needs 16+ vCPU and 32 GB RAM to be OK;
with 8 vCPU I can create a 4 vCPU VM.
For Windows, I use the VirtIO drivers from the Fedora Project
If you're running a Windows VM as a kubelet, it's presumably only for one thing, though, right? E.g. 1x for AD, 1x for an admin center; they could both be single or dual core depending on your workload(?)

Also, do you think for a small network you could get rid of the 2x domain controller requirement? Kubernetes would presumably spin a new VM up if the first one took a shit, right? (Theoretically speaking, of course - I'd probably still want to run two DCs, but conceptually the cloud orchestration makes 1x DC less of a horrific idea, amirite?)
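For what it's worth, Harvester's VMs are KubeVirt VirtualMachine objects under the hood, and KubeVirt's runStrategy is what governs that "respawn it" behavior: with Always, the controller keeps restarting the VM whenever it stops or crashes. A minimal sketch with hypothetical names (not anyone's actual setup):

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: dc1                       # hypothetical name for a lone domain controller VM
spec:
  runStrategy: Always             # KubeVirt restarts the VM whenever it stops or crashes
  template:
    spec:
      domain:
        cpu:
          cores: 2
        resources:
          requests:
            memory: 4Gi
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
      volumes:
        - name: rootdisk
          persistentVolumeClaim:
            claimName: dc1-root   # hypothetical PVC holding the DC's disk

That only covers the restart part; it does nothing about AD replication or data loss, so two DCs is still the safer call.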
 

AveryFreeman

consummate homelabber
Mar 17, 2017
413
54
28
42
Near Seattle
averyfreeman.com
I am also looking into SUSE Harvester, and it would be fun to know more about it. Has anyone tried HA, failover clustering, and vSAN(ish) features?
AFAIK all that stuff is set up out of the box, as most Kubernetes setups have at least failover clustering and high availability. The HA storage is an add-on called Longhorn. I've heard mixed reviews: supposedly pretty slow, but also extremely user-friendly. Seems like that would have a place.
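To make the Longhorn piece concrete: it shows up as a regular StorageClass (named "longhorn" by default), so a replicated volume is just an ordinary PVC. A minimal sketch, assuming the default class name and a made-up claim name:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-data               # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn    # Longhorn's default StorageClass
  resources:
    requests:
      storage: 10Gi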

It's really easy to set up, try it out on a virtual machine or an old spare computer or something.

With any of this general area of infrastructure, it's all so new that you've just gotta kinda hang on and see where it ends up going. CoreOS's Container Linux was forked by Kinvolk (which is now owned by Microsoft), Red Hat keeps buying up the companies behind stuff like KVM, OpenShift, Ceph, et al. (their list is HUGE), and now SUSE is getting into it by plunking down its millions for Rancher's ecosystem. So the big players' arms race is in full effect; it's just hard to see which of them will have enough staying power to last more than a few years.

I'd say, if you like the way they do things, go with that one. Personally, I think Rancher has staked itself out as the easiest entrance for complete noobs, making them basically "the Ubuntu of container platforms", if you will. Obviously, that's both positive and negative in many different respects. But the positive is that to many people they might become synonymous with container ecosystems, e.g. Ubuntu is the only Linux distro a lot of Windows users have ever seen or heard of, so when they think Linux, they think Ubuntu. Before you even debate the merits of the software, they're in the most powerful position regarding awareness. It could even become like Kleenex or Xerox, where the brand is synonymous with the technology - Ubuntu is fairly close to that in most circles, I'd argue, although not quite as much.
 
Last edited:
  • Like
Reactions: Marsh and Sogndal94

Sogndal94

Senior IT Operations Engineer
Nov 7, 2016
114
72
28
Norway
I might change my ESXi 7 with vSAN over to this. Does it really need 3 nodes, or can I run it with two (or, if needed, a witness VM on another host? I have this on VMware).
 
  • Like
Reactions: AveryFreeman

AveryFreeman

consummate homelabber
Mar 17, 2017
413
54
28
42
Near Seattle
averyfreeman.com
I might change my ESXi 7 with vSAN over to this. Does it really need 3 nodes, or can I run it with two (or, if needed, a witness VM on another host? I have this on VMware).
Give it a shot! Hope the witness works, as that's what I'd probably end up doing, too. FWIW, I had awful split-brain behavior during testing of DRBD9 between two hosts, so I've been pretty scared of HA filesystem reliability since.

For my use case, I'm having a hard time getting past my dependence on passthrough devices and host disks assigned to specific VMs, which I use for things like TV recording, security surveillance recording, and ZFS file servers whose HBAs are passed through via PCIe (with the disks subsequently attached directly to them).

If you have any creative suggestions for mitigating these issues in an HA Kubernetes environment, I'd love to hear your ideas! The closest I've seen that looks promising is k8s-device-plugins for PCIe passthrough: kubernetes-device-plugins/README.vfio.md at master · kubevirt/kubernetes-device-plugins -- but even this looks deprecated; not sure what's replaced it.
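For reference, plugins like that follow the standard Kubernetes device-plugin pattern: the node advertises the bound device as an extended resource, and a pod requests it by name. A rough sketch of the consuming side only; the label and resource names below are purely illustrative, since the real ones depend on the plugin and the hardware:

apiVersion: v1
kind: Pod
metadata:
  name: hba-consumer                        # hypothetical pod name
spec:
  nodeSelector:
    storage-hba: "present"                  # hypothetical label applied to the hosts that have the LSI HBA
  containers:
    - name: workload
      image: registry.example.com/zfs-fileserver:latest   # hypothetical image
      resources:
        limits:
          example.com/vfio-lsi-9207: "1"    # illustrative extended-resource name advertised by a device plugin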

I have two hosts with the exact same motherboard model (X10SLR-F) and could use the exact same LSI 9207-4i4e IT-mode HBAs in the exact same slot with the exact same BIOS settings, lol. It starts to sound pretty complicated, but perhaps possible in a perfect world...
 

Sean Ho

seanho.com
Nov 19, 2019
768
352
63
Vancouver, BC
seanho.com
Should be Node Feature Discovery now, right?

In the big picture, though, this is the unavoidable tension between node-specific hardware vs treating nodes as cattle. If you want all the HA failover goodness that k8s promises, then you need not just multiple nodes but multiple copies of all the hardware that's needed to support your services.

At home, I have a single Aeotec dongle interfacing to my Z-Wave IoT devices. I deploy its MQTT broker software via k8s, but in reality it's futile to run it on any node other than the one node that has the hardware dongle (I use a nodeSelector). In this case, I might as well have deployed the MQTT broker bare metal (though then I'd lose the ease of using helm, deploying ingress, and using CNI to talk to HomeAssistant core -- which is unrestricted in which node it can run on).
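For anyone following along, the nodeSelector bit is just a couple of lines in the pod template. A minimal sketch with hypothetical names (not Sean's actual manifest); the broker only ever gets scheduled on the node carrying the matching label:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mqtt-broker                                # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mqtt-broker
  template:
    metadata:
      labels:
        app: mqtt-broker
    spec:
      nodeSelector:
        kubernetes.io/hostname: node-with-dongle   # hypothetical node name; pins the pod to the host with the USB stick
      containers:
        - name: broker
          image: eclipse-mosquitto:2               # common MQTT broker image, used here only as an example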
 

AveryFreeman

consummate homelabber
Mar 17, 2017
413
54
28
42
Near Seattle
averyfreeman.com
Should be Node Feature Discovery now, right?

In the big picture, though, this is the unavoidable tension between node-specific hardware vs treating nodes as cattle. If you want all the HA failover goodness that k8s promises, then you need not just multiple nodes but multiple copies of all the hardware that's needed to support your services.
Well yeah, I know, that's why I said that, lol. It'll be really hard to treat nodes as cattle without considerable tooling or a major paradigm shift in how the software is used. As far as tooling goes, I already have very similar hardware on at least two devices (could be three), basically for this reason.

But Node Feature Discovery, I hadn't heard of that yet. Is this what deprecated k8s-device-plugins? They sound like the type of thing that might work in tandem.
 

Sean Ho

seanho.com
Nov 19, 2019
768
352
63
Vancouver, BC
seanho.com
I'm not certain as to the relationship between NFD and device plugins, but for me at least NFD was super easy to set up via Helm chart, and it adds a bunch of node labels that I can use for nodeSelector et al. I just had to configure it to spit out USB IDs for CDC devices like my Z-Wave dongle.
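For context, NFD publishes USB devices as node labels roughly of the form feature.node.kubernetes.io/usb-<class>_<vendor>_<device>.present=true (CDC-class devices have to be whitelisted in its config, as described above). A hypothetical selector for a Z-Wave stick could then look like this; the IDs and image are illustrative, not taken from Sean's setup:

apiVersion: v1
kind: Pod
metadata:
  name: zwave-bridge                          # hypothetical name
spec:
  nodeSelector:
    # NFD-style label: usb-<class>_<vendor>_<device>.present; the IDs below are illustrative
    feature.node.kubernetes.io/usb-02_0658_0200.present: "true"
  containers:
    - name: bridge
      image: zwavejs/zwave-js-ui:latest       # illustrative image choice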
 

Vesalius

Active Member
Nov 25, 2019
252
190
43
The closest I've seen that looks promising is k8s-device-plugins for PCIe passthrough:
Harvester is prioritizing SR-IOV and PCI passthrough for the 1.1.0 release. Whether they get there, and whether that satisfies your storage issues, is to be determined.
 
  • Like
Reactions: Brian Puccio

hchasens

New Member
Feb 10, 2022
5
3
3
I think I understand why Harvester doesn't install Rancher bare metal on the same system. But considering how tightly integrated the UI is for Rancher to run on Harvester, it bugs me that I need to run a VM on Harvester to get a Rancher node running on the same system. I don't like needing the VM overhead on every node. As it is now, it's lighter to run Rancher in an LXC on Proxmox. Kinda defeats the purpose if you ask me.

I'd like to see Harvester run Rancher either bare metal or in a container. None of this VM BS.
 
  • Like
Reactions: Brian Puccio

Vesalius

Active Member
Nov 25, 2019
252
190
43
I think I understand why Harvester doesn't install Rancher bare metal on the same system. But considering how tightly integrated the UI is for Rancher to run on Harvester, it bugs me that I need to run a VM on Harvester to get a Rancher node running on the same system. I don't like needing the VM overhead on every node. As it is now, it's lighter to run Rancher in an LXC on Proxmox. Kinda defeats the purpose if you ask me.

I'd like to see Harvester run Rancher either bare metal or in a container. None of this VM BS.
Have you tried installing Rancher on an Alpine Linux VM on Harvester? If possible, that should be about as lightweight as most any other Linux LXC on Proxmox.
 

hchasens

New Member
Feb 10, 2022
5
3
3
Have you tried installing Rancher on an Alpine Linux VM on Harvester? If possible, that should be about as lightweight as most any other Linux LXC on Proxmox.
No. I'm sure it'd be almost as light as an LXC container, but it just bugs me that we're still using virtualization technology to run two services that should be tightly integrated. I guess it's more of a peeve than actual reasoning. It just adds a level of unnecessary complexity to what could be a straightforward system. Imagine having to provision and run a separate VM to get access to the LXC feature in Proxmox. It wouldn't make much sense.