Re-thinking homelab


Nipp

New Member
May 20, 2022
Hey team!
Here is my current setup: a Dell R620 with an HBA connected to a NetApp disk shelf. It runs TrueNAS SCALE on bare metal with a whole bunch of containers managed by Portainer: CoreDNS, Traefik, and all the good stuff. The capacity tier is a ZFS pool on the NetApp shelf, and it runs a couple of backup tasks to Google and AWS. Nothing super mission-critical.

It works well and has worked well for years. Now I want to keep sharpening my claws at home with cool IT stuff: update the hardware to something a bit more modern and, most importantly, make it more flexible. The big /sad about TrueNAS is that it's not really designed to run docker-compose. Yes, I duct-taped compose onto it, but it might go kaput at any update. Plus, I want to migrate all this good stuff to Kubernetes, and I want to spin up my own cluster from scratch, again managed by Portainer and not by TrueNAS.
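For the cluster itself I'm thinking of something lightweight. Just as a sketch (the distro isn't decided; k3s is only one option, and the addresses/token below are placeholders), bootstrapping would look roughly like:

Code:
# first server node, with embedded etcd so more servers can join later
curl -sfL https://get.k3s.io | sh -s - server --cluster-init

# on a worker/agent node, join using the token from
# /var/lib/rancher/k3s/server/node-token on the first server
curl -sfL https://get.k3s.io | K3S_URL=https://<server-ip>:6443 K3S_TOKEN=<token> sh -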

So, here is the wants list: 2-3 servers. One small and efficient box that can run for a long time on a UPS if the power goes out. Another is a big box with a ton of RAM and CPU, connected to a disk shelf or shelves. It will run the stuff I can live without during an outage, like Plex, and it will help the "efficient" box with containers that can do high availability. Someday I would love to play with Uptime's Compute Blades, once they're released.

Now I need to figure out how to put this all together. The obvious answer is a Proxmox cluster; it does all I want and more, I just have never used it. For some reason I am gravitating towards VMware. I tried the initial setup in ESXi but couldn't figure out how to do a LAG in vSphere, so I went bare metal. Yesterday I finally figured out my precious LAG on the vDS (yay!).
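If I do go the Proxmox route instead, my understanding is that the LAG equivalent is an LACP bond in /etc/network/interfaces, roughly like this (NIC names and addresses are placeholders, not my actual config):

Code:
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0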
Now the storage. I don't have any bright ideas. The best one so far is to spin up a VM with whatever has the best ZFS implementation and serve it back to vSphere via iSCSI, or just serve the storage directly from that appliance. Or is there a better way to utilize 20+ 14TB drives in the SAS shelf? Maybe some cool trick to serve them to K8s directly? vSAN? Ceph? The data on the big array is not critical; all the important stuff I back up to the cloud. Losing it would not be ideal, but on the other hand it's not bad enough to justify data replication.
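The sort of layout I'm imagining for the shelf is a couple of wide raidz2 vdevs. A rough sketch (device names are placeholders; in practice I'd use /dev/disk/by-id paths):

Code:
# one possible layout for 20x 14TB: two 10-wide raidz2 vdevs
zpool create tank \
  raidz2 disk0 disk1 disk2 disk3 disk4 disk5 disk6 disk7 disk8 disk9 \
  raidz2 disk10 disk11 disk12 disk13 disk14 disk15 disk16 disk17 disk18 disk19

# a dataset for bulk media, exported over NFS if I go the "serve it back" route
zfs create -o compression=lz4 tank/media
zfs set sharenfs=on tank/media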

I would greatly appreciate ideas on how to implement something unnecessarily complicated but cool.
(I am Ops/DevOps, so practicing this is useful for my job.)

Thank you.
 

DavidWJohnston

Active Member
Sep 30, 2020
Sounds like a good plan.

I can tell you my experience serving ESXi datastores via NFS from a TrueNAS VM running on ESXi itself. It works surprisingly well. One of my ESXi hosts has a SAS disk shelf (24x SSD) with hardware RAID5.

The array on the disk shelf is a local datastore, and TrueNAS has a 2TB vmdk on it which is served back to ESXi (vCenter) via NFS and mounted on all the hosts. Performance is very good, similar to the underlying SSD disk shelf array. I do run a 100G network, so that helps.
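If it helps, mounting that export as a datastore is a one-liner per host. The IP, export path, and datastore name here are made up for illustration:

Code:
esxcli storage nfs add -H 10.0.0.50 -s /mnt/tank/vmstore -v truenas-nfs
esxcli storage nfs list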

One thing to be aware of is the power-on order and delays: the shelf must power on first and go through its init, then the host, then TrueNAS, then the other hosts.

If the power-on is done out of order, it eventually self-heals once everything is on, but the VMs inside the TrueNAS NFS datastore will be in a disconnected state until they auto re-mount.
 

CyklonDX

Well-Known Member
Nov 8, 2022
ZFS over iSCSI is a bad idea. It's possible, but well, that's my opinion.

There are K8s storage plugins (CSI drivers) for 'direct' ZFS access or through NFS.
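For example, with the OpenEBS ZFS-LocalPV driver (just one option, and it assumes the driver is installed and the node already has a pool named "tank"), a StorageClass looks roughly like:

Code:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: zfs-local
provisioner: zfs.csi.openebs.io
parameters:
  poolname: "tank"
  fstype: "zfs"
volumeBindingMode: WaitForFirstConsumer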


Personally, in my homelab I run ZFS on its own small Supermicro 1U box with plenty of RAM (256GB), 2-4x 12Gbps SAS SSDs in RAID1 or RAID10 (or RAID1 plus spare/mirror), one or a few NVMe drives for caching, and a SAS3 8e HBA card attached to a Supermicro 4U JBOD chassis with 24x 3.5" bays. That connects back to the main KVM box through a switch, with teamed 2x 10Gig links on each side using Intel X540-T2s.
If you wanted, you could make it HA by adding a second 1U ZFS box, and maybe even a second JBOD, creating another ZFS pool there and treating the two pools as a mirror, but it would be over the network using NFS.