Hey team!
Here is my current setup: Dell R620:HBA->NetApp. Running TrueNAS Scale on bare metal, with a whole bunch of containers managed by Portainer - CoreDNS, Traefik, and all the good stuff. The capacity tier is a ZFS pool on the NetApp shelf. It runs a couple of backup tasks to Google and AWS. Nothing super mission-critical.
It works well and has worked well for years. Now I want to keep sharpening my claws at home on cool IT stuff. I want to update the hardware to something a bit more modern. And most importantly - I want it to be more flexible. The big /sad about TrueNAS is that it's not really designed to run docker-compose. Yes, I duct-taped compose onto it, but it might go kaput at any update. Plus, I want to migrate all this good stuff to Kubernetes. But I want to spin up my own cluster from scratch, again managed by Portainer and not by TrueNAS.
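For the "own cluster from scratch" part, here's roughly what I have in mind - a sketch only, assuming k3s as the distro (could just as well be kubeadm), with the hostname being a placeholder:

```shell
# Placeholder sketch, not a tested runbook. Hostname and token are made up.

# On the "efficient" box - bring up the first server node:
curl -sfL https://get.k3s.io | sh -s - server --cluster-init

# Grab the join token it generates:
cat /var/lib/rancher/k3s/server/node-token

# On the big box - join as an agent (Plex, HA helpers, etc. land here):
curl -sfL https://get.k3s.io | K3S_URL=https://efficient-box:6443 K3S_TOKEN=<token> sh -

# Sanity check from the server node:
kubectl get nodes
```

From there Portainer can attach to the cluster via its Kubernetes agent, so I keep the same management UI I'm used to from the Docker days.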
So, here is the wants list: 2-3 servers. One small and efficient, that can run for a long time on UPS if the power goes out. Another one is a big box with a ton of RAM and CPU. That one will be connected to a disk shelf or shelves. It will run stuff that I can live without if the power goes out, like Plex, and it will help the "efficient" box with containers that can do high availability. Some day I would also love to play with UpTime's Compute Blades, when he releases them.
Now I need to figure out how to put this all together. The obvious answer is a Proxmox cluster - it does everything I want and more, I've just never used it. For some reason I'm gravitating towards VMware. I tried to run the initial setup on ESXi, but I failed to figure out how to do a LAG in vSphere, so I went bare metal. Yesterday I finally figured out my precious LAG on the vDS (yay!).
Now, the storage. I don't have any bright ideas. The best one so far is to spin up a VM with whatever has the best ZFS implementation and serve it back to vSphere via iSCSI. Or just serve the storage directly from that appliance. Or is there a better way to utilize 20+ 14TB drives in the SAS shelf? Maybe some cool trick to serve it to K8s directly? vSAN? Ceph? The data on the big array is not critical - everything important gets backed up to the cloud. If I lose the array's data, it's not ideal, but on the other hand it's not bad enough to justify data replication.
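To make the "ZFS VM serving iSCSI back to vSphere" idea concrete, here's the shape of it as I picture it - a sketch only, with device names, pool layout, sizes, and IQNs all being placeholders:

```shell
# Placeholder sketch, untested. Device names, sizes, and IQNs are made up.

# 20 x 14TB as two 10-disk raidz2 vdevs (one possible layout among many):
zpool create tank \
  raidz2 sda sdb sdc sdd sde sdf sdg sdh sdi sdj \
  raidz2 sdk sdl sdm sdn sdo sdp sdq sdr sds sdt

# Carve out a sparse zvol as the backing device for a vSphere datastore:
zfs create -s -V 20T -o volblocksize=64k tank/vsphere

# Export it over iSCSI with targetcli (Linux LIO target):
targetcli /backstores/block create name=vsphere dev=/dev/zvol/tank/vsphere
targetcli /iscsi create iqn.2024-01.lab.home:tank
targetcli /iscsi/iqn.2024-01.lab.home:tank/tpg1/luns create /backstores/block/vsphere
targetcli /iscsi/iqn.2024-01.lab.home:tank/tpg1/acls create iqn.1998-01.com.vmware:esxi-host
targetcli saveconfig
```

The obvious downside is the chicken-and-egg: the ZFS VM has to be up before the datastore it serves exists, so the VM itself needs to live on local disk. Serving the pool to K8s directly (e.g. via a CSI driver on top of the same appliance) avoids the vSphere loop entirely.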
I would greatly appreciate ideas on how to implement something unnecessarily complicated but cool.
(I am Ops/DevOps - practicing this is useful for my job)
Thank you.