Best boot and VM drive configuration for Proxmox


glitch452

New Member
Jan 2, 2024
Hi Everyone,

I’m putting together a new home-lab build and I want to try out Proxmox VE for the first time.
I’m looking for some advice on storage configurations for the boot drive and a drive to run the VMs from.
  • What is your preferred configuration?
  • Do you use NVMe or SATA SSDs for both, or strictly enterprise-grade SSDs?
  • How do you handle redundancy & backups?
  • Do you mirror either of those drives? If so, how? Is there a good software RAID (ZFS/motherboard RAID) approach that also works for the boot drive?
Let me know your approach, and, importantly, why you made the decisions you did.

Thank you!
 

rmccue

New Member
Jan 7, 2024
Looking to know the same! I've read some interesting things about Proxmox writing to the boot drive fairly heavily (status databases and logs), which makes me a little wary, but much of that information also seems to be somewhat dated now.
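
If anyone wants to gauge the write load on their own drive, the SMART counters seem like the easy check (a sketch; /dev/nvme0 is an example device path):

    # total lifetime writes reported by the NVMe controller
    smartctl -a /dev/nvme0 | grep -i "data units written"
    # or, with nvme-cli
    nvme smart-log /dev/nvme0 | grep -i written

Watching how fast that number grows over a week would say more than old forum posts.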
 

SlowmoDK

Active Member
Oct 4, 2023
A decent NVMe drive will be more than enough for both boot and VM storage.

Redundancy is, IMO, more important for backups/archival data.

The hypervisor itself (Proxmox) can easily be replaced if proper backups are in place.

Of course, if you have the ability to also build redundancy into Proxmox itself, so much the better :)
 

SlowmoDK

Active Member
Oct 4, 2023
My main Proxmox host has 2x 2TB drives in ZFS RAID1 (mirror),
and my Proxmox Backup Server runs on a 5105 box with the same config (2x 2TB).

PBS comes highly recommended; it is by far the best solution for backups (incremental, and using snapshots).

But that is a production setup. For homelab/testing, one drive in each host will get you the same functionality, and even then one drive can fail and you are fine, since you can restore from the backups.
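
For reference, checking on a mirror like that is a single command (assuming the installer's default pool name, rpool):

    # show pool health and both sides of the mirror
    zpool status rpool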
 

louie1961

Active Member
May 15, 2023
I have installed and re-installed Proxmox probably 8 or 9 times, messing around with different drive configurations. The first thing I would say is that you are likely to do the same; it's just a fact of life as you learn.

My current main Proxmox host is an HP EliteDesk 800 G9 1-liter PC with an Intel i5-12500T processor and 32 GB of RAM. For reference, I am running two LXC containers (Leantime and Tracks) and two VMs as Docker hosts (one on my trusted VLAN and one for my internet-facing VLAN). I run 14 different Docker containers: Grocy front end, Grocy back end, 4 different Cloudflare tunnels, 2 instances of Watchtower, Heimdall, Uptime Kuma, Portainer, Portainer agent, some MariaDBs, PhotoPrism, and a couple of others I am forgetting. I also run two VMs running WordPress and one VM running Nextcloud.

My drive setup is two 1TB NVMe SSDs in a ZFS mirror that serves as both the boot drive and the VM drive. I think I am using about 40GB in total on that mirror; I run very small VMs. My default is to start them out at 8GB and expand them as needed. I have a NAS mounted as storage in Proxmox, and all of my backups, ISOs, and container templates go there.

The NVMe drives have been in service in one Proxmox machine or another for a year now, and they show zero wearout according to SMART. They are nothing special, just Team brand PCIe Gen 3 drives. I have disabled the following services, since I don't run a cluster: corosync, pve-ha-crm, and pve-ha-lrm. I read that doing so will reduce wear and tear on your SSDs; see the sketch below.

I also have another NAS that I mount all my Docker volumes to, as well as SMB mounts for Nextcloud and WordPress, so I have very little actual data sitting on my Proxmox machine. All my VMs and LXC containers are backed up nightly.
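
If you want to do the same on a standalone node, it's a single command (only do this if the box will never join a cluster):

    # stop and disable the cluster/HA services on a non-clustered node
    systemctl disable --now pve-ha-crm pve-ha-lrm corosync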

On the other side of the coin, my old Proxmox server was an HP Z640 with a 20-core Xeon and 64GB of RAM. I had installed an ICY DOCK in it that would hold six 2.5-inch SSDs. I had two 256GB SSDs in a mirror as the boot drive, two 2TB SSDs in another ZFS mirror as storage, and two NVMe drives in an ASUS Hyper M.2 PCIe adapter. I also had two spinning drives that were passed through to an instance of OpenMediaVault.

I have seen folks with a lot more drives than that in a Proxmox host, but I decided that I wanted to run my pfSense firewall on its own hardware, so that every time I take down my Proxmox host I don't trash the family's internet. I also like having my data reside off the Proxmox host, so I now have two NAS devices (a Synology and a TerraMaster), each sitting on a different VLAN (similar to what I do with my Docker hosts), and a third "NAS" that I made out of a Raspberry Pi 4. Both the TerraMaster and the Synology back up nightly to the Raspberry Pi and to the cloud. This way I have 3 total copies of my data: 2 onsite and 1 remote. Other people prefer to have many disks in a ZFS RAIDZ1 or RAIDZ2 array and run all their apps on one box (TrueNAS, pfSense, Plex, video editors, etc.). To me that's too many eggs in one basket.

The fun thing about Proxmox is that a fresh install only takes minutes. So plan on installing it and breaking it a few times until you get it just the way you like. That's half the fun of having a home lab.
 

glitch452

New Member
Jan 2, 2024
> The fun thing about Proxmox is that a fresh install only takes minutes. So plan on installing it and breaking it a few times until you get it just the way you like. That's half the fun of having a home lab.

I Love this ❤
 

zer0sum

Well-Known Member
Mar 8, 2013
Proxmox supports ZFS natively, so you can just configure the boot drive as a mirror during the initial setup.
I use that mirror just for Proxmox and never store VMs on it.

Then I add NVMe drives that hold my VMs, named nvme1, nvme2, etc.
If you keep the same storage names on other nodes, you can then do replication and migration directly between them without shared storage like Ceph or GlusterFS.
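
In practice that's two commands per pool plus a replication job, something like this (pool/device names, the VM ID, and the target node are examples):

    # create the VM pool (add 'mirror' and a second disk if you want redundancy)
    zpool create nvme1 /dev/nvme0n1
    # register it as storage, using the same name on every node
    pvesm add zfspool nvme1 --pool nvme1 --content images,rootdir
    # replicate VM 100 to node pve2 every 15 minutes
    pvesr create-local-job 100-0 pve2 --schedule "*/15"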

All backups are handled by Proxmox Backup Server
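
Hooking PVE up to PBS is a one-liner once the datastore exists (a sketch; server address, datastore name, and user are examples):

    # register the PBS datastore as a Proxmox VE storage backend
    pvesm add pbs pbs-backups --server 192.168.1.10 --datastore backups --username backup@pbs --fingerprint <cert fingerprint from the PBS host>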
 

kesawi

New Member
Feb 5, 2024
> Then I add NVMe drives that hold my VMs, named nvme1, nvme2, etc.
Do you format the NVMe drives that host your VMs as ZFS? I'm migrating my Hyper-V server to Proxmox and was considering the following configuration:
  • 2 x Crucial MX500 250GB 3D NAND SSDs in a ZFS mirror as boot drives (also storing ISOs and CT templates)
  • 2 x Samsung 850 PRO SATA III 256GB SSDs in a ZFS mirror for VM and CT volumes
  • 2 x Seagate ST8000VN004 8TB HDDs in a ZFS mirror for VM bulk storage
I am currently using two of the SSDs as cache drives for my mirrored storage pool on Hyper-V, but my understanding is that they may not offer much of an advantage as SLOG devices in ZFS?
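
From what I've read, a SLOG only helps synchronous writes, and it can be added and removed non-destructively, so I could just test it (a sketch; pool and device names are placeholders):

    # attach a mirrored log device to the pool
    zpool add tank log mirror /dev/sdx /dev/sdy
    # if it doesn't help, remove it again (vdev name as shown by 'zpool status', e.g. mirror-1)
    zpool remove tank mirror-1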