PVE OS install


crembz

Member
May 21, 2023
35
0
6
Decided to build my home lab using Proxmox as opposed to ESXi. Major reasons against vSphere are hardware compatibility (especially NICs) and the uncertainty around the Broadcom acquisition.

I've done some testing using ZFS, which I really like functionally. However, I noticed that when using NVMe vs SATA drives with ZFS on PVE there was zero difference in performance, with both returning almost identical results.

Question: is this specific to PVE or generally experienced with ZFS? Would a PVE OS install be better off on a consumer NVMe ZFS mirror with enterprise SATA for VM stores, or the other way around? I'm looking to either pass through the NVMe drives to a TrueNAS VM or the entire SATA controller to the TrueNAS VM, depending on where the PVE root is installed.
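One likely explanation for identical numbers is that the benchmark was mostly hitting ZFS's ARC (RAM cache) rather than the drives. A rough way to check is to run fio with sync writes and a file larger than the ARC, once per pool. This is a sketch; the pool names "fastpool" and "slowpool" are placeholders, not anything from this thread:

```shell
# ZFS's ARC caches aggressively, so small async tests mostly measure RAM.
# Sync 4k random writes with a large file are more likely to reach the devices.

# Against the NVMe-backed dataset (hypothetical pool name "fastpool"):
fio --name=nvme-test --filename=/fastpool/fio.dat --size=16G \
    --rw=randwrite --bs=4k --ioengine=psync --sync=1 \
    --runtime=60 --time_based --group_reporting

# Same workload against the SATA-backed dataset ("slowpool") for comparison:
fio --name=sata-test --filename=/slowpool/fio.dat --size=16G \
    --rw=randwrite --bs=4k --ioengine=psync --sync=1 \
    --runtime=60 --time_based --group_reporting

# Clean up the test files afterwards:
rm /fastpool/fio.dat /slowpool/fio.dat
```

If the sync-write numbers still match, the bottleneck is probably elsewhere (CPU, sync handling, or recordsize) rather than the drives themselves.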
 

zunder1990

Active Member
Nov 15, 2012
210
72
28
You have to remember that Proxmox is based on a modern version of Debian, so hardware support is very good. Proxmox also ships with native ZFS support. What I have done for my installs is install Proxmox on a boot disk, then keep VMs on a different set of disks.
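The boot-disk-plus-separate-VM-pool layout described above can be sketched roughly like this; the device names and pool/storage names here are examples only, not from the post:

```shell
# Hypothetical layout: Proxmox installed on /dev/sda, two spare disks for VMs.
# Create a mirrored ZFS pool on the spare disks:
zpool create -o ashift=12 vmpool mirror /dev/sdb /dev/sdc

# Register the pool with Proxmox so it appears as a storage target for VM disks:
pvesm add zfspool vmstore --pool vmpool --content images,rootdir
```

This keeps the OS install and the VM data on separate devices, so reinstalling PVE doesn't touch the VM pool.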
 

crembz

Member
May 21, 2023
35
0
6
zunder1990 said:
> You have to remember that proxmox is based on modern version of Debian so hardware support is very good. Proxmox also ships with native zfs support. What I have done for my installs is install proxmox on a boot disk then keep VMs on a different set of disks.
Is your boot disk a ZFS mirror? Are you using NVMe, SATA SSD, or HDD?
 

zunder1990

Active Member
Nov 15, 2012
210
72
28
My boot disks are all single HDDs using the default OS formatting, which I think is ext4. Not long ago I was using ZFS on my SATA SSDs, but I have now switched to Ceph on those SATA SSDs. I now have a five-node Proxmox and Ceph cluster.
 

crembz

Member
May 21, 2023
35
0
6
Ah yeah, I suppose you don't need to protect the boot disk much if you're using Ceph.

Any advice regarding sizing, or gotchas you can pass on?
 

zunder1990

Active Member
Nov 15, 2012
210
72
28
I mean, what are your goals: single node or multiple nodes?
If multiple nodes: better to have an odd number of nodes, 10Gb Ethernet minimum, and SSDs only for VM disks.
Single node: use ZFS storage for VM disks, again SSD only.
If you need bulk storage, don't bother with something like disk passthrough and FreeNAS. Just SSH into the Proxmox host and enable NFS or SMB exports directly from the host; again, it is just normal Debian under the hood.
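Serving exports straight from the host, as suggested above, can look something like this. The dataset path and subnet are placeholders, assuming a ZFS dataset mounted at /tank/bulk on the Proxmox host:

```shell
# Install the NFS server (Proxmox is Debian underneath):
apt install nfs-kernel-server

# Export the directory to the local subnet:
echo '/tank/bulk 192.168.1.0/24(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra

# Or, since the pool is ZFS, let ZFS manage the export itself:
zfs set sharenfs='rw=@192.168.1.0/24' tank/bulk
```

The trade-off versus a TrueNAS VM is less management UI, but no PCIe passthrough complexity and one fewer layer between the disks and the network share.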
 