How should I partition my Proxmox install on 2x800GB Raid1 array?


el_pedr0

Member
Sep 6, 2016
I'm about to do my first Proxmox install and any advice on how to partition my drives would be much appreciated.

Helpful advice from this forum suggested I should install Proxmox on a RAID1 array of SSDs, which would then hold my OS, VMs/containers, etc.

However, I went and got a pair of 800GB S3700s (thanks @Patrick) - so bigger than I originally anticipated. Should I still use these exclusively for the OS, VMs & containers? Or should I also create a partition for a ZFS log and a ZFS cache (whatever they are)? Or should I use the excess space for more media/file storage?

My use case:
The system is a home server for media storage, file syncing (e.g. ownCloud), a media server, security cameras, remote backups of family data, etc. I can't imagine that it will ever be more than one node. The majority of my media & files will be put on my pair of 3TB WD Reds in this system, and I'll add more drives as I need them.

My hardware:
MB: Supermicro X11SSM-F
CPU: Xeon E3-1240 V5
RAM: 2x16GB ECC
SSD: 2x800GB Intel s3700
HDD: 2x3TB WD Red
(I also have a 2TB Seagate and a 2TB WD Green, but I think the WD Green is in the process of failing so I don't know if I'm going to use either of those 2TB drives)
 

ttabbal

Active Member
Mar 10, 2016
Don't partition them. ZFS mirror them and let Proxmox manage the space. It will create everything it needs.

ZFS cache and log devices are not relevant here. They are primarily useful to speed up spinning rust arrays.

You want there to be excess space; SSDs perform better when there is free space available. You could set up an area for files. If I were to do that, I would use it for actively used data, something like your documents folder, not media. If you decide to do that, you can just create a ZFS filesystem and share it with samba/nfs/etc. No need to partition; ZFS is a little different that way. I would put a size limit on it, just to make sure there is always space available for the VM storage. The size can be changed at any time as needed.
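Something along these lines would do it (the dataset name and the 200G cap are only examples, and "rpool" assumes the default pool name the Proxmox installer creates):

Code:
# Rough sketch -- "files" and 200G are placeholders, adjust to taste
zfs create rpool/files
zfs set quota=200G rpool/files
# the quota can be raised or lowered later with the same command
zfs get quota rpool/files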
 

Patrick

Administrator
Staff member
Dec 21, 2010
If you are going to use the S3700 800GB drives as a mirror, I would just let the Proxmox installer manage as @ttabbal mentioned. No need for cache on a big ZFS mirror.
 

el_pedr0

Member
Sep 6, 2016
Thanks both. I just installed Proxmox - what a breeze.

(Though the first time, the screen size wasn't adjusting itself correctly either for my IPMIView or when I stuck a monitor in, which meant that I couldn't see the right-hand side or bottom of the screen. And crucially I couldn't click the Agree or Next button to proceed through the install process. In the end I found screenshots online of the install screens, so I knew which combination of Alt+[a-z] to press to progress.

After I had successfully installed, I thought I'd install a second time in order to grab a screenshot of the problem, but I couldn't reproduce the error.)
 

el_pedr0

Member
Sep 6, 2016
You want there to be excess space, SSDs perform better when there is free space available.
So should I even go as far as setting up a dataset specifically to ensure there's always free space? I.e., create a new dataset, give it a reservation size, and then never store anything in that dataset? If so, how much space should I allow on my mirrored pair of 800GB S3700s?

You could set up an area for files. I would use it for actively used data if I were to do that. Something like your documents folder, not media etc.. If you decide to do that, you can just create a ZFS file system and share it with samba/nfs/etc..
So when sharing datasets with samba/nfs, should I just set the appropriate properties using the zfs command?

And should I do this within the Proxmox Debian OS, or should I create a virtual machine with something like Ubuntu Server, install ZFS on Linux, and then use that VM to handle the sharing (so that the Proxmox install is left as unaltered as possible)?
 

ttabbal

Active Member
Mar 10, 2016
I don't know that I would create an empty dataset, but it's not a terrible idea. For the size, I would see if there are some benchmarks showing performance at various %-full levels. Another thing to consider: ZFS likes to be <80% full. I don't think it is as big an issue on SSDs, but on spinners it can cause pretty big problems.
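If you did go the empty-dataset route, a reservation is all it takes. A rough sketch, assuming the default rpool pool; the name and the ~10% figure are made up:

Code:
# keep ~80G of the mirror permanently free by reserving it
# in a dataset that never stores anything
zfs create -o reservation=80G rpool/headroom
# remove or resize it later if needed:
#   zfs set reservation=none rpool/headroom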

For sharing, I did it right in the main Proxmox OS. There are various pros/cons to that. I initially tried using a container with bind mounts for sharing, but ran into a problem: you can't traverse datasets that way. So I couldn't bind mount /raid and access /raid/Backups; you have to do them all individually, which is really annoying when you have a lot of datasets. As the only configuration is /etc/exports and the samba conf file, I don't mind having that handled directly in the Proxmox OS. All the other services are managed in containers with just the mounts they need, which is a good way to handle that. Crashplan was the biggest hassle, but it's nice to have the backup store being the only writable mount point for that container. If it has an issue or is compromised somehow, it can only break that layer of backups.
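To illustrate the per-dataset bind mounts (the container ID and target paths are just placeholders; /raid/Backups is from my layout above):

Code:
# each child dataset has to be mounted individually -- mounting /raid
# alone shows empty directories where the child datasets live
pct set 101 -mp0 /raid/Backups,mp=/mnt/Backups
pct set 101 -mp1 /raid/Documents,mp=/mnt/Documents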
 

el_pedr0

Member
Sep 6, 2016
Every answer to one of my questions reveals more stuff I don't know about and leads to loads of reading and loads more questions!

Please could you share more info on your Crashplan experience? How have you got it set up, and what are the traps I need to avoid?
 

ttabbal

Active Member
Mar 10, 2016
Nice to hear Patrick, I'll keep an eye out for it.

For Crashplan, it wasn't really bad. Just need to manually create a bunch of bind mounts for everything you want it to be able to access. I used a Debian container for it. You create the container, manually install Crashplan from the command line, then set up the container to be a VNC server. Then you can VNC into the desktop and configure Crashplan.

Another option is to use the headless style setup from Crashplan's support documents. You use SSH to create a port-forward and copy the key from the server's config file so you can connect to it from another machine. It works well, but is annoying to manage the config file manually. VNC is just easier and only has to be set up once.
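Roughly, the headless route looks like this (the exact ports and file names vary between Crashplan versions, so treat these as placeholders):

Code:
# forward a local port to the CrashPlan service running in the container
ssh -L 4200:localhost:4243 root@crashplan-container
# then point the desktop client at localhost:4200 and copy the auth
# token from the server's .ui_info file so it accepts the connection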
 

el_pedr0

Member
Sep 6, 2016
Re: Crashplan. Thanks. I think I can just about picture that. I guess there are a few bind mounts (probably read-only) for the things you want to back up and one bind mount (rw) for the place where you store your backups. Hopefully it will all make sense when I get round to setting it up.

I've been trying to get my head around the nfs thing though. It sounds like you're configuring nfs the usual way by editing /etc/exports. But I was watching a video on ZFS that seemed to suggest I should use the
Code:
zfs share
command to set up the NFS share. And I thought the video implied that no further configuration was necessary. Do you know if using zfs share and /etc/exports are equivalent? And if not, is there one that is considered best practice?

Edit 1: Actually I've just found this list of best practice (perhaps a bit dated?), which says: "If using NFS, use ZFS NFS rather than your native exports. This can ensure that the dataset is mounted and online before NFS clients begin sending data to the mountpoint." Does this still hold true?

I haven't even started looking at SMB, but was under the impression that it would be handled in a similar way by using the zfs share command.
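For reference, the ZFS-native property approach the video seemed to describe was along these lines (the dataset name is made up; the access option follows normal NFS export syntax):

Code:
zfs set sharenfs="rw=@192.168.1.0/24" rpool/files
zfs set sharesmb=on rpool/files
# ZFS then (re)exports the filesystems when they are mounted
zfs share -a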
 

ttabbal

Active Member
Mar 10, 2016
If you are on Solaris, zfs share is the way to go. Sadly, it doesn't work well on Linux, so I just use the old-fashioned way. It works fine.

It's possible that an update fixed it on Linux. I got the impression that the OpenZFS group wasn't in any hurry with it though.
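For example, a plain entry per dataset in /etc/exports (the path and subnet are placeholders):

Code:
# /etc/exports -- one line per dataset
/raid/Documents  192.168.1.0/24(rw,sync,no_subtree_check)
# reload the export table after editing
exportfs -ra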
 

el_pedr0

Member
Sep 6, 2016
For sharing, I did it right in the main Proxmox.
Excuse me asking the blindingly obvious. Do you create a new user in your Proxmox Debian for every user that is going to access the SMB/NFS shares?

E.g. if I'm going to create a dataset to store my family's documents (they're Windows users), I guess I would need to create a Unix user in Debian for each Windows user.
 

ttabbal

Active Member
Mar 10, 2016
Yes, you create a Linux user for each user ID and map it to a Windows user ID.

https://www.samba.org/samba/docs/using_samba/ch09.html

You can do groups, and map Windows user names to the usually shorter Linux names. I don't have a lot of users, so I usually just set the clients up for them. I don't really map Windows users, I just set a saved mount up for them. It's not super secure, but they only have access to a couple directories and everything's backed up, so it's not a huge risk. For the file-share only users, I disable the Linux shell logins. The only thing they can do is access the samba shares.

To make management a little simpler, you could install Webmin or similar. I stick to pretty basic configurations and just use a text editor. I generally do a shared folder everyone has read/write on, and home directories that are private for the user. Put them all in one ZFS dataset, or a couple, and it makes it easy to keep them backed up. It's pretty easy for me, I have all of 4 users. :)
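For the share-only users it boils down to something like this (the user names are just examples):

Code:
# create a share-only user with no shell login, then give it a Samba password
useradd -M -s /usr/sbin/nologin alice
smbpasswd -a alice
# optional: map a longer Windows name to the Linux name in the file
# referenced by "username map" in smb.conf, e.g.
#   alice = "Alice Smith"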