What drive configuration & FS for a home server using Proxmox and BTRFS (hopefully)


el_pedr0

Member
Sep 6, 2016
44
1
8
47
Hi all,

I'd love to run a Proxmox setup for my home server and really like the flexibility that BTRFS seems to offer for my personal files and media, but I know that BTRFS isn't great with VMs (by default) nor integrated into Proxmox. So I'm wondering whether I can do some sort of mix of ZFS and BTRFS and how to configure the drives.

1) Can I put Proxmox on a ZFS Raid 1 array of two drives, and then put all my data on a different BTRFS raid 1 array?

2) Should I install Proxmox on SSDs or HDDs? I understand that Proxmox does not run from memory, so it would kill a USB stick, but I don't know whether that also applies to SSDs.

3) Should I put my VMs/Containers on SSDs or HDDs? I think I read that performance is better if the VMs/containers are on SSDs.

4) Should I have a separate SSD for a ZFS cache?

5) Based on the answers above, what's my best drive configuration? My focus is getting the foundation set up right, so I don't mind buying a couple of extra drives now (SSDs or HDDs). But I would prefer to buy the disks for my data as and when required rather than trying to anticipate my future storage needs now, hence my strong preference for BTRFS over ZFS for my media & files. Also, I'm hoping to need only the 8 SATA ports on my MB rather than having to buy a controller. So I'm hoping I can get away with two ports for the OS (and VMs?) and have the other 6 for the data drives.

Hardware already owned:
CPU - Intel Xeon E3-1240 V5
MB - Supermicro X11SSM-F
Memory - 32GB DDR4 (comprising 2 * 16GB DDR4-2133 sticks)
HDDs - 1 * 2TB WD Green, 1 * 3TB WD Red, 1 * 2TB Seagate

Uses Currently planned (in various VMs/containers)
Media server (Plex, subsonic/MPD)
Media recorder (Mythtv/tvheadend)
Act as remote backup for family (Crashplan)
Create local backups to external drives (Crashplan)
Home security (zoneminder)
File server (Samba, Owncloud)
 

Keljian

Active Member
Sep 9, 2015
428
71
28
Melbourne Australia
So long as you add drives in pairs (or larger sets), you can expand ZFS: basically, you add RAID 1 pairs as you need more storage.
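A rough sketch of what that looks like (pool name and device paths are just placeholders, use your own disk IDs):

Code:
# start with one mirrored pair
zpool create tank mirror /dev/disk/by-id/ata-DISK_A /dev/disk/by-id/ata-DISK_B
# later, when you need more space, stripe in another mirrored pair
zpool add tank mirror /dev/disk/by-id/ata-DISK_C /dev/disk/by-id/ata-DISK_D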

BTRFS isn't ready for prime time. It just isn't. I wish it were, but it isn't.
 
  • Like
Reactions: el_pedr0

ttabbal

Active Member
Mar 10, 2016
743
207
43
47
As you seem to be fine with mirrors, ZFS is the better choice. It's every bit as flexible as anything else in that mode, and much more mature. If you were wanting to use parity modes, other options are more flexible.

If you don't mind the price of SSDs, use a mirrored pair for Proxmox and the VM images. HDDs work, and for what you mentioned would likely perform fine, but SSDs are faster. SSDs don't die any faster than HDDs as OS drives in my experience; USB sticks I haven't had great luck with. I use a pair of laptop HDDs in a ZFS mirror for the Proxmox host and VM images, because they were already sitting there, and performance is fine for my needs. With the price of used enterprise SSDs dropping fast, I'll probably get some smaller units for the OS/VMs in the next year or so.

For data storage, if you want to use the drives you already have, I would mirror the two 2TB drives and add another 3TB to mirror with the one you have, giving 5TB usable. From there you can add 2 more drives if needed and still not need an external controller. After that, you would need to either add a controller or upgrade existing drives instead, which is very easy with ZFS mirrors.
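Roughly like this (I'd use the /dev/disk/by-id paths of your actual drives; the names here are placeholders):

Code:
zpool create data \
  mirror /dev/disk/by-id/ata-2TB_GREEN /dev/disk/by-id/ata-2TB_SEAGATE \
  mirror /dev/disk/by-id/ata-3TB_RED   /dev/disk/by-id/ata-3TB_NEW
zpool list data   # two mirror vdevs, about 5TB usable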

If you haven't already, do the mod on that WD Green to disable the auto-parking thing it has, to prolong its life.
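If I remember right, the idle3-tools package can do it from Linux (device name is a placeholder; the drive needs a full power cycle afterwards):

Code:
idle3ctl -g /dev/sdX   # show the current idle3 (head-parking) timer
idle3ctl -d /dev/sdX   # disable it, then power the drive off and back on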
 
  • Like
Reactions: el_pedr0

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
BTRFS isn't ready for prime time. It just isn't. I wish it were, but it isn't.
The big issue right now is still BTRFS RAID 5/6. If you are using RAID 1 or 10, you may as well go ZFS with Proxmox since it is built-in.
  1. You can ZFS RAID 1 the boot SSDs
  2. I use SSDs. I would actually get ~120GB or larger SSDs given today's pricing; the extra capacity is nice because sometimes you will want to download ISOs, transfer a VM, etc. Get something like an S3500 for this.
  3. SSDs if you can.
  4. If you run an all-SSD array, you can skip the SSD cache. For a hard drive array it may help, but you can add it later; you can also add an NVMe drive as a cache (rough commands after this list).
  5. I would put VMs on SSDs if possible. It does make a big difference.
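To give an idea (the Proxmox installer creates the boot pool as "rpool" when you pick ZFS RAID 1; the data pool name and NVMe device here are placeholders):

Code:
zpool status rpool                  # the mirrored boot/OS pool from the installer
zpool add tank cache /dev/nvme0n1   # add an L2ARC cache device to a data pool later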
 

el_pedr0

Member
Sep 6, 2016
44
1
8
47
As you seem to be fine with mirrors, ZFS is the better choice.
Given that I only have a few disks right now, the overall storage offered by RAID1 and RAID6 is about the same. So in my overly optimistic world, I would have used BTRFS RAID1 now and then effortlessly changed to BTRFS RAID6 at a later date when I needed to add more disks, at which point RAID6 would give me more storage space. (By which time BTRFS RAID6 would be stable, because the world is a happy place.)

If you were wanting to use parity modes, other options are more flexible.
Following on from above, my aim was to have a parity mode when the number of disks meant that it would yield greater storage space than a mirror. So what other options did you have in mind?

I like your suggestion of adding one 3TB drive to my current collection. The pro is minimal outlay now, but the con is that it locks me onto a path of mirroring and therefore reduces my eventual storage utilisation. I'll mull that one over.

If you haven't already, do the mod on that WD green to disable to auto-parking thing it has to prolong its life.
I didn't know about this. Thanks.
 

el_pedr0

Member
Sep 6, 2016
44
1
8
47
@ttabbal @Patrick What are the benefits of enterprise SSDs, and are they justified in a home server environment (asking because I genuinely don't know)? What are your thoughts on something like a pair of Samsung 850 EVOs instead?
 

ttabbal

Active Member
Mar 10, 2016
743
207
43
47
Restripe to parity from a mirror? Yeah, even if it's "supported", I don't think I'd trust it without a full backup. And at that point, you can just create a new array and copy the data.

Unraid/snapraid are the alternative suggestions I hear all the time. I can't provide much more than that though, as I only run ZFS.

For what it's worth, I went from two 6-disk raidz2s (RAID 60 equivalent) to striped mirrors. My biggest reason was that I could upgrade 2 drives for a storage increase, get more performance (10gbit), handle more client machines, and not lose the advanced protection of ZFS. My setup would occasionally stutter on media playback when I had about 4 machines going along with the other stuff the box does. There are pros and cons to all the options, so you kind of have to decide what will work for you. Personally, I want a filesystem to have at least 5 years of stable history before I trust it. In 5 years a lot can change, and BTRFS isn't stable yet for parity modes.
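The two-drive upgrade path with mirrors is roughly this (pool and disk names are placeholders; let each resilver finish before touching the next disk):

Code:
zpool set autoexpand=on tank
zpool replace tank ata-OLD_2TB_A ata-NEW_6TB_A
zpool replace tank ata-OLD_2TB_B ata-NEW_6TB_B   # after the first resilver completes
# once both disks in the mirror are replaced, the vdev grows automatically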

As for enterprise SSDs, they last longer, run faster, and can actually meet their specs. Consumer drives generally fall over in sustained high-load situations, like a VM store. I'd take a pair of used S3500s over a pair of new 850 EVOs any day of the week. Enterprise drives also include features like power-loss protection that consumer drives often leave out. And they tend to cost less, and even though they are used, they generally have a ton of life left. As with any drive, I would thoroughly test them, but I test new drives out of the box too. Even SSDs :)
 
  • Like
Reactions: el_pedr0

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
@el_pedr0 - @ttabbal summed it up nicely. This may be another good read: Used enterprise SSDs: Dissecting our production SSD population

And on 850 EVOs v. data center SSDs:
Used S3700 800GB drives $250 ea: $249.99 FS: Intel DC S3700 800GB Model SSDSC2BA800G301

Those have something like 14.6PB of rated write endurance, versus about 150TB for the 850 EVO, so roughly 100x the write endurance at similar price points. They will be used, but I would be shocked if they had even 1PB written of the 14.6PB rated.
 
  • Like
Reactions: el_pedr0

el_pedr0

Member
Sep 6, 2016
44
1
8
47
Hmm, I'm UK based, but have a family member in the US right now who's coming back in 2 weeks. Maybe I should get those S3700 drives shipped to her.

I'm brand new to enterprise SSDs, but a quick look on eBay suggests I can get a 240GB S3500 for around £80 ($110) or a 300GB S3500 for around £105 ($140). So your deal looks amazing value. They're manufacturer refurbished too - does that count for much? The eBay listing doesn't mention a warranty - does that matter?

Even though it seems to be hugely overkill for my purpose, I can easily persuade myself that it helps future-proof my set up.

I hope you don't mind me relying a bit on your experience and judgement because I'm at work at the moment so can't do the proper research, but also don't want to miss this deal if you think it's a no-brainer.
 

TuxDude

Well-Known Member
Sep 17, 2011
616
338
63
Restripe to parity from a mirror? yeah, even if it's "supported", I don't think I'd trust it without a full backup. And at that point, you can just create a new array and copy the data.
One of the features of btrfs is that it does support on-the-fly changes in RAID level like that. At least, it is as supported as the different RAID levels themselves are, so it's not recommended with the parity RAIDs right now. Theoretically, btrfs can even support different RAID levels for different files within the same filesystem - so your media files could be RAID 6 while your VM disk files are RAID 1 - there's just no way to configure it to do that yet.

But that is one of the advantages of mixing the RAID engine and the filesystem together, as ZFS also does: if you need to repair after a failed disk, you only process the actual data that was stored on it, not the entire raw capacity. And if you want to restripe your data to a different RAID level, you also only process the actual data you are converting, and you can use the remaining free space on the disks to do it in. With copy-on-write, you're really making a new copy of the data at the new RAID level before marking the old location as unused. A far safer restripe procedure than e.g. mdadm's RAID restriping abilities.
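For example, converting an existing btrfs filesystem to a different profile is a single online operation (the mount point is a placeholder):

Code:
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/data   # convert data and metadata profiles
btrfs balance status /mnt/data                                  # check conversion progress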
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
I hope you don't mind me relying a bit on your experience and judgement because I'm at work at the moment so can't do the proper research, but also don't want to miss this deal if you think it's a no-brainer.
I think the Intel DC S3700 drives have super low failure rates in the field. I have heard sub 0.5% being thrown around.

Actually 0.44% AFR. It had this footnote in the presentation Intel sent us earlier in 2016:
Annual Failure Rate. Source -Intel. Intel SSD Annualized Fail Rate Report for all of 2015. Intel® SSD DC S3500, S3700, P3700.
I also have seen Intel state AFR of sub 0.2% for their enterprise SSDs.
 
Last edited:
  • Like
Reactions: el_pedr0

el_pedr0

Member
Sep 6, 2016
44
1
8
47
I also have seen Intel state AFR of sub 0.2% for their enterprise SSDs.
With failure rates that low, is RAID on my OS drives pointless? Maybe I should just run one OS drive, given the low chance of failure and that it's only for a home environment.
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
1,394
511
113
With failure rates that low, is RAID on my OS drives pointless? Maybe I should just run one OS drive, given the low chance of failure and that it's only for a home environment.
Pointless is a very strong word - I think it's fairer to say that "more reliable storage devices change the balance of the cost/benefit/complexity analysis I'm doing when speccing this system" :) Remember that RAID only really buys you extra uptime in the event of device failure, and it's up to your priorities and budget whether that's needed.

IMHO it's always better to spend the money on better backups before you splurge on RAID, although it depends whether you need to make sure SWMBO has access to their shows - I certainly wouldn't want to have to wait a week for a replacement to arrive. Personally I've got RAID on all bits of my servers, including the OS, but just a single SSD in my workstation.
 
  • Like
Reactions: el_pedr0

ttabbal

Active Member
Mar 10, 2016
743
207
43
47
Mirrors give you redundancy, true. They also increase read speed, particularly for random reads. This can be a big benefit for things like VM storage, which is almost all random I/O. I run even SSDs in mirrors most of the time, because I don't want to deal with downtime: I can have a drive die and, even if I have to order in a replacement, the system keeps running. Like the previous post, workstations, particularly laptops, are generally single SSD, but servers get mirrors. I also have backups.

That reminds me.... I don't think my VM storage is getting backed up properly.... wanders off grumbling about his stupid server admin....
 
  • Like
Reactions: el_pedr0

el_pedr0

Member
Sep 6, 2016
44
1
8
47
Frustratingly I couldn't get all my ducks in a row to buy that great deal on the s3700s. (Not having a US credit card proved the sticking point, then trying to load a wallet with bitcoins took too long and they got removed from my cart :( ).

However, I have seen 400GB S3700s for approximately $160 here in Europe. The only thing is that they are 1.8" micro SATA (SSDSC1NA400G3), so I would have to buy adapters and enclosures to mount them in my Fractal Design R5 case, which takes 2.5" SATA.

Would adding an adapter introduce any significant risk of possible failure or data loss in my system?

Edit: Also, the item description for the 1.8" 400GB drives says that the usage counter is at 100%. Is that really good, or really bad?
 
Last edited:

aero

Active Member
Apr 27, 2016
346
86
28
54
You should probably get clarification from the seller, but the value is probably referring to SMART attribute 233, Media Wearout Indicator. 100 would mean it has 100% of its life remaining.
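You can check it yourself with smartctl once you have the drive in hand (device name is a placeholder):

Code:
smartctl -A /dev/sdX   # on Intel SSDs, attribute 233 is Media_Wearout_Indicator; a normalized value of 100 means roughly full life remaining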
 
  • Like
Reactions: el_pedr0

el_pedr0

Member
Sep 6, 2016
44
1
8
47
I managed to sort out some bitcoins and get a couple of those s3700 drives! Great.

You've convinced me to go for enterprise grade OS drives in Raid 1 config. You've also convinced me to drop my plans for BTRFS (for now) and go for mirrored pools on ZFS.

So what with all the drives and raid 1, I figure I'm now something like $600 worse off and 3TB capacity down! ;)

But, more seriously, really looking forward to getting this thing set up when my mother in law comes back with my SSDs from the US in a couple of weeks. I'll probably be back with tons of questions regarding Proxmox, VMs, memory allocation, backup methods...

Thanks for your help so far.

Any instant advice that pops into your head before I get going (even the blindingly obvious)?
 

Keljian

Active Member
Sep 9, 2015
428
71
28
Melbourne Australia
Urm... get ECC RAM if you can afford it (not listed above in your posts).

Mirrored pools make a lot of sense if you have the intention of adding drives.

Re Backups, usually things fall into one of 4 categories:
1. Not super critical if you lose
2. Mildly annoying but recoverable/rebuildable if you lose
3. Pain in the butt to lose, but not critical
4. Absolutely cannot lose.

Back up category 4 first, then worry about the rest. RAID 1 in a spare machine (ideally ZFS) plus a cloud backup is "ideal".

Might be worth having a separate ZFS pool for your "Level 4" stuff which is RAID 1 across 3 disks (I'd hate for a rebuild to take out the last working disk of a pair). S3700s or other SSDs are ideal.
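In ZFS terms that's just a three-way mirror, something like this (pool and device names are placeholders):

Code:
zpool create level4 mirror /dev/disk/by-id/ata-SSD_A /dev/disk/by-id/ata-SSD_B /dev/disk/by-id/ata-SSD_C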
 
  • Like
Reactions: el_pedr0

vl1969

Active Member
Feb 5, 2014
634
76
28
Is there a status update on this build :) ?

I am curious on how it all stacked up in the end.

I am currently running an OMV file server but want to move to Proxmox as the host, to have a VM file server alongside some other VMs.

My problem is that I am not sure how to get the kind of easy disk management with Proxmox that I have with OMV. I know that VE 4.3 and up have built-in SMART support, so I can have disk health monitoring on the host, but how do I manage the data and shares properly and easily, preferably with a WebUI?

PS>> Please understand that ZFS is not a good option for me, as I have a mix of drives of different sizes etc. The system will be running on a RAID-1 ZFS pool, using 2x 120GB SSDs, as that is supported by the Proxmox installer now, but the data drives will probably be BTRFS RAID-1 over multiple mixed devices.
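For the data drives, I am thinking of something along these lines (device names are placeholders; BTRFS RAID-1 is fine with mixed-size disks):

Code:
mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc /dev/sdd
mount /dev/sdb /mnt/data
btrfs filesystem usage /mnt/data   # shows usable space across the mixed-size devices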
I am thinking I will also have a RAID-1 ZFS pool of 2x 2TB drives for local storage, to host my VMs and VM-related data like ISO images, VM images and drives etc. I know I am losing some space and maybe speed, but at the moment I am OK with that; it is a low-demand home system.
 
Last edited:

el_pedr0

Member
Sep 6, 2016
44
1
8
47
Sure. But two disclosures upfront: 1) I went for ZFS in the end (choosing to spend more money, on the hope of less frustration/time in the long run). 2) I'm still very much learning.

Proxmox is great. Very easy to create new VMs, very easy to monitor resource usage (both of the server and each of the VMs), very easy to add memory to a VM. The only hypervisor I'd previously ever used was virtualbox from the command line on very old hardware, so the difference in experiences is like night and day.

ZFS is brilliantly simple (at least for my simple setup), particularly as I've gone for RAID 1 mirrors for my OS & VMs and all of my data, which does away with much of the headache of how to expand the vdevs/pools - albeit at much greater expense. So far it seems like it's going to be a breeze to manage the disks. The only major gripe is that Proxmox limits you to 9 bind mounts per container. This can be very limiting if you've got a bunch of ZFS datasets, even if they are nested, because Proxmox doesn't traverse datasets in a bind mount (i.e. if you mount a parent dataset, you don't get access to child datasets unless you bind mount them too). So I ended up setting up a Samba server on the host, kinda marring the beauty of having just Proxmox on the host and nothing else. For many of my VMs, though, I don't need many datasets, so I can just use the bind mounts and not the Samba server. Perhaps this is one area where going for ZFS has saved me a bit of headache, given that ZFS datasets are all easily accessible within Proxmox - maybe that's one of the benefits of Proxmox's ZFS support, I'm not sure.
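For reference, a bind mount gets added to a container something like this (the container ID, dataset paths and target paths are just examples from memory):

Code:
pct set 101 -mp0 /tank/media,mp=/mnt/media
pct set 101 -mp1 /tank/media/photos,mp=/mnt/photos   # child datasets need their own mount points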

Generally I'm loving it - though the MB died within 6 months, and so the server has been down for the past month while Supermicro sends me another (in the post today, reportedly :) ). I'm under the impression that I'll just plug everything back into the new MB and it will start up as if nothing ever went wrong - particularly if I plug the HDDs into the same SATA port numbers as before (again, maybe this is another benefit of ZFS?).

I didn't want to go down the ZFS route, because it added around £300-400 to my build on additional HDDs, and will make future disk space twice as expensive. But it's not too bad given that 4TB WD reds are sometimes available at £110 and normally at £135 ish. Overall, I'm telling myself I will make it back in time and worry saved.

No matter how much I didn't want to believe it, my research into BTRFS always ended up hitting the buffers - partly because of weaker support (Proxmox being one example), but mainly because of the warnings that it's just not as battle-hardened.