Can't figure out best drive layout with BTRFS


Ch33rios

Member
Nov 29, 2016
So perhaps it's my OCD, but I can't quite figure out how to best lay out my BTRFS setup with my various drives:

2 x HGST 2TB 7200RPM (currently RAID0 - transplants from old system)
2 x WD Red 3TB 5400 RPM (currently RAID1)
1 x Seagate 3TB 5900 RPM (transplant from an old system)

Note that I am essentially using this setup, but instead of FreeNAS it's a BTRFS-based NAS (Rockstor...I like it!), and I'm just doing raw device mapping of the drives since I didn't have an HBA card.

I currently have the HGST RAID0 array acting as an NFS datastore for my ESXi host, and while that is working absolutely fine, I wanted to figure out how I can leverage the RAID1 pair to get at least some sort of data 'high availability' in case the RAID0 fails.

I'm sure some are thinking RAID10, but wouldn't that potentially be an overall reduction in performance? That, and wouldn't I lose additional space on the 3TB Red drives, since the smallest drives in the array would be 2TB? I could just create a cron job on my BTRFS host that rsyncs the data from the RAID0 pool to the RAID1 pool, but again, I feel like I'm not doing this in the most efficient way possible.
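
For what it's worth, the cron approach would just be something along these lines; the mount points, destination folder, and schedule here are made-up placeholders for illustration, not Rockstor's actual paths:

# /etc/cron.d/pool-sync - nightly one-way copy from the RAID0 pool to the RAID1 pool
# (placeholder mount points - adjust to the real share paths)
0 3 * * * root rsync -aH --delete /mnt/raid0/ /mnt/raid1/backup/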

I just can't seem to make up my mind, so I figured I'd present my quandary to y'all to get some more experienced input. I know it's a sort of mish-mash of drives, but this is my home lab and it's what I've got available at the moment.
 

vl1969

Active Member
Feb 5, 2014
I think you are missing the point of how BTRFS works.
It is not RAID per drive, as regular RAID and ZFS do; it is RAID per chunk.

With normal RAID, you cannot build a single RAID10 pool out of these drives at all, since it needs four devices of equal size (let alone equal specs).

ZFS would be a no-go as well, since it needs same-size devices within a vdev. So you could have one vdev = 2x2TB in a mirror and one vdev = 2x3TB in a mirror, but RAID10 across everything is out of the question.

With BTRFS you do not have these limitations. In your case:
2 x 2TB = 4TB
3 x 3TB = 9TB
for a total of 13TB of raw capacity.
RAID1 or RAID10 cuts that in half, which would yield about 6.5TB of usable space in either config (see the btrfs space allocator).
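
Once a pool exists you can also see how the allocator is actually laying out chunks per profile; a quick check, assuming a placeholder mount point of /mnt/pool:

btrfs filesystem df /mnt/pool       # chunk allocation per profile (data/metadata/system)
btrfs filesystem usage /mnt/pool    # raw vs. allocated vs. estimated free space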

Performance-wise, in my research BTRFS RAID1 or RAID10 was mostly OK; not sure what your needs/expectations are, though.

What's more, with BTRFS, if your drives are not too full right now, you can build the new RAID and move to it essentially online: start with the two 3TB drives in RAID1, move the data off the third,
clear it, add it to the pool, rinse, repeat :)

Once all the drives are in the pool, do a balance convert to RAID10 and you're done :)
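
A minimal sketch of that sequence, assuming the pool gets mounted at /mnt/pool and using made-up device names (/dev/sdb etc.) - adjust to your actual drives:

mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc   # two 3TB drives to start
mount /dev/sdb /mnt/pool
# copy the data off the next drive into the pool, wipe it, then grow the pool:
btrfs device add /dev/sdd /mnt/pool
# repeat for the remaining drives, then convert everything to raid10:
btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt/pool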
 

Ch33rios

Member
Nov 29, 2016
Hey, thanks for the info. You're correct that I'm not fully familiar with BTRFS, but hey, that's why I ask these questions :)

My primary goal was really to get the increased performance of a RAID0 setup while at the same time having the data resiliency/high availability of a RAID1 setup. My 'traditional' thinking was that I needed to keep like-for-like drives together, but it seems BTRFS is really meant to transcend that.

Performance right now on the D: drive in my Windows machine (a secondary VMDK stored on the NFS datastore backed by the RAID0 HGST drives) is really good, based on CrystalDiskMark scoring...
[Attached screenshot: CrystalDiskMark.PNG]

If I got even close to that under a RAID10 setup, that'd be pretty sweet for sure :) I guess there's no way to know unless I try it out!
 

vl1969

Active Member
Feb 5, 2014
Yes, there is no way to know until you try it.
I do not sweat the performance of my data pool much, as it is a home file server without much usage overall.
Mostly it is serving media to the nearest HTPC :)
 

Ch33rios

Member
Nov 29, 2016
I have most of my OS VMDKs sitting on SSD datastores in ESXi, but for some of my stuff (e.g. the 'games' drive in my Win10 VM) I was looking at dropping that onto the NAS datastore, especially considering the read speed of that NFS datastore is so good.

Either way, I'll give it a shot!
 

Ch33rios

Member
Nov 29, 2016
So I've added the WD Reds to the RAID0 pool with the HGSTs and ran another test. Read results are just about the same, whereas write speed seems to have taken a small hit. Not a huge deal, as performance is still very nice.

[Attached screenshot: RAID0 - 4 disks.png]

I'm copying my media to another location before switching the pool's RAID setup, just to be sure :)
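
In case anyone wants the mechanics, growing the pool and the later conversion boil down to something like this; device names and the mount point are placeholders, not my actual setup:

btrfs device add /dev/sde /dev/sdf /mnt/pool                       # add the two Reds to the existing pool
btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt/pool    # later, once the safety copy is done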