BTRFS


neo

Well-Known Member
Mar 18, 2015
672
363
63
I'm waiting for it to come out of beta before I touch it. Still many facets not fully implemented.
 

rubylaser

Active Member
Jan 4, 2013
846
236
43
Michigan, USA
No, I need RAID6 at a minimum, and although it's finally implemented, I want to let it bake for a while before I trust my data to it. When it's tested and feature complete, its flexibility will be awesome. Right now I use SnapRAID for bulk media storage and ZFS on Linux for my VM storage or anything I need greater-than-single-disk speeds for. This combo has been awesome for my home use.
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
1,394
511
113
Also holding fire on btrfs for reasons of instability. I'd be content to even run it plain on top of mdadm and LVM but I still keep reading stories of people who encounter $situation and end up losing a filesystem whilst ext4 is exceedingly difficult to kill. Keeping a very close eye on it however and using it in several VMs.
 

vl1969

Active Member
Feb 5, 2014
634
76
28
I have been running my home server on a btrfs pool for the last 2+ years with no problems, but I did lose 2 TB worth of data last week when my 2-drive pool crashed dead. It was not entirely btrfs's fault, as I had a RAM issue and did not know until it was too late. I also think I found a bug in btrfs RAID when you use raw devices.
 

rubylaser

Active Member
Jan 4, 2013
846
236
43
Michigan, USA
I have been running my home server on a btrfs pool for the last 2+ years with no problems, but I did lose 2 TB worth of data last week when my 2-drive pool crashed dead. It was not entirely btrfs's fault, as I had a RAM issue and did not know until it was too late. I also think I found a bug in btrfs RAID when you use raw devices.
That sucks, but is a good indicator of why I won't trust my data to it yet. What was your pool configuration (RAID1/5/6)?
 

vl1969

Active Member
Feb 5, 2014
634
76
28
I had 2x 3TB drives in RAID1.
One drive got borked; my guess is just a bad drive. But there is a bug in btrfs when you use raw devices: only the first device is safe. If the first device dies, the whole pool is dead. By first device I mean the first device in the list you use to create the pool, i.e. the mkfs.btrfs dev1 dev2 dev3 command. When you use raw, unpartitioned disks, only dev1 is safe; if it goes, the whole pool is dead. If you prepare the disks first with parted or some other tool, create a partition table and a primary partition, and then use the partitions to build the pool, it works as expected, and you can recover from almost any drive failure.
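Roughly what I mean, as a sketch; device names are just examples:

    # Raw, unpartitioned devices -- the layout I had trouble with:
    mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc

    # Partitioned first: GPT label and one full-size partition per disk,
    # then build the pool from the partitions instead:
    parted -s /dev/sdb mklabel gpt mkpart primary 1MiB 100%
    parted -s /dev/sdc mklabel gpt mkpart primary 1MiB 100%
    mkfs.btrfs -d raid1 -m raid1 /dev/sdb1 /dev/sdc1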
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
1,394
511
113
Is there any advantage to using raw devices rather than partitions? I ran into a bunch of people who had broken mdadm arrays where someone had used raw devices instead of creating a partition on them first, and then at the first sign of trouble some other bright spark had the idea of "recreating" their "missing" partition table as an attempted fix.

Incidentally, the number one reason we started insisting on partitioning mdadm devices was so we could be sure the sector alignment was correct.
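For example, something like this (sdX is a placeholder; a 1MiB start offset keeps things aligned regardless of the drive's physical sector size):

    # Create a GPT label and a single aligned full-size partition:
    parted -s -a optimal /dev/sdX mklabel gpt mkpart primary 1MiB 100%
    # Sanity-check the alignment before handing the partition to mdadm:
    parted /dev/sdX align-check optimal 1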

That reminds me again - the number one reason I avoided btrfs last time was that if a RAID array degraded, it would refuse to mount... so if you installed on a RAID1 as I'm usually wont to do, and one of your discs failed, your whole machine would be unable to boot without some rather convoluted manual intervention. Unacceptable IMHO.
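The convoluted intervention being something like this from a rescue shell, if memory serves; device names and the devid are hypothetical:

    # btrfs refuses to auto-mount a RAID1 with a missing member; you
    # have to allow it explicitly:
    mount -o degraded /dev/sda1 /mnt
    # Find the devid of the dead disk (devid 2 here is just an example),
    # then rebuild onto its replacement:
    btrfs filesystem show /mnt
    btrfs replace start -B 2 /dev/sdb1 /mnt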
 

vl1969

Active Member
Feb 5, 2014
634
76
28
I can only see one advantage to using raw devices: you don't need any extra steps to prepare the device. Just drop it in and add it to the pool. Since, until a couple of years back, this was one of the things only ZFS could do, the btrfs team felt like touting the option as something big.
But like I said, there is a bug in how btrfs RAID works with raw devices. I specifically tried several times to replicate the issue in a VM setup, along the lines of the sketch below. If you lose a drive from a pool built on raw devices, and that drive is what I call the master drive, i.e. the first drive used in the mkfs command, you lose the pool. It does not happen when using pre-partitioned drives, where you use the partitions to make the pool.
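If anyone wants to try reproducing it without spare disks, loop devices work fine for the test; file names and sizes are arbitrary:

    # Create two sparse backing files and attach them as loop devices:
    truncate -s 2G disk1.img disk2.img
    losetup /dev/loop0 disk1.img
    losetup /dev/loop1 disk2.img
    # Build the RAID1 pool on the raw loop devices:
    mkfs.btrfs -d raid1 -m raid1 /dev/loop0 /dev/loop1
    mount /dev/loop0 /mnt
    # ...write some test data, then simulate losing the "first" device:
    umount /mnt
    losetup -d /dev/loop0
    # See whether the survivor still mounts:
    mount -o degraded /dev/loop1 /mnt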

PS: I did not know you could use raw devices in mdadm.
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
Some days I think BTRFS is like chasing the end of a rainbow. It has been out there but not considered production ready for a long time.
 

mstone

Active Member
Mar 11, 2015
505
118
43
46
That reminds me again - the number one reason I avoided btrfs last time was that if a RAID array degraded, it would refuse to mount... so if you installed on a RAID1 as I'm usually wont to do, and one of your discs failed, your whole machine would be unable to boot without some rather convoluted manual intervention. Unacceptable IMHO.
Solaris used to have a similar feature in DiskSuite. (Way before ZFS; hilariously, disk management was part of why they needed ZFS.) You could create a mirror for your root drive, but it needed >50% of the metadata copies to decide that any given configuration was valid. So if you created a two-drive mirror, by default it wouldn't boot with only one working drive. This used to really confuse new admins who thought they were setting up something simple, like Linux md. Good times. The common theory back then was that Sun made DiskSuite as painful as possible to upsell Veritas.
 

TuxDude

Well-Known Member
Sep 17, 2011
616
338
63
I'm using btrfs here and there, though I haven't migrated my primary bulk storage over to it yet (still on SnapRAID for that). The parity-RAID support should be functional now, but it's still missing a feature that I consider required (being able to force stripe widths) and doesn't have very much testing yet. But it's getting to the point where I can see myself moving to it for my primary storage in the not-too-distant future.

I've been using it in single-device mode on my desktop, as well as RAID1 for a small 2-disk volume on my server that is used with docker and the btrfs storage backend, and those have been working great. I've also got a 3-drive btrfs RAID5 in my desktop at work that I use for bulk storage, even though I'm running an older kernel that doesn't have RAID5 fully implemented yet (I would almost certainly not recover from a failed drive). That data is also backed up to a file server at work, so I don't depend on btrfs's parity-recovery abilities; otherwise that volume has also been running without issue for over a year now.
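For reference, pointing docker at the btrfs backend just needs /var/lib/docker to live on a btrfs filesystem plus one daemon setting; roughly:

    # /etc/docker/daemon.json -- select the btrfs storage driver:
    {
      "storage-driver": "btrfs"
    }
    # Then restart the daemon and confirm it took:
    systemctl restart docker
    docker info | grep -i storage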

If you're looking to build a NAS around btrfs you could give Rockstor a try.
 

acmcool

Banned
Jun 23, 2015
610
76
28
40
Woodbury,MN
Cool... I am hoping to move one FreeNAS box to BTRFS soon, with backup staying on FreeNAS...
I am downloading Rockstor as we speak. Will try it on ESXi...
 

vl1969

Active Member
Feb 5, 2014
634
76
28
I have been using btrfs for the last 2 years without any major issues. I did find a bug when using raw devices though, and lost almost 2 TB of media over it.
 

vl1969

Active Member
Feb 5, 2014
634
76
28
No; until now I did not have enough empty disks for anything but RAID1. I had several pools of 2 disks each in RAID1. I specifically bought 2x 3TB drives to build a 3 TB pool in RAID1, move all the data from the smaller pools onto it, then add those disks to the main pool and expand the volume. Once done, I would have converted the pool to RAID5 or 6 (roughly the steps sketched below). But one of the 3 TB drives went belly-up, taking about 2 TB of media with it, before I had a chance to expand the volume and convert. It could have been worse, as I was not done moving all the data to it. Now I am building a pool on a brand new 3 TB drive and 3x 2 TB, making sure not to use raw devices for the pool. The bug seems to affect setups that use raw devices to build out a pool; when you prepare the disks with parted beforehand, creating a partition table and a partition, and then use the partitions to build the pool, it works as expected.
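The plan, once everything was moved over, was roughly this; the mount point and device names are just examples:

    # Add the freed-up disks (their partitions, not the raw devices)
    # to the existing pool:
    btrfs device add /dev/sdd1 /dev/sde1 /mnt/pool
    # Rewrite data into the new profile -- this can take many hours:
    btrfs balance start -dconvert=raid5 -mconvert=raid1 /mnt/pool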
 

TuxDude

Well-Known Member
Sep 17, 2011
616
338
63
I wonder where all these issues seemingly related to raw devices are coming from - I usually use raw devices for almost everything and have never had a problem.
 

stupidcomputers

New Member
May 27, 2013
18
19
3
Seattle, WA
www.linkedin.com
I am testing out Rockstor using a raw 16 TB LUN from my ScaleIO setup, formatted with btrfs. I did have one kernel panic, which appears to have been due to NetworkManager going crazy after 24 hours of failing to autoconfigure an unconfigured interface. After setting static IPs on all interfaces, it ingested 10+ TB of data over SMB without issue.
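For anyone who hits the same thing, pinning the interfaces looks something like this with NetworkManager's CLI; the connection name and addresses are examples:

    # Give the connection a static address and stop NetworkManager
    # from trying to autoconfigure it:
    nmcli con mod eth0 ipv4.method manual \
        ipv4.addresses 192.168.1.50/24 ipv4.gateway 192.168.1.1 \
        ipv4.dns 192.168.1.1
    nmcli con up eth0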

Picked this solution for the scheduled snapshots on the filer and the pretty graphs.