btrfs: sector size and other musings

J--

Active Member
Aug 13, 2016
I'm running btrfs in RAID10. I chose it over ZFS (which I'm also running concurrently) because of its expandability, but I'm starting to rethink that decision.

I have a couple of issues right now. First, the drives have mismatched sector sizes, which I don't understand: I created the array with a pair of 4K-sector drives and later added a drive with 512b physical sectors, so why was it allowed to stay at 512b?

How do I even change the sector size? On ZFS you can set ashift=12, but I don't see an equivalent for btrfs. And why does it default to 512b?
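(For reference, a rough sketch of how to tell the two apart, since the drive's advertised sector size and the filesystem's sectorsize are separate things; the device name here is just an example.)

Code:
# What the drive itself advertises (logical, then physical sector size)
blockdev --getss --getpbsz /dev/sdb

# What the filesystem was created with -- as far as I can tell this is
# fixed at mkfs time (mkfs.btrfs --sectorsize) and can't be changed later
btrfs inspect-internal dump-super /dev/sdb | grep sectorsize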


Second, I thought RAID10 would let me read from all four drives at once on sequential transfers (and really on any transfer), yet I only see reads on two drives in iostat, and the same on the activity lights.

What gives?
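(A rough way to test this, with example paths; my understanding is that btrfs picks which mirror copy to read based on the reading process's PID, so a single reader only ever touches one copy, i.e. two of the four drives.)

Code:
# One sequential reader -- only two of the four drives show reads here
dd if=/mnt/btrfs/bigfile1 of=/dev/null bs=1M &

# A second reader with a different PID may land on the other copy,
# which is when all four drives should light up
dd if=/mnt/btrfs/bigfile2 of=/dev/null bs=1M &

# Watch per-device throughput while both run
iostat -m 1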


sdb, sdc, sde, and sdf make up the source btrfs array (2TB spinners); sda and sdd are the targets (2x 800GB SSDs in a ZFS mirror).

iostat -m:

Code:
Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sdb               0.00         0.00         0.00          0          0
sdc               0.00         0.00         0.00          0          0
sde             181.00       181.25         0.00        181          0
sdf             181.00       181.25         0.00        181          0
sdg              46.00         0.00         0.62          0          0
sdh              48.00         0.00         0.62          0          0
zd0               0.00         0.00         0.00          0          0
sda            3016.00         0.00       374.19          0        374
sdd            3000.00         0.00       372.30          0        372
sdi               0.00         0.00         0.00          0          0
btrfs fi usage:

Code:
Overall:
    Device size:                   7.28TiB
    Device allocated:              4.21TiB
    Device unallocated:            3.06TiB
    Device missing:                  0.00B
    Used:                          4.21TiB
    Free (estimated):              1.53TiB      (min: 1.53TiB)
    Data ratio:                       2.00
    Metadata ratio:                   2.00
    Global reserve:              512.00MiB      (used: 0.00B)

Data,RAID10: Size:2.10TiB, Used:2.10TiB
   /dev/sdb      538.50GiB
   /dev/sdc      538.50GiB
   /dev/sde      538.50GiB
   /dev/sdf      538.50GiB

Metadata,RAID10: Size:4.00GiB, Used:2.30GiB
   /dev/sdb        1.00GiB
   /dev/sdc        1.00GiB
   /dev/sde        1.00GiB
   /dev/sdf        1.00GiB

System,RAID10: Size:64.00MiB, Used:256.00KiB
   /dev/sdb       16.00MiB
   /dev/sdc       16.00MiB
   /dev/sde       16.00MiB
   /dev/sdf       16.00MiB

Unallocated:
   /dev/sdb        1.29TiB
   /dev/sdc        1.29TiB
   /dev/sde        1.29TiB
   /dev/sdf        1.29TiB
 

MiniKnight

Well-Known Member
Mar 30, 2012
@J-- Stories like this keep me on ZFS. Thanks for sharing. I'd switch to all-4K or 512e drives if I were stuck like that.
 

J--

Active Member
Aug 13, 2016
I'm ditching this for now. Maybe I'll revisit it in another five years, since I do like some of the concepts, but right now it still seems half-baked.

I'm trying to repurpose the drives two at a time to put back into my zpool, and removing a single 2TB drive from a four-drive RAID10 array is literally a two-day process. RAID10 won't let you drop below four drives (naturally), so I'm converting to RAID1 first. That conversion took more than half a day (I had to run it twice for some reason, because the first pass left a straggler RAID0 chunk), and deleting the device from the array has taken about 12 hours so far.
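(For context, roughly the sequence I'm describing; the mount point and device name are placeholders.)

Code:
# Convert data and metadata from RAID10 to RAID1 so the array can drop
# below four devices
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/btrfs

# Check for leftover block groups still in RAID10/RAID0 -- the straggler
# that forced my second balance pass showed up here
btrfs fi usage /mnt/btrfs

# Remove the drive; this kicks off another long relocation
btrfs device remove /dev/sdX /mnt/btrfs

# Progress shows up as the device's allocated space shrinking
btrfs device usage /mnt/btrfs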

Clearly the device-delete and conversion paths weren't designed with performance in mind. What bugs me more is that the drives are only reading/writing at 20 MB/s with 1% CPU usage, so what is it waiting for?!