GlusterFS and "RAID10" sanity check


ArmedAviator

FWIW, this is for a regularly backed-up home media setup.

I've been running Gluster on two Supermicro 2U X8DTN+ servers with a hodgepodge of 6TB and 8TB SATA and SAS drives, plus one arbiter server on a Proxmox VM. The servers are set up with Btrfs RAID5 and it has worked well so far: the arrays survived two separate complete HDD failures, plus another incident where a drive developed multiple bad sectors, all with zero data loss. That track record makes me willing to take on a bit more risk, leaving the redundancy to Gluster while gaining performance from the backing store (Btrfs RAID0 instead of RAID5).

I have been considering transitioning to Btrfs RAID0 bricks for more throughput and much faster Btrfs scrubs. I propose the following layout (with a rough brick-prep sketch after the list):
Server 1:
bricka1 - Btrfs m=raid1 d=raid0 - 16TB
brickb1 - Btrfs m=raid1 d=raid0 - 16TB

Server 2:
bricka2 - Btrfs m=raid1 d=raid0 - 16TB
brickb2 - Btrfs m=raid1 d=raid0 - 16TB

Server 3: (arbiter VM) (Host: R710 w/ H700 RAID10 SAS, XFS)
arbitera - XFS - 2GB shared
arbiterb - XFS - 2GB shared
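
For reference, a minimal sketch of how I'd build each 16TB brick from two 8TB drives; the device names and mount point below are placeholders, not my actual layout:

Code:
# metadata mirrored across both drives, data striped for throughput
mkfs.btrfs -m raid1 -d raid0 /dev/sdb /dev/sdc
mkdir -p /bricks/bricka1
# mounting either member device brings up the whole multi-device filesystem
mount /dev/sdb /bricks/bricka1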

I believe what I'd want to do would be:

Code:
gluster volume create myvolume replica 3 arbiter 1 \
server1:/bricks/bricka1 server2:/bricks/bricka2 server3:/bricks/arbitera \
server1:/bricks/brickb1 server2:/bricks/brickb2 server3:/bricks/arbiterb
From my limited understanding, I'd end up with every file replicated across both servers (with the arbiter holding only metadata for quorum) and no single point of failure. Can anyone confirm this theory?
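
Once it's created, I'd sanity-check the layout with the standard CLI before loading data:

Code:
gluster volume info myvolume        # should report Number of Bricks: 2 x (2 + 1) = 6
gluster volume status myvolume      # all bricks and self-heal daemons online
gluster volume heal myvolume info   # zero pending heal entries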
 

Sean Ho (seanho.com)

How about replicated mode with sharding and 1 brick == 1 drive? Something like (assuming 4 drives per server):
Code:
gluster volume create volname replica 3 arbiter 1 \
s1:/b1 s2:/b1 a:/a1 \
s1:/b2 s2:/b2 a:/a2 \
s1:/b3 s2:/b3 a:/a3 \
s1:/b4 s2:/b4 a:/a4

gluster volume set volname features.shard enable
gluster volume set volname features.shard-block-size 512MB # or smaller
Reference: 5.5. Creating Replicated Volumes, Red Hat Gluster Storage 3.5 (Red Hat Customer Portal)
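
To confirm the options took effect afterwards, gluster volume get works:

Code:
gluster volume get volname features.shard
gluster volume get volname features.shard-block-size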
 

ArmedAviator

What if all of my disks vary in size? I have 4TB, 6TB, and 8TB drives. If I make sure each mirrored pair of bricks is identical in size, will that suffice, even if bricks on the same server differ in size?

To set this up, I'd have to destroy my 24TB of Gluster data and restore from a backup server, so I'm hesitant to just try it.
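
Maybe I could at least rehearse the topology risk-free first with throwaway bricks on directories (the paths and volume name below are made up; Gluster wants dedicated brick filesystems, hence the force at the end):

Code:
gluster volume create testvol replica 3 arbiter 1 \
server1:/tmp/tb1 server2:/tmp/tb1 server3:/tmp/ta1 \
server1:/tmp/tb2 server2:/tmp/tb2 server3:/tmp/ta2 force
gluster volume start testvol
gluster volume info testvol
gluster volume stop testvol && gluster volume delete testvol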