Linux Software RAID Guide


Jeggs101

Well-Known Member
Dec 29, 2010
1,529
241
63
Anyone have a good guide handy? With Ubuntu 12.04 I want to set up Linux software RAID for my media server and web hosting.
 

OBasel

Active Member
Dec 28, 2010
494
62
28
Does this work with solid-state drives? Can you mix them and use one as a cache?
 

OBasel

Active Member
Dec 28, 2010
494
62
28
Was asking if Linux software RAID (non-ZFS) does L2ARC-like caching. If not, is there a plan to support it?
 

john4200

New Member
Jan 1, 2011
152
0
0
Was asking if Linux software RAID (non-ZFS) does L2ARC-like caching. If not, is there a plan to support it?
What is "non-ZFS"?

L2ARC is a read cache, I believe, typically on an SSD to cache data from HDDs. The OP was talking about a media server, so there would be little benefit to such a cache. I think such a cache would be useful for certain database and heavy-traffic webserver applications, but not for a media server.

By the way, Linux will automatically use available RAM as a read cache (the "buffer cache"). Given that RAM is quite cheap nowadays, I think I would prefer a RAM cache over an SSD cache.
 

ricktsd

New Member
May 30, 2015
10
1
3
43
So, I'm on the verge of buying a 36-bay chassis, and have a question about RAID arrays.

Is it possible to have 12 x 4TB in RAID6 as array 1 (/dev/md0),

then have 12 x 5TB as RAID6 (/dev/md1),

then have 12 x 6TB as RAID6 (/dev/md2),

then create an LVM volume spanning all three arrays, so that it looks like I have a single 150TB drive?

If so, does anyone know how to do it, or can you point me to a guide? I've been googling and can't find anything about this.
 

TuxDude

Well-Known Member
Sep 17, 2011
616
338
63
Very easy to do. Create your three arrays with mdadm, turn each array into an LVM physical volume (pvcreate /dev/mdx), make a new LVM volume group to contain them (vgcreate BigVolumeGroup /dev/mdx /dev/mdy /dev/mdz), and then create a new LVM logical volume from that group (lvcreate -l 100%FREE -n BigDeviceName BigVolumeGroup). Then it's back to the normal steps for any block device: format it with whatever filesystem you like (make sure it supports that capacity) and add an fstab entry so it is mounted automatically at boot.

FYI, you can also grow/shrink logical volumes easily (at least if the filesystem on top also supports it), and add/remove physical volumes from a volume group easily. Assuming you have enough free space in the filesystem, you could shrink the filesystem by 40TB, shrink the logical volume, remove the 12x4TB array from the volume group, add a new 12x8TB physical volume, and then grow the logical volume and filesystem again, and do it all online (depending on the filesystem: most support online growth, but only a few support online shrink).
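Put together, the steps above might look like this (the device names, volume-group name, and the choice of XFS are assumptions; adjust them to your hardware):

```shell
# Three 12-disk RAID6 arrays (device names are examples)
mdadm --create /dev/md0 --level=6 --raid-devices=12 /dev/sd[b-m]           # 12 x 4TB
mdadm --create /dev/md1 --level=6 --raid-devices=12 /dev/sd[n-y]           # 12 x 5TB
mdadm --create /dev/md2 --level=6 --raid-devices=12 /dev/sdz /dev/sda[a-k] # 12 x 6TB

# Turn each array into an LVM physical volume and pool them into one group
pvcreate /dev/md0 /dev/md1 /dev/md2
vgcreate BigVolumeGroup /dev/md0 /dev/md1 /dev/md2
lvcreate -l 100%FREE -n BigDeviceName BigVolumeGroup

# Format, then mount at boot via fstab
mkfs.xfs /dev/BigVolumeGroup/BigDeviceName
mkdir -p /mnt/big
echo '/dev/BigVolumeGroup/BigDeviceName /mnt/big xfs defaults 0 2' >> /etc/fstab
mount /mnt/big
```

One caveat on the filesystem choice: XFS grows online but cannot shrink at all, so if you want the shrink-and-replace workflow described above, ext4 (which shrinks offline) is the safer pick.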
 

rubylaser

Active Member
Jan 4, 2013
846
236
43
Michigan, USA
Was asking if Linux software RAID (non-ZFS) does L2ARC-like caching. If not, is there a plan to support it?
mdadm doesn't have an equivalent built-in caching mechanism, but you could always use bcache or flashcache on top of the md device.
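A rough sketch of the bcache approach (using the bcache-tools package; /dev/md0 as the backing array and /dev/sdx as the SSD are assumptions):

```shell
# Register the md array as a backing device and the SSD as a cache set
make-bcache -B /dev/md0      # creates /dev/bcache0 on top of the array
make-bcache -C /dev/sdx      # formats the SSD and prints the cache set's UUID

# Attach the cache set to the backing device (the UUID is a placeholder)
echo <cset-uuid> > /sys/block/bcache0/bcache/attach

# Default mode is writethrough (safe); writeback is faster but riskier
echo writeback > /sys/block/bcache0/bcache/cache_mode

# The filesystem then goes on the cached device, not on md0 directly
mkfs.ext4 /dev/bcache0
```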

As said above, if this is truly a media server, I'd suggest looking at SnapRAID. I use SnapRAID + AUFS for my large bulk storage, and ZFS for things that need speed and reliability (VM storage, for one).
 

OBasel

Active Member
Dec 28, 2010
494
62
28
I use SnapRAID + AUFS for my large bulk storage, and ZFS for things that need speed and reliability (VM storage, for one).
I thought the chance of AUFS getting upstreamed was pretty low? I'd probably go btrfs before AUFS + SnapRAID, TBH.
 

rubylaser

Active Member
Jan 4, 2013
846
236
43
Michigan, USA
AUFS continues to be actively developed and is included in all current stable distributions (the AUFS 4 branch was recently released for 4.x kernels). Btrfs raid1 is solid, but for bulk media storage it wastes far too much space for my liking. And Btrfs' recent inclusion of raid5/6 is a no-go for me at this point; I would like to see it proven for a couple of years before I trust my data to anything more than the current raid1.

With SnapRAID's support for up to six parity disks, bitrot protection, each device holding a standalone filesystem, and the ability to spin up only one disk for a read, it's perfectly suited to storing things like movies and TV shows when coupled with a pooling solution (not really needed with XBMC or Plex, as both allow setting up multiple paths per content type).
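For reference, a minimal SnapRAID layout might look like this (the paths and disk names are assumptions, and only one parity disk is shown):

```shell
# /etc/snapraid.conf (sketch) -- SnapRAID supports up to six parity disks
#   parity  /mnt/parity1/snapraid.parity
#   content /var/snapraid/snapraid.content
#   content /mnt/disk1/snapraid.content
#   data d1 /mnt/disk1/
#   data d2 /mnt/disk2/

snapraid sync    # compute/update parity after adding or changing files
snapraid scrub   # periodically verify data against parity (bitrot check)
```

Note that parity is only as fresh as your last sync, which is exactly why SnapRAID suits mostly-static media rather than VM storage.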
 

TuxDude

Well-Known Member
Sep 17, 2011
616
338
63
AUFS continues to be actively developed and is included in all current stable distributions (the AUFS 4 branch was recently released for 4.x kernels).
Nope - it is included in many or most Debian/Ubuntu-based distributions, but it is neither available in nor easily added to Fedora, RHEL, CentOS, etc., where you must use a third-party kernel (or compile your own) to get AUFS support. It's not available in Gentoo without some extra work either, which is what I'm running on my SnapRAID box (I use mhddfs for pooling).

And @OBasel is right about it having a low chance of being upstreamed, especially since OverlayFS was merged into the kernel in 3.18. If I were building a new SnapRAID system now, I would be looking at OverlayFS.
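An OverlayFS mount sketch (the paths are assumptions; note that, unlike mhddfs, OverlayFS sends all new writes to the single upperdir rather than balancing them across disks):

```shell
# Requires a kernel >= 3.18 with OverlayFS built in
mkdir -p /mnt/pool /mnt/disk2/work
mount -t overlay overlay \
  -o lowerdir=/mnt/disk1,upperdir=/mnt/disk2/data,workdir=/mnt/disk2/work \
  /mnt/pool
# /mnt/pool now presents the union of disk1 and disk2/data as one tree
```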
 

Martin Jørgensen

New Member
Jun 3, 2015
28
6
3
39
Hey

I just got my new server built, and I have a problem.
When I reboot my server, my new md0 is gone.

I have one SSD for the system (Ubuntu 14.04.2)
and 2 x 4TB disks for RAID 1.

What I have done:
sudo mdadm --create /dev/md0 --chunk=4 --level=1 --raid-devices=2 /dev/sdd /dev/sde

Then I let the system sync; after 7 hours I ran these commands:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -k all -u


When I reboot the machine, md0 is gone; I can't find any RAID with lsblk.
I'm not new to Linux, I've used it for years. What am I missing? Do I have to wait 7 hours for the RAID to sync every time?

My plan is to encrypt the partitions, but first I need to make the RAID work.
Encrypted partition on a Soft RAID device using mdadm and cryptsetup(LUKS) | Matthias A Lee | Life in the Digital Era

My new motherboard is an X10SLM.
 

TuxDude

Well-Known Member
Sep 17, 2011
616
338
63
I'm not an Ubuntu user at all, so this is just a guess, but it sounds like you're probably missing something that restarts the array at boot, and need to enable some service or another. Try running 'mdadm --assemble --scan' and see if it puts the array back together for you; if it does, you just need to enable the appropriate script.

Also, FYI: instead of lsblk you're better off running 'cat /proc/mdstat' to see the status of your array. In particular, if the array has a problem and is not online, lsblk won't see anything, but mdstat will still show as much as the kernel can figure out about the array.
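Assuming the on-disk superblocks are intact, the check-and-persist sequence would look something like this (a sketch, not Ubuntu-specific advice):

```shell
mdadm --assemble --scan        # rebuild the array from on-disk superblocks
cat /proc/mdstat               # confirm md0 is back and in sync

# If that worked, persist the definition and regenerate the initramfs
# so the array is assembled again at boot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
```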
 

Martin Jørgensen

New Member
Jun 3, 2015
28
6
3
39
Hey, thanks for your reply.

'cat /proc/mdstat' says nothing after a reboot:
unused devices: <none>

Before the reboot it was completely done syncing.

lsblk showed md0 under each disk before the reboot.

'mdadm --assemble --scan' doesn't find anything.