[Feedback wanted] OS for my AIO Homeserver [SOLVED] Recovery of a crashed XPEnology Volume


Kristian

Active Member
Jun 1, 2013
Code:
DS-P9A> mdadm -E /dev/sda1
/dev/sda1:
  Magic : a92b4efc
  Version : 0.90.00
  UUID : 21d435fd:981a079c:3017a5a8:c86610be
  Creation Time : Sat Jan  1 01:00:03 2000
  Raid Level : raid1
  Used Dev Size : 2490176 (2.37 GiB 2.55 GB)
  Array Size : 2490176 (2.37 GiB 2.55 GB)
  Raid Devices : 12
  Total Devices : 12
Preferred Minor : 0

  Update Time : Thu Aug  6 23:32:32 2015
  State : clean
 Active Devices : 12
Working Devices : 12
 Failed Devices : 0
  Spare Devices : 0
  Checksum : e9ee7d51 - correct
  Events : 3987


  Number  Major  Minor  RaidDevice State
this  4  8  1  4  active sync  /dev/sda1

  0  0  8  17  0  active sync  /dev/sdb1
  1  1  8  177  1  active sync  /dev/sdl1
  2  2  8  161  2  active sync  /dev/sdk1
  3  3  8  145  3  active sync  /dev/sdj1
  4  4  8  1  4  active sync  /dev/sda1
  5  5  8  129  5  active sync  /dev/sdi1
  6  6  8  113  6  active sync  /dev/sdh1
  7  7  8  97  7  active sync  /dev/sdg1
  8  8  8  81  8  active sync  /dev/sdf1
  9  9  8  65  9  active sync  /dev/sde1
  10  10  8  49  10  active sync  /dev/sdd1
  11  11  8  33  11  active sync  /dev/sdc1
DS-P9A>

DS-P9A> mdadm -E /dev/sdb1
/dev/sdb1:
  Magic : a92b4efc
  Version : 0.90.00
  UUID : 21d435fd:981a079c:3017a5a8:c86610be
  Creation Time : Sat Jan  1 01:00:03 2000
  Raid Level : raid1
  Used Dev Size : 2490176 (2.37 GiB 2.55 GB)
  Array Size : 2490176 (2.37 GiB 2.55 GB)
  Raid Devices : 12
  Total Devices : 12
Preferred Minor : 0

  Update Time : Thu Aug  6 23:33:15 2015
  State : clean
 Active Devices : 12
Working Devices : 12
 Failed Devices : 0
  Spare Devices : 0
  Checksum : e9ee7d84 - correct
  Events : 3987


  Number  Major  Minor  RaidDevice State
this  0  8  17  0  active sync  /dev/sdb1

  0  0  8  17  0  active sync  /dev/sdb1
  1  1  8  177  1  active sync  /dev/sdl1
  2  2  8  161  2  active sync  /dev/sdk1
  3  3  8  145  3  active sync  /dev/sdj1
  4  4  8  1  4  active sync  /dev/sda1
  5  5  8  129  5  active sync  /dev/sdi1
  6  6  8  113  6  active sync  /dev/sdh1
  7  7  8  97  7  active sync  /dev/sdg1
  8  8  8  81  8  active sync  /dev/sdf1
  9  9  8  65  9  active sync  /dev/sde1
  10  10  8  49  10  active sync  /dev/sdd1
  11  11  8  33  11  active sync  /dev/sdc1
DS-P9A> mdadm -E /dev/sdc1
/dev/sdc1:
  Magic : a92b4efc
  Version : 0.90.00
  UUID : 21d435fd:981a079c:3017a5a8:c86610be
  Creation Time : Sat Jan  1 01:00:03 2000
  Raid Level : raid1
  Used Dev Size : 2490176 (2.37 GiB 2.55 GB)
  Array Size : 2490176 (2.37 GiB 2.55 GB)
  Raid Devices : 12
  Total Devices : 12
Preferred Minor : 0

  Update Time : Thu Aug  6 23:33:43 2015
  State : clean
 Active Devices : 12
Working Devices : 12
 Failed Devices : 0
  Spare Devices : 0
  Checksum : e9ee7dc6 - correct
  Events : 3987


  Number  Major  Minor  RaidDevice State
this  11  8  33  11  active sync  /dev/sdc1

  0  0  8  17  0  active sync  /dev/sdb1
  1  1  8  177  1  active sync  /dev/sdl1
  2  2  8  161  2  active sync  /dev/sdk1
  3  3  8  145  3  active sync  /dev/sdj1
  4  4  8  1  4  active sync  /dev/sda1
  5  5  8  129  5  active sync  /dev/sdi1
  6  6  8  113  6  active sync  /dev/sdh1
  7  7  8  97  7  active sync  /dev/sdg1
  8  8  8  81  8  active sync  /dev/sdf1
  9  9  8  65  9  active sync  /dev/sde1
  10  10  8  49  10  active sync  /dev/sdd1
  11  11  8  33  11  active sync  /dev/sdc1
DS-P9A>
DS-P9A> mdadm -E /dev/sdd1
/dev/sdd1:
  Magic : a92b4efc
  Version : 0.90.00
  UUID : 21d435fd:981a079c:3017a5a8:c86610be
  Creation Time : Sat Jan  1 01:00:03 2000
  Raid Level : raid1
  Used Dev Size : 2490176 (2.37 GiB 2.55 GB)
  Array Size : 2490176 (2.37 GiB 2.55 GB)
  Raid Devices : 12
  Total Devices : 12
Preferred Minor : 0

  Update Time : Thu Aug  6 23:34:03 2015
  State : clean
 Active Devices : 12
Working Devices : 12
 Failed Devices : 0
  Spare Devices : 0
  Checksum : e9ee7de8 - correct
  Events : 3987


  Number  Major  Minor  RaidDevice State
this  10  8  49  10  active sync  /dev/sdd1

  0  0  8  17  0  active sync  /dev/sdb1
  1  1  8  177  1  active sync  /dev/sdl1
  2  2  8  161  2  active sync  /dev/sdk1
  3  3  8  145  3  active sync  /dev/sdj1
  4  4  8  1  4  active sync  /dev/sda1
  5  5  8  129  5  active sync  /dev/sdi1
  6  6  8  113  6  active sync  /dev/sdh1
  7  7  8  97  7  active sync  /dev/sdg1
  8  8  8  81  8  active sync  /dev/sdf1
  9  9  8  65  9  active sync  /dev/sde1
  10  10  8  49  10  active sync  /dev/sdd1
  11  11  8  33  11  active sync  /dev/sdc1
DS-P9A>
DS-P9A> mdadm -E /dev/sde1
/dev/sde1:
  Magic : a92b4efc
  Version : 0.90.00
  UUID : 21d435fd:981a079c:3017a5a8:c86610be
  Creation Time : Sat Jan  1 01:00:03 2000
  Raid Level : raid1
  Used Dev Size : 2490176 (2.37 GiB 2.55 GB)
  Array Size : 2490176 (2.37 GiB 2.55 GB)
  Raid Devices : 12
  Total Devices : 12
Preferred Minor : 0

  Update Time : Thu Aug  6 23:34:23 2015
  State : clean
 Active Devices : 12
Working Devices : 12
 Failed Devices : 0
  Spare Devices : 0
  Checksum : e9ee7e0a - correct
  Events : 3987


  Number  Major  Minor  RaidDevice State
this  9  8  65  9  active sync  /dev/sde1

  0  0  8  17  0  active sync  /dev/sdb1
  1  1  8  177  1  active sync  /dev/sdl1
  2  2  8  161  2  active sync  /dev/sdk1
  3  3  8  145  3  active sync  /dev/sdj1
  4  4  8  1  4  active sync  /dev/sda1
  5  5  8  129  5  active sync  /dev/sdi1
  6  6  8  113  6  active sync  /dev/sdh1
  7  7  8  97  7  active sync  /dev/sdg1
  8  8  8  81  8  active sync  /dev/sdf1
  9  9  8  65  9  active sync  /dev/sde1
  10  10  8  49  10  active sync  /dev/sdd1
  11  11  8  33  11  active sync  /dev/sdc1
 

Kristian

Active Member
Jun 1, 2013
Code:
DS-P9A> mdadm -E /dev/sdf1
/dev/sdf1:
  Magic : a92b4efc
  Version : 0.90.00
  UUID : 21d435fd:981a079c:3017a5a8:c86610be
  Creation Time : Sat Jan  1 01:00:03 2000
  Raid Level : raid1
  Used Dev Size : 2490176 (2.37 GiB 2.55 GB)
  Array Size : 2490176 (2.37 GiB 2.55 GB)
  Raid Devices : 12
  Total Devices : 12
Preferred Minor : 0

  Update Time : Thu Aug  6 23:34:43 2015
  State : clean
Active Devices : 12
Working Devices : 12
Failed Devices : 0
  Spare Devices : 0
  Checksum : e9ee7e2c - correct
  Events : 3987


  Number  Major  Minor  RaidDevice State
this  8  8  81  8  active sync  /dev/sdf1

  0  0  8  17  0  active sync  /dev/sdb1
  1  1  8  177  1  active sync  /dev/sdl1
  2  2  8  161  2  active sync  /dev/sdk1
  3  3  8  145  3  active sync  /dev/sdj1
  4  4  8  1  4  active sync  /dev/sda1
  5  5  8  129  5  active sync  /dev/sdi1
  6  6  8  113  6  active sync  /dev/sdh1
  7  7  8  97  7  active sync  /dev/sdg1
  8  8  8  81  8  active sync  /dev/sdf1
  9  9  8  65  9  active sync  /dev/sde1
  10  10  8  49  10  active sync  /dev/sdd1
  11  11  8  33  11  active sync  /dev/sdc1
DS-P9A> mdadm -E /dev/sdg1
/dev/sdg1:
  Magic : a92b4efc
  Version : 0.90.00
  UUID : 21d435fd:981a079c:3017a5a8:c86610be
  Creation Time : Sat Jan  1 01:00:03 2000
  Raid Level : raid1
  Used Dev Size : 2490176 (2.37 GiB 2.55 GB)
  Array Size : 2490176 (2.37 GiB 2.55 GB)
  Raid Devices : 12
  Total Devices : 12
Preferred Minor : 0

  Update Time : Thu Aug  6 23:35:03 2015
  State : clean
Active Devices : 12
Working Devices : 12
Failed Devices : 0
  Spare Devices : 0
  Checksum : e9ee7e4e - correct
  Events : 3987


  Number  Major  Minor  RaidDevice State
this  7  8  97  7  active sync  /dev/sdg1

  0  0  8  17  0  active sync  /dev/sdb1
  1  1  8  177  1  active sync  /dev/sdl1
  2  2  8  161  2  active sync  /dev/sdk1
  3  3  8  145  3  active sync  /dev/sdj1
  4  4  8  1  4  active sync  /dev/sda1
  5  5  8  129  5  active sync  /dev/sdi1
  6  6  8  113  6  active sync  /dev/sdh1
  7  7  8  97  7  active sync  /dev/sdg1
  8  8  8  81  8  active sync  /dev/sdf1
  9  9  8  65  9  active sync  /dev/sde1
  10  10  8  49  10  active sync  /dev/sdd1
  11  11  8  33  11  active sync  /dev/sdc1
DS-P9A> mdadm -E /dev/sdh1
/dev/sdh1:
  Magic : a92b4efc
  Version : 0.90.00
  UUID : 21d435fd:981a079c:3017a5a8:c86610be
  Creation Time : Sat Jan  1 01:00:03 2000
  Raid Level : raid1
  Used Dev Size : 2490176 (2.37 GiB 2.55 GB)
  Array Size : 2490176 (2.37 GiB 2.55 GB)
  Raid Devices : 12
  Total Devices : 12
Preferred Minor : 0

  Update Time : Thu Aug  6 23:35:18 2015
  State : clean
Active Devices : 12
Working Devices : 12
Failed Devices : 0
  Spare Devices : 0
  Checksum : e9ee7e6b - correct
  Events : 3987


  Number  Major  Minor  RaidDevice State
this  6  8  113  6  active sync  /dev/sdh1

  0  0  8  17  0  active sync  /dev/sdb1
  1  1  8  177  1  active sync  /dev/sdl1
  2  2  8  161  2  active sync  /dev/sdk1
  3  3  8  145  3  active sync  /dev/sdj1
  4  4  8  1  4  active sync  /dev/sda1
  5  5  8  129  5  active sync  /dev/sdi1
  6  6  8  113  6  active sync  /dev/sdh1
  7  7  8  97  7  active sync  /dev/sdg1
  8  8  8  81  8  active sync  /dev/sdf1
  9  9  8  65  9  active sync  /dev/sde1
  10  10  8  49  10  active sync  /dev/sdd1
  11  11  8  33  11  active sync  /dev/sdc1
DS-P9A>
DS-P9A> mdadm -E /dev/sdi1
/dev/sdi1:
  Magic : a92b4efc
  Version : 0.90.00
  UUID : 21d435fd:981a079c:3017a5a8:c86610be
  Creation Time : Sat Jan  1 01:00:03 2000
  Raid Level : raid1
  Used Dev Size : 2490176 (2.37 GiB 2.55 GB)
  Array Size : 2490176 (2.37 GiB 2.55 GB)
  Raid Devices : 12
  Total Devices : 12
Preferred Minor : 0

  Update Time : Thu Aug  6 23:35:39 2015
  State : clean
Active Devices : 12
Working Devices : 12
Failed Devices : 0
  Spare Devices : 0
  Checksum : e9ee7e8e - correct
  Events : 3987


  Number  Major  Minor  RaidDevice State
this  5  8  129  5  active sync  /dev/sdi1

  0  0  8  17  0  active sync  /dev/sdb1
  1  1  8  177  1  active sync  /dev/sdl1
  2  2  8  161  2  active sync  /dev/sdk1
  3  3  8  145  3  active sync  /dev/sdj1
  4  4  8  1  4  active sync  /dev/sda1
  5  5  8  129  5  active sync  /dev/sdi1
  6  6  8  113  6  active sync  /dev/sdh1
  7  7  8  97  7  active sync  /dev/sdg1
  8  8  8  81  8  active sync  /dev/sdf1
  9  9  8  65  9  active sync  /dev/sde1
  10  10  8  49  10  active sync  /dev/sdd1
  11  11  8  33  11  active sync  /dev/sdc1
DS-P9A>
DS-P9A> mdadm -E /dev/sdj1
/dev/sdj1:
  Magic : a92b4efc
  Version : 0.90.00
  UUID : 21d435fd:981a079c:3017a5a8:c86610be
  Creation Time : Sat Jan  1 01:00:03 2000
  Raid Level : raid1
  Used Dev Size : 2490176 (2.37 GiB 2.55 GB)
  Array Size : 2490176 (2.37 GiB 2.55 GB)
  Raid Devices : 12
  Total Devices : 12
Preferred Minor : 0

  Update Time : Thu Aug  6 23:35:58 2015
  State : clean
Active Devices : 12
Working Devices : 12
Failed Devices : 0
  Spare Devices : 0
  Checksum : e9ee7ead - correct
  Events : 3987

  Number  Major  Minor  RaidDevice State
this  3  8  145  3  active sync  /dev/sdj1

  0  0  8  17  0  active sync  /dev/sdb1
  1  1  8  177  1  active sync  /dev/sdl1
  2  2  8  161  2  active sync  /dev/sdk1
  3  3  8  145  3  active sync  /dev/sdj1
  4  4  8  1  4  active sync  /dev/sda1
  5  5  8  129  5  active sync  /dev/sdi1
  6  6  8  113  6  active sync  /dev/sdh1
  7  7  8  97  7  active sync  /dev/sdg1
  8  8  8  81  8  active sync  /dev/sdf1
  9  9  8  65  9  active sync  /dev/sde1
  10  10  8  49  10  active sync  /dev/sdd1
  11  11  8  33  11  active sync  /dev/sdc1
DS-P9A> mdadm -E /dev/sdk1
/dev/sdk1:
  Magic : a92b4efc
  Version : 0.90.00
  UUID : 21d435fd:981a079c:3017a5a8:c86610be
  Creation Time : Sat Jan  1 01:00:03 2000
  Raid Level : raid1
  Used Dev Size : 2490176 (2.37 GiB 2.55 GB)
  Array Size : 2490176 (2.37 GiB 2.55 GB)
  Raid Devices : 12
  Total Devices : 12
Preferred Minor : 0

  Update Time : Thu Aug  6 23:36:20 2015
  State : clean
Active Devices : 12
Working Devices : 12
Failed Devices : 0
  Spare Devices : 0
  Checksum : e9ee7ed1 - correct
  Events : 3987


  Number  Major  Minor  RaidDevice State
this  2  8  161  2  active sync  /dev/sdk1

  0  0  8  17  0  active sync  /dev/sdb1
  1  1  8  177  1  active sync  /dev/sdl1
  2  2  8  161  2  active sync  /dev/sdk1
  3  3  8  145  3  active sync  /dev/sdj1
  4  4  8  1  4  active sync  /dev/sda1
  5  5  8  129  5  active sync  /dev/sdi1
  6  6  8  113  6  active sync  /dev/sdh1
  7  7  8  97  7  active sync  /dev/sdg1
  8  8  8  81  8  active sync  /dev/sdf1
  9  9  8  65  9  active sync  /dev/sde1
  10  10  8  49  10  active sync  /dev/sdd1
  11  11  8  33  11  active sync  /dev/sdc1
DS-P9A>
DS-P9A>
DS-P9A> mdadm -E /dev/sdl1
/dev/sdl1:
  Magic : a92b4efc
  Version : 0.90.00
  UUID : 21d435fd:981a079c:3017a5a8:c86610be
  Creation Time : Sat Jan  1 01:00:03 2000
  Raid Level : raid1
  Used Dev Size : 2490176 (2.37 GiB 2.55 GB)
  Array Size : 2490176 (2.37 GiB 2.55 GB)
  Raid Devices : 12
  Total Devices : 12
Preferred Minor : 0

  Update Time : Thu Aug  6 23:36:36 2015
  State : clean
Active Devices : 12
Working Devices : 12
Failed Devices : 0
  Spare Devices : 0
  Checksum : e9ee7eef - correct
  Events : 3987


  Number  Major  Minor  RaidDevice State
this  1  8  177  1  active sync  /dev/sdl1

  0  0  8  17  0  active sync  /dev/sdb1
  1  1  8  177  1  active sync  /dev/sdl1
  2  2  8  161  2  active sync  /dev/sdk1
  3  3  8  145  3  active sync  /dev/sdj1
  4  4  8  1  4  active sync  /dev/sda1
  5  5  8  129  5  active sync  /dev/sdi1
  6  6  8  113  6  active sync  /dev/sdh1
  7  7  8  97  7  active sync  /dev/sdg1
  8  8  8  81  8  active sync  /dev/sdf1
  9  9  8  65  9  active sync  /dev/sde1
  10  10  8  49  10  active sync  /dev/sdd1
  11  11  8  33  11  active sync  /dev/sdc1
DS-P9A>
DS-P9A> mdadm -E /dev/sdm1
/dev/sdm1:
  Magic : a92b4efc
  Version : 0.90.00
  UUID : 21d435fd:981a079c:3017a5a8:c86610be
  Creation Time : Sat Jan  1 01:00:03 2000
  Raid Level : raid1
  Used Dev Size : 2490176 (2.37 GiB 2.55 GB)
  Array Size : 2490176 (2.37 GiB 2.55 GB)
  Raid Devices : 12
  Total Devices : 12
Preferred Minor : 0
  Update Time : Thu Aug  6 22:39:43 2015
  State : clean
Active Devices : 12
Working Devices : 12
Failed Devices : 0
  Spare Devices : 0
  Checksum : e9ee6bbe - correct
  Events : 3133


  Number  Major  Minor  RaidDevice State
this  1  8  193  1  active sync  /dev/sdm1

  0  0  8  17  0  active sync  /dev/sdb1
  1  1  8  193  1  active sync  /dev/sdm1
  2  2  8  177  2  active sync  /dev/sdl1
  3  3  8  161  3  active sync  /dev/sdk1
  4  4  8  33  4  active sync  /dev/sdc1
  5  5  8  145  5  active sync  /dev/sdj1
  6  6  8  129  6  active sync  /dev/sdi1
  7  7  8  113  7  active sync  /dev/sdh1
  8  8  8  97  8  active sync  /dev/sdg1
  9  9  8  81  9  active sync  /dev/sdf1
  10  10  8  65  10  active sync  /dev/sde1
  11  11  8  49  11  active sync  /dev/sdd1
DS-P9A>
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
One of the reasons I always like to ensure the OS is installed on a separate device from the main data array is for when you have problems with booting...!

FYI since you're using discs bigger than 2.2TB, and hence GPT partitions, you should use gdisk instead of fdisk to get a proper report of the partition layout.
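Something along these lines (from memory, and the device name is just an example; repeat for each member disk) should give a proper report:
Code:
# list the GPT partition table of one member disk (read-only)
gdisk -l /dev/sda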

From your /proc/mdstat it looks like you've got a nice fat RAID6 of 12 3TB discs, only one of which is down, and the array appears to be up and running. Can you mount /dev/md4 manually? e.g. `mount /dev/md4 /path/to/some/dir` ? I'm not familiar with xpenology's internals so not sure where it would try to mount this stuff by default but it doesn't look too poorly from the CLI...
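If you want to play it safe, a read-only mount into a scratch directory would be something along these lines (the paths are just examples):
Code:
mkdir -p /mnt/recovery
mount -o ro /dev/md4 /mnt/recovery
ls /mnt/recovery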
 
  • Like
Reactions: Kristian

Kristian

Active Member
Jun 1, 2013
Actually XPEnology has a bootloader on a USB stick and the OS on, I think, every disk in its own partition.
But being the master of disaster that I am, that does not prevent me from messing it up ^^

I will try to post the gdisk results as soon as I am home (that is if the little ones are still having a nap)

Actually, the RAID consists of 12x 4TB disks.

When it comes to mounting the array manually, I am too much of a Linux noob to answer that.
Of course I could SSH into the box and mount /dev/md4/share.
But I don't know if that would work with just "share", and I don't know how to access the share then.

So I should probably do that in a Linux with a GUI.
But that would create a new problem scenario.
Let's say I manage to mount the array in Ubuntu.
Where would I put my files?
I don't have enough backup space and can't afford any more HDDs for the rest of the year.

So the greatest thing would be to migrate the array into OMV or have it running in XPEnology again.
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
Aye, I saw there was a USB disc in your output but wasn't sure if that was being used or not. What are the md0 and md1 RAID1 arrays for? I know on my old QNAP that it created these RAID1 partitions at the start of the drives to run most of the OS on.

Anyway, probably not important. Next best step at this point IMHO would be to SSH on to the box and see if you can get the array mounted to see if the filesystem is intact - you'll be able to see the files on the command line just fine I expect. Next step after that would probably require input from someone more versed in xpenology... getting the files off via SSH or SMB is a doddle with regular linux (I've used bootable linux distros such as SystemRescueCD to recover data from people's busted NAS appliances many times) but if you don't have the scratch space to copy the files off so you can start again from scratch then that's problematic...
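For reference, pulling the data off from another Linux box once the array is mounted is usually just an rsync over SSH away; the hostname and paths below are placeholders only:
Code:
# copy everything from the mounted array to local storage, preserving attributes
rsync -avP root@ds-p9a:/mnt/recovery/ /local/backup/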
 

Kristian

Active Member
Jun 1, 2013
Truly problematic. BTW: gdisk: not found...
But well... if I get to the point where I have access to the data, I will buy disks, copy the data back and forth, and return the disks.

I will wait and see if @rubylaser has an idea on how to fix this.
If not, I feel it's probably best to do it via Ubuntu.
 
Last edited:

rubylaser

Active Member
Jan 4, 2013
Michigan, USA
Truly problematic. BTW: gdisk: not found...
But well... if I get to the point where I have access to the data, I will buy disks, copy the data back and forth, and return the disks.

I will wait and see if @rubylaser has an idea on how to fix this.
If not, I feel it's probably best to do it via Ubuntu.
You are likely going to need to go the Ubuntu route. To properly try to fix this, you are going to need to assemble the arrays outside of XPenology, and then try to mount the filesystems to see if your data is intact.

FYI - You'd want to use parted to examine the GPT partitions. I'm sure that's in XPenology.
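For example, something like this (the device name is just an example; repeat per disk):
Code:
# print the GPT partition table of one member disk
parted /dev/sda print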
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
Ubuntu or A.N. Other Linux Distro is probably going to be a simpler option, yeah - especially if you want to turn the array into a functional one that you can serve files from. One of the disadvantages of appliance-based distros like xpenology is that they frequently don't include some very useful utils (such as gdisk - much easier to use than parted IMHO). I mentioned SystemRescueCD earlier since it's a doddle to boot it off USB and includes pretty much every utility you could ever want for doing disc recovery, as well as a light graphical shell.

Yes there are dangerous, horrible things that can happen when you mount an array... but they're very rare and there are a bunch of things you can do for sanity checking beforehand. Even if you don't, 9999999999999999 times out of a thousand linux will just print an error message and stop. I've not seen damage done to a filesystem by mounting in over a decade, and if you're really worried just mount read-only.

Assuming xpenology is just a plain jane ext4 filesystem on top of an mdadm array (if it uses LVM someone please say so! :)), you can try running a few of these commands against /dev/md4:

Print what mdadm thinks of your degraded array:
Code:
mdadm -D /dev/md4
See what the block subsystem thinks about it:
Code:
blkid /dev/md4
Query filesystem info, query superblock info:
Code:
dumpe2fs /dev/md4
dumpe2fs /dev/md4 | grep -i superblock
Run a test fsck (but don't actually touch the discs):
Code:
fsck.ext4 -n /dev/md4
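And if it turns out the md device is actually an LVM physical volume rather than a bare ext4 filesystem, a quick check and activation would look something like this (again from memory):
Code:
pvs            # is /dev/md4 listed as a physical volume?
lvs            # which logical volumes live on it?
vgchange -ay   # activate any volume groups so the LVs appear under /dev/mapper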
Done from memory so hopefully not made any mistakes... :)
 
Last edited:

Kristian

Active Member
Jun 1, 2013
So here comes the latest progress:


  1. Ordered an 8TB disk via express shipping yesterday to gain some disk space. Should arrive this morning.
  2. Installed Ubuntu.
  3. Followed a guide from synology.com.
  4. Install mdadm with the following commands.
    ubuntu@ubuntu:~$ sudo -i
    root@ubuntu:~$ apt-get install mdadm
  5. Select No configuration and complete the installation.
  6. Install lvm2 with the following commands.
    root@ubuntu:~$ apt-get install lvm2 (otherwise vgchange won't work)
  7. Run the following command to mount all of the hard drives from your DiskStation (consolidated into one block below).
    root@ubuntu:~$ mdadm -Asf && vgchange -ay
  8. This brought the 8x 3TB array online.
  9. Right now I am moving files off to a bunch of disks.
Next steps
After the 8TB disk arrives I will copy the last files from the 8x 3TB array.
Then I have 8x 3TB of space to move the files from the 12x 4TB array (20TB).
Hopefully that array can be mounted just as easily with the above commands.
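So for anyone following along, the whole sequence condensed into one block; the logical-volume path on the last line is only an example, so check the output of lvs for the actual name:
Code:
sudo -i
apt-get install mdadm lvm2
mdadm -Asf && vgchange -ay                 # assemble all arrays, activate LVM volume groups
lvs                                        # list logical volumes to find the data volume's name
mkdir -p /mnt/recovery
mount -o ro /dev/vg1000/lv /mnt/recovery   # LV path is an assumed example, not verified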
 
  • Like
Reactions: rubylaser

rubylaser

Active Member
Jan 4, 2013
Michigan, USA
So here comes the latest progress:


  1. Ordered an 8TB disk via express shipping yesterday to gain some disk space. Should arrive this morning.
  2. Installed Ubuntu.
  3. Followed a guide from synology.com.
  4. Install mdadm with the following commands.
    ubuntu@ubuntu:~$ sudo -i
    root@ubuntu:~$ apt-get install mdadm
  5. Select No configuration and complete the installation.
  6. Install lvm2 with the following commands.
    root@ubuntu:~$ apt-get install lvm2 (otherwise vgchange won't work)
  7. Run the following command to mount all of the hard drives from your DiskStation.
    root@ubuntu:~$ mdadm -Asf && vgchange -ay
  8. This brought the 8x 3TB array online.
  9. Right now I am moving files off to a bunch of disks.
Next steps
After the 8TB disk arrives I will copy the last files from the 8x 3TB array.
Then I have 8x 3TB of space to move the files from the 12x 4TB array (20TB).
Hopefully that array can be mounted just as easily with the above commands.
Nice work! I'm so glad you were able to get this working.

Since you are migrating all your data, have you thought about what OS you will use to save your data in the future? Will you stick with XPenology, go with something like OMV, Linux with mdadm, or go with ZFS?
 
  • Like
Reactions: Kristian

Kristian

Active Member
Jun 1, 2013
@whitey @rubylaser @EffrafaxOfWug: Thank you guys. Without your input and help I would have probably just started all over again.
And my wife would probably have killed me with all the pictures of us and the twins lost.

The 8TB disk has arrived and I have already emptied the 8x 3TB array.

We will put the little ones to sleep in a bit and then I will try to mount the 12x4TB array.

It's great you are asking, because I haven't had the time to do my homework concerning the OS.

It was a great time with XPEnology, but being the master of disaster that I am, I am sure that come the next update or the next hardware, I will again forget to raise the max amount of HDDs from 12 to 24 (or whatever number of disks I have by then) and end up in the same situation I am in now.

So I am leaning towards trying something else.
OMV sounds like a good thing, but I couldn't test it yet (plugins, ease of use, performance) and I don't know enough about it.
Can you mix and match different sizes of disks?
Have a single array of 8x 3TB and 12x 4TB (probably with a filesystem that can recover from 3 lost disks)?
Would a big array like that be a very bad idea?

Well: Still plenty of time to decide (because copying 20TB will take a considerable while)

Next thing is: trying to find a way to build a new array from the 8x 3TB disks to copy the data from the 4TB array to, so the rescue process gets done in an amount of time that can be dealt with.
Still don't know how I will do that...
 

Kristian

Active Member
Jun 1, 2013
@rubylaser, @EffrafaxOfWug: sorry, but I've encountered the next problem.

In the built-in Ubuntu Disks tool the array is shown:
40TB RAID 6, array degraded, one disk failed.

Unfortunately I can't mount the filesystem.
Or, well, perhaps it is mounted.
A filesystem is shown, but I can't access it:

Error mounting /dev/dm-2 at /media/kristian/1.42.6-5565: Command-line `mount -t "ext4" -o "uhelper=udisks2,nodev,nosuid" "/dev/dm-2" "/media/kristian/1.42.6-5565"' exited with non-zero exit status 32: mount: wrong fs type, bad option, bad superblock on /dev/mapper/vg1000-lv,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so

Code:
mdadm -D /dev/md127
/dev/md127:
  Version : 1.2
  Creation Time : Sun Jul  5 13:26:32 2015
  Raid Level : raid6
  Array Size : 39021872640 (37214.16 GiB 39958.40 GB)
  Used Dev Size : 3902187264 (3721.42 GiB 3995.84 GB)
  Raid Devices : 12
  Total Devices : 11
  Persistence : Superblock is persistent

  Update Time : Sat Aug  8 13:16:40 2015
  State : clean, degraded
Active Devices : 11
Working Devices : 11
Failed Devices : 0
  Spare Devices : 0

  Layout : left-symmetric
  Chunk Size : 64K

  Name : DS-P9A:4
  UUID : d2c3cfb9:d8fae794:4b44eb81:6e5d6600
  Events : 45456

  Number  Major  Minor  RaidDevice State
  0  0  0  0  removed
  1  8  85  1  active sync  /dev/sdf5
  2  8  117  2  active sync  /dev/sdh5
  3  8  69  3  active sync  /dev/sde5
  7  8  101  4  active sync  /dev/sdg5
  6  8  197  5  active sync  /dev/sdm5
  5  8  149  6  active sync  /dev/sdj5
  4  8  133  7  active sync  /dev/sdi5
  11  8  53  8  active sync  /dev/sdd5
  10  8  37  9  active sync  /dev/sdc5
  9  8  181  10  active sync  /dev/sdl5
  8  8  165  11  active sync  /dev/sdk5
Code:
blkid /dev/md127
/dev/md127: UUID="5blKoe-y8B5-nu84-pwRi-lXl9-V0zE-s7fMci" TYPE="LVM2_member"
Code:
dumpe2fs /dev/md127
dumpe2fs 1.42.9 (4-Feb-2014)
dumpe2fs: Ungültige magische Zahl im Superblock beim Versuch, /dev/md127 zu öffnen
Kann keinen gültigen Dateisystem-Superblock finden.
Translation:
Invalid magic number in the superblock when trying to open /dev/md127
Cannot find a valid filesystem superblock
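Since blkid reports /dev/md127 as an LVM2_member, I guess dumpe2fs should be pointed at the logical volume rather than at md127 itself? Something like this, taking the vg1000-lv name from the mount error above:
Code:
vgchange -ay                   # make sure the volume group is active
dumpe2fs -h /dev/vg1000/lv     # superblock info from the LV, not the md device
fsck.ext4 -n /dev/vg1000/lv    # read-only check, should not touch the discs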
 
Last edited:

Kristian

Active Member
Jun 1, 2013
I tried to get the output from the individual disks again with mdadm -E /dev/sd[a-?]1 like I did before (the drives are enumerated from sdc to sdn now, because sda is the boot drive and sdb is the 8TB disk), but it gives this feedback:

Code:
No md superblock detected on /dev/sdc1
After a bit of trying I got output with mdadm -E /dev/sdc5.

But before I spam the output of all disks: is that helpful in any way?
Because the sdX1 partition is something different from the sdX5 partition, isn't it?

Code:
mdadm -E /dev/sdc5
/dev/sdc5:
  Magic : a92b4efc
  Version : 1.2
  Feature Map : 0x0
  Array UUID : d2c3cfb9:d8fae794:4b44eb81:6e5d6600
  Name : DS-P9A:4
  Creation Time : Sun Jul  5 13:26:32 2015
  Raid Level : raid6
  Raid Devices : 12

 Avail Dev Size : 7804374912 (3721.42 GiB 3995.84 GB)
  Array Size : 39021872640 (37214.16 GiB 39958.40 GB)
  Used Dev Size : 7804374528 (3721.42 GiB 3995.84 GB)
  Data Offset : 2048 sectors
  Super Offset : 8 sectors
  State : clean
  Device UUID : f8aa1491:ff2b3818:f09040de:9d83a357

  Update Time : Sat Aug  8 13:16:40 2015
  Checksum : 56aef724 - correct
  Events : 45456

  Layout : left-symmetric
  Chunk Size : 64K

  Device Role : Active device 9
  Array State : .AAAAAAAAAAA ('A' == active, '.' == missing)
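If it helps, I could run a quick loop like this to show just the interesting bits for all twelve data partitions instead of pasting every full report (sdc to sdn as mentioned above):
Code:
for d in /dev/sd[c-n]5; do
  echo "== $d =="
  mdadm -E "$d" | grep -E 'Update Time|Events|Device Role|Array State'
done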
 

whitey

Moderator
Jun 30, 2014
Nice work! I'm so glad you were able to get this working.

Since you are migrating all your data, have you thought about what OS you will use to save your data in the future? Will you stick with XPenology, go with something like OMV, Linux with mdadm, or go with ZFS?
ZFS...ZFS...ZFS :-D