[Feedback wanted] OS for my AIO home server [SOLVED] Recovery of a crashed XPEnology volume


Kristian

Active Member
Jun 1, 2013
I am on the brink of insanity:

I have the following build:
Supermicro A1SAM-2750F (so no VT-d)
32 GB RAM
LSI 9211-8i in IT mode
Intel X520 10GbE NIC
SC846 with SAS3 backplane

12x 4 TB, mostly WD Red
8x 3 TB, mostly WD Red

After disassembling my XPEnology server, my plan was to
- install Windows 10
- install XPEnology as a Hyper-V guest
and enjoy life.

That failed.
Because I didn't consider that XPEnology can only manage 12 disks by default, both of my volumes are gone.
Data rescue will be another chapter of this sad story (yeah, I had no backup, because I couldn't afford the disks).

So while this is of course my own stupidity, I am somehow hoping to recover some of the files.
But that's another story.

What I am asking is this:
If I were to start from zero, what OS should I use?

I would like the following:
- Performance of at least 400 MB/s over the 10GbE pipe (XPEnology natively managed 700 MB/s)
- The OS or VM that holds the files should be accessible via a web GUI
- A GUI would be nice, because I am not the command-line type
- Should tolerate up to 2 disk failures (if more than 2 disks fail, it would be nice to still recover some of the data)
- Should have plugins like Plex and Dropbox or ownCloud
- Or it should run as a Hyper-V guest with good performance, so the plugins could be deployed in other VMs

Is there anything like that?
 

rubylaser

Active Member
Jan 4, 2013
Michigan, USA
How about OpenMediaVault? It's Debian-based (stable), comes with a web GUI, and supports a number of RAID and snapshot RAID solutions, including mdadm, which XPEnology uses for its RAID.

I didn't know about the 12-disk limit. That sucks. That said, for recovery I'd suggest hooking the disks up to a machine running a live Linux CD and taking a look at the array with mdadm to recover it. If you need help with managing mdadm from the command line, please let me know.
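A first look from the live CD usually goes something like this (just a sketch; your device names will differ):
Code:
# List the disks the live CD can see
lsblk

# Check whether the kernel already auto-assembled any md arrays
cat /proc/mdstat

# Inspect the md superblock on one of the member partitions
mdadm --examine /dev/sdb1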
 

Kristian

Active Member
Jun 1, 2013
I liked OMV a lot when I looked at it.
I just wasn't quite sure whether I could easily install the plugins that I would need for convenience.

I am guessing OMV as a Hyper-V guest with disks passed through isn't the best idea, is it?

The 12-disk limit isn't really a "hard limit": you can change it by SSHing into XPEnology and editing some values in synoinfo.cfg (if you are interested, I can go into more detail in a PM).
Thing is: I forgot to do that before starting the machine with 20 disks.
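For reference, the edit looks roughly like this (a sketch from memory; on DSM the file lives at /etc.defaults/synoinfo.conf, and the exact bitmask values depend on your port layout):
Code:
# Raise the disk ceiling (default is 12)
maxdisks="24"

# One bit per internal SATA/SAS port; 24 set bits for 24 internal disks
internalportcfg="0xffffff"

# Ports claimed as eSATA must be removed from the internal mask accordingly
esataportcfg="0x0"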

So XPEnology noticed that more than 2 disks of each volume were missing (SHR-2).
So I changed the maximum disks to 36 and tried again.
But it seems the disks that were not found/enumerated on that first boot were kicked out of the RAID.

So that's what I am trying to recover from.
Thank you for your suggestion with mdadm.
Your offer is greatly appreciated.
I will most definitely ask for the offered help when I try it, because I have never used Linux (if you leave XPEnology aside).
 

rubylaser

Active Member
Jan 4, 2013
Michigan, USA
Have you tried to SSH into the XPEnology box and look at these values to see what it thinks is going on? This may be as simple as re-assembling the array via the CLI.
Code:
cat /proc/mdstat        # current state of all md arrays
mdadm --detail --scan   # summary of the assembled arrays
 

Kristian

Active Member
Jun 1, 2013
Thanks a lot. As soon as I get home, I will give it a try.
Unfortunately, I think I have to solve another problem first, because this morning, when I gave it one last try, XPEnology wasn't even booting any longer.

While that is a problem that can likely be solved with a clean install on a single spare disk, I don't know what happens if I start XPEnology and then insert the disks one by one.

Will that cause more problems?
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
It's using plain-Jane mdadm RAID, so any Linux OS should be able to detect the RAID signatures on the discs and re-assemble the array once all discs are present, or you can give it a prod with mdadm --assemble --scan. Unless XPEnology does something different/weird.
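From a live CD, the prod would look something like this (a sketch; only needed if the kernel hasn't already assembled it on its own, and the md device name is just a placeholder):
Code:
# Stop any half-assembled remnant first (md127 is a typical auto-assembly name)
mdadm --stop /dev/md127

# Scan all partitions for md superblocks and rebuild the arrays they describe
mdadm --assemble --scan --verbose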
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
I know a lot of us here are running or plan to run napp-it within a VMware VM, and @gea provides a ready-to-go image.

I also have OMV to mess with, as my friends use it and like it.

Try both :)
 

rubylaser

Active Member
Jan 4, 2013
Michigan, USA
Gea's OmniOS + napp-it VMware appliance works great, but he will need enough extra storage space to migrate his data to if he moves away from mdadm.
 

Deslok

Well-Known Member
Jul 15, 2015
I've always just used VMware... from there I can stack up all the VMs I want for any plugin (like Plex).

I normally set up my home AIO with a Windows file server and a virtual 10GbE that loops back to the other VMs.
With Hyper-V built into Windows 8 (and 10), VMware lost a lot of its luster when I set up my home server. I currently run on 8.1, with different applications constrained to their respective OSes via Hyper-V (most are home-brew through SUSE Studio, which is fantastic). I really do like the backup options of Hyper-V compared to VMware; I use a single-line script to do it, "Get-VM | Export-VM -Path x:\", after which SyncBack moves the files up to cloud storage and offsite locations.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
I was under the impression he had 0 data due to the failure, but why would RAID-Z2 require more disks than RAID 6 to migrate the data? Wouldn't they all require the same, if that was the plan? (Sorry if I'm misunderstanding the post / you here, @rubylaser.)
 

rubylaser

Active Member
Jan 4, 2013
Michigan, USA
He hasn't tried to recover yet. Mdadm is very resilient, so I would imagine that the data can be recovered. If the data is recoverable, he would need a new crop of disks to copy the data from the mdadm array to the ZFS array.

If the data is not recoverable, he could obviously just reuse his current disks and not worry about data migration.
 

Kristian

Active Member
Jun 1, 2013
Here is the output of those two commands.
I think I am doing something wrong:

Code:
DS-P9A> cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md1 : active raid1 sda2[15]
  2097088 blocks [24/1] [_______________U________]

md0 : active raid1 sda1[4]
  2490176 blocks [12/1] [____U_______]

unused devices: <none>
DS-P9A> mdadm --detail --scan
mdadm: cannot open /dev/md/0_0: No such file or directory
mdadm: cannot open /dev/md/1: No such file or directory
DS-P9A>
DS-P9A>
And I have indeed no second set of 20 disks...
So I really hope to recover the data, if that's possible.
 

rubylaser

Active Member
Jan 4, 2013
Michigan, USA
Are all your disks showing up in fdisk -l? If they do, you can check for superblocks like this:
mdadm -E /dev/sd[b-t]1. If you can provide the output of fdisk -l, I can give you more exact commands to run.
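As a sketch of what I mean (the sd[b-t] range and partition numbers are assumptions; on a Synology-style layout the data arrays tend to live on higher-numbered partitions such as sd?5, so those superblocks are worth dumping too):
Code:
# Dump the md superblock on each candidate member partition
mdadm -E /dev/sd[b-t]1

# On Synology-style disks, also check the data partitions
mdadm -E /dev/sd[b-t]5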
 

Kristian

Active Member
Jun 1, 2013
@rubylaser: Here we go:

Code:
DS-P9A> fdisk -l
fdisk: device has more than 2^32 sectors, can't use all of them

Disk /dev/sdb: 2199.0 GB, 2199023255040 bytes
255 heads, 63 sectors/track, 267349 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

  Device Boot  Start  End  Blocks  Id System
/dev/sdb1  1  267350  2147483647+ ee EFI GPT
fdisk: device has more than 2^32 sectors, can't use all of them

Disk /dev/sdd: 2199.0 GB, 2199023255040 bytes
255 heads, 63 sectors/track, 267349 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

  Device Boot  Start  End  Blocks  Id System
/dev/sdd1  1  267350  2147483647+ ee EFI GPT
fdisk: device has more than 2^32 sectors, can't use all of them

Disk /dev/sde: 2199.0 GB, 2199023255040 bytes
255 heads, 63 sectors/track, 267349 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

  Device Boot  Start  End  Blocks  Id System
/dev/sde1  1  267350  2147483647+ ee EFI GPT
fdisk: device has more than 2^32 sectors, can't use all of them

Disk /dev/sdg: 2199.0 GB, 2199023255040 bytes
255 heads, 63 sectors/track, 267349 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

  Device Boot  Start  End  Blocks  Id System
/dev/sdg1  1  267350  2147483647+ ee EFI GPT
fdisk: device has more than 2^32 sectors, can't use all of them

Disk /dev/sdh: 2199.0 GB, 2199023255040 bytes
255 heads, 63 sectors/track, 267349 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

  Device Boot  Start  End  Blocks  Id System
/dev/sdh1  1  267350  2147483647+ ee EFI GPT
fdisk: device has more than 2^32 sectors, can't use all of them

Disk /dev/sdk: 2199.0 GB, 2199023255040 bytes
255 heads, 63 sectors/track, 267349 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

  Device Boot  Start  End  Blocks  Id System
/dev/sdk1  1  267350  2147483647+ ee EFI GPT
fdisk: device has more than 2^32 sectors, can't use all of them

Disk /dev/sdj: 2199.0 GB, 2199023255040 bytes
255 heads, 63 sectors/track, 267349 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

  Device Boot  Start  End  Blocks  Id System
/dev/sdj1  1  267350  2147483647+ ee EFI GPT

Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

  Device Boot  Start  End  Blocks  Id System
/dev/sda1  1  311  2490240  fd Linux raid autodetect
Partition 1 does not end on cylinder boundary
/dev/sda2  311  572  2097152  fd Linux raid autodetect
Partition 2 does not end on cylinder boundary
/dev/sda3  588  121589  971940528  fd Linux raid autodetect
fdisk: device has more than 2^32 sectors, can't use all of them

Disk /dev/sdf: 2199.0 GB, 2199023255040 bytes
255 heads, 63 sectors/track, 267349 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

  Device Boot  Start  End  Blocks  Id System
/dev/sdf1  1  267350  2147483647+ ee EFI GPT
fdisk: device has more than 2^32 sectors, can't use all of them

Disk /dev/sdi: 2199.0 GB, 2199023255040 bytes
255 heads, 63 sectors/track, 267349 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

  Device Boot  Start  End  Blocks  Id System
/dev/sdi1  1  267350  2147483647+ ee EFI GPT
fdisk: device has more than 2^32 sectors, can't use all of them

Disk /dev/sdc: 2199.0 GB, 2199023255040 bytes
255 heads, 63 sectors/track, 267349 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

  Device Boot  Start  End  Blocks  Id System
/dev/sdc1  1  267350  2147483647+ ee EFI GPT
fdisk: device has more than 2^32 sectors, can't use all of them

Disk /dev/sdl: 2199.0 GB, 2199023255040 bytes
255 heads, 63 sectors/track, 267349 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

  Device Boot  Start  End  Blocks  Id System
/dev/sdl1  1  267350  2147483647+ ee EFI GPT
fdisk: device has more than 2^32 sectors, can't use all of them

Disk /dev/sdm: 2199.0 GB, 2199023255040 bytes
255 heads, 63 sectors/track, 267349 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

  Device Boot  Start  End  Blocks  Id System
/dev/sdm1  1  267350  2147483647+ ee EFI GPT

Disk /dev/sdu: 7866 MB, 7866580992 bytes
4 heads, 32 sectors/track, 120034 cylinders
Units = cylinders of 128 * 512 = 65536 bytes

  Device Boot  Start  End  Blocks  Id System
/dev/sdu1  *  1  256  16352+  e Win95 FAT16 (LBA)
 

Kristian

Active Member
Jun 1, 2013
Sorry for the double post:
In the XPEnology GUI it now says that it has found 10/12 disks.
The "missing" 2 are there, but shown as not part of the volume (so it thinks they were never in the RAID).

What makes me really nervous is that one of the 10 disks is showing a SMART error :-/
So it would be ready for a rebuild, but that seems very risky.

It would be really great if we could tell XPEnology to just "force" the 2 disks into the RAID.

Finally, "filesystem errors" were found and it wants me to reboot.
 

rubylaser

Active Member
Jan 4, 2013
Michigan, USA
What did you do to get your array to assemble? Also, are your array and data available right now, just in a degraded state?

Code:
cat /proc/mdstat

And can I get the output of
Code:
mdadm -E /dev/sd[a-u]1

And yes, you can force your array together, but it would be nice to see the event counters for each disk via mdadm -E before proceeding.
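For reference, a forced re-assembly would look roughly like this (a sketch only; the member list has to come from your mdadm -E output, and --force should only be run after the event counters have been compared, since it overwrites them):
Code:
# Compare event counters across the members; a stale disk shows a lower count
mdadm -E /dev/sd[b-m]5 | grep -E '^/dev|Events'

# Stop the broken array, then force-assemble from the members that agree
mdadm --stop /dev/md4
mdadm --assemble --force /dev/md4 /dev/sd[b-m]5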
 

Kristian

Active Member
Jun 1, 2013
Well, actually I can't really tell...
I started XPEnology to SSH into it and get you the output.
During that process we had a power outage and the board restarted.
When I connected to the XPEnology GUI again, the volume was in degraded mode.

So I thought: well, let's fix the filesystem errors while rubylaser checks the fdisk output.
So I rebooted, and XPEnology didn't restart properly.
 

Kristian

Active Member
Jun 1, 2013
@rubylaser
Got it working again, but now all seems lost.

The volume is no longer degraded but crashed :-((((

Here is the raw output, just in case it can still be rescued.

I created a spreadsheet with the mdadm -E results (just for ease of use; the raw output is included in the forum posts):
Dropbox - DataRecovery.pdf (it can be zoomed in ;-))

As I am a novice at this, I can't say much myself.
I noticed that there was a result for sdm1 (which fdisk did not bring up, and I have only 12 4 TB disks, so this would be the 13th; I can't explain that).
This sdm1 has a different event count than all the other 12.
Besides that: the checksums are all different but reported as correct, and the update time differs from disk to disk.
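(For anyone comparing along, the event counts can be pulled out of the superblocks with something like this; the sd[b-m]5 range is an assumption based on my 12 data disks:)
Code:
# Print each member device together with its event counter
for d in /dev/sd[b-m]5; do
    echo -n "$d  "
    mdadm -E "$d" | grep Events
done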

So be my hero of the year and tell me how to rescue my data.

There could be a problem with sda due to:
Partition 1 does not end on cylinder boundary
Partition 2 does not end on cylinder boundary


Code:
DS-P9A> cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md4 : active raid6 sde5[1] sdj5[8] sdk5[9] sdb5[10] sdc5[11] sdh5[4] sdi5[5] sdl5[6] sdf5[7] sdd5[3] sdg5[2]
  39021872640 blocks super 1.2 level 6, 64k chunk, algorithm 2 [12/11] [_UUUUUUUUUUU]
 
md1 : active raid1 sda2[0] sdb2[1] sdc2[2] sdd2[3] sde2[4] sdf2[5] sdg2[6] sdh2[7] sdi2[8] sdj2[9] sdk2[10] sdl2[11]
  2097088 blocks [12/12] [UUUUUUUUUUUU]
 
md0 : active raid1 sda1[4] sdb1[0] sdc1[11] sdd1[10] sde1[9] sdf1[8] sdg1[7] sdh1[6] sdi1[5] sdj1[3] sdk1[2] sdl1[1]
  2490176 blocks [12/12] [UUUUUUUUUUUU]
 
unused devices: <none>
DS-P9A> mdadm --detail --scan
mdadm: cannot open /dev/md/0_0: No such file or directory
mdadm: cannot open /dev/md/1: No such file or directory
ARRAY /dev/md4 metadata=1.2 name=DS-P9A:4 UUID=d2c3cfb9:d8fae794:4b44eb81:6e5d6600
DS-P9A>
Code:
DS-P9A> fdisk -l
fdisk: device has more than 2^32 sectors, can't use all of them
 
Disk /dev/sdb: 2199.0 GB, 2199023255040 bytes
255 heads, 63 sectors/track, 267349 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
 
  Device Boot  Start  End  Blocks  Id System
/dev/sdb1  1  267350  2147483647+ ee EFI GPT
fdisk: device has more than 2^32 sectors, can't use all of them
 
Disk /dev/sde: 2199.0 GB, 2199023255040 bytes
255 heads, 63 sectors/track, 267349 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
 
  Device Boot  Start  End  Blocks  Id System
/dev/sde1  1  267350  2147483647+ ee EFI GPT
fdisk: device has more than 2^32 sectors, can't use all of them
 
Disk /dev/sdd: 2199.0 GB, 2199023255040 bytes
255 heads, 63 sectors/track, 267349 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
 
  Device Boot  Start  End  Blocks  Id System
/dev/sdd1  1  267350  2147483647+ ee EFI GPT
fdisk: device has more than 2^32 sectors, can't use all of them
 
 
Disk /dev/sdh: 2199.0 GB, 2199023255040 bytes
255 heads, 63 sectors/track, 267349 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
 
  Device Boot  Start  End  Blocks  Id System
/dev/sdh1  1  267350  2147483647+ ee EFI GPT
fdisk: device has more than 2^32 sectors, can't use all of them
 
Disk /dev/sdg: 2199.0 GB, 2199023255040 bytes
255 heads, 63 sectors/track, 267349 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
 
  Device Boot  Start  End  Blocks  Id System
/dev/sdg1  1  267350  2147483647+ ee EFI GPT
fdisk: device has more than 2^32 sectors, can't use all of them
 
Disk /dev/sdj: 2199.0 GB, 2199023255040 bytes
255 heads, 63 sectors/track, 267349 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
 
  Device Boot  Start  End  Blocks  Id System
/dev/sdj1  1  267350  2147483647+ ee EFI GPT
fdisk: device has more than 2^32 sectors, can't use all of them
 
Disk /dev/sdk: 2199.0 GB, 2199023255040 bytes
255 heads, 63 sectors/track, 267349 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
 
  Device Boot  Start  End  Blocks  Id System
/dev/sdk1  1  267350  2147483647+ ee EFI GPT
 
Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
 
  Device Boot  Start  End  Blocks  Id System
/dev/sda1  1  311  2490240  fd Linux raid autodetect
Partition 1 does not end on cylinder boundary
/dev/sda2  311  572  2097152  fd Linux raid autodetect
Partition 2 does not end on cylinder boundary
/dev/sda3  588  121589  971940528  fd Linux raid autodetect
fdisk: device has more than 2^32 sectors, can't use all of them
 
Disk /dev/sdf: 2199.0 GB, 2199023255040 bytes
255 heads, 63 sectors/track, 267349 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
 
  Device Boot  Start  End  Blocks  Id System
/dev/sdf1  1  267350  2147483647+ ee EFI GPT
fdisk: device has more than 2^32 sectors, can't use all of them
 
Disk /dev/sdm: 2199.0 GB, 2199023255040 bytes
255 heads, 63 sectors/track, 267349 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
 
  Device Boot  Start  End  Blocks  Id System
/dev/sdm1  1  267350  2147483647+ ee EFI GPT
fdisk: device has more than 2^32 sectors, can't use all of them
 
Disk /dev/sdi: 2199.0 GB, 2199023255040 bytes
255 heads, 63 sectors/track, 267349 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
 
  Device Boot  Start  End  Blocks  Id System
/dev/sdi1  1  267350  2147483647+ ee EFI GPT
fdisk: device has more than 2^32 sectors, can't use all of them
 
Disk /dev/sdl: 2199.0 GB, 2199023255040 bytes
255 heads, 63 sectors/track, 267349 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
 
  Device Boot  Start  End  Blocks  Id System
/dev/sdl1  1  267350  2147483647+ ee EFI GPT
fdisk: device has more than 2^32 sectors, can't use all of them
 
Disk /dev/sdc: 2199.0 GB, 2199023255040 bytes
255 heads, 63 sectors/track, 267349 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
 
  Device Boot  Start  End  Blocks  Id System
/dev/sdc1  1  267350  2147483647+ ee EFI GPT
 
Disk /dev/sdu: 7866 MB, 7866580992 bytes
4 heads, 32 sectors/track, 120034 cylinders
Units = cylinders of 128 * 512 = 65536 bytes
 
  Device Boot  Start  End  Blocks  Id System
/dev/sdu1  *  1  256  16352+  e Win95 FAT16 (LBA)
DS-P9A>
 