[Feedback wanted] OS for my AIO Homeserver [SOLVED] Recovery of a crashed XPEnology volume


rubylaser
@whitey @rubylaser @EffrafaxOfWug: Thank you guys. Without your input and help I would have probably just started all over again.
And my wife would probably have killed me with all the pictures of us and the twins lost.

The 8TB disk has arrived, and I have already emptied the 8x 3TB disks.

We will put the little ones to sleep in a bit and then I will try to mount the 12x4TB array.

It's great that you are asking, because I haven't had the time to do my homework concerning the OS.

It was a great time with XPEnology, but being the master of disaster that I am, I am sure that with the next update or the next hardware change I will again forget to set the maximum number of HDDs from 12 to 24 (or whatever number of disks I have by then) and end up in the same situation I am in now.

So I am leaning towards trying something else.
OMV sounds like a good option, but I couldn't test it yet (plugins, ease of use, performance) and I don't know enough about it.
Can you mix and match different sizes of disks?
Could I have a single array of 8x 3TB and 12x 4TB (probably with a filesystem that can recover from 3 lost disks)?
Would a big array like that be a very bad idea?

Well, there is still plenty of time to decide (copying 20TB will take a considerable while).

The next thing is trying to find a way to build a new array from the 8x 3TB disks, so I can copy the data from the 4TB array and get the rescue process done in a manageable amount of time.
I still don't know how I will do that...
With your speed needs and everything else, I think you might need to learn a bit of the command line to really do this right. I would agree with Whitey and say ZFS is probably your best bet. I would do two raidz2 vdevs (one of the 3TB disks and one of the 4TB disks). This would give you 18TB + 40TB = 58TB of usable space in one pool and 4 disks for parity (very reliable).

I would do this with ZFS on Linux, but you could also go with FreeNAS, NAS4Free, etc.
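For illustration, a minimal sketch of that layout under ZFS on Linux (pool name and device names are placeholders; in practice you'd use /dev/disk/by-id paths and double-check ashift for your drives):

Code:
# one pool, two raidz2 vdevs: 8x 3TB and 12x 4TB (placeholder device names)
zpool create -o ashift=12 tank \
    raidz2 sdb sdc sdd sde sdf sdg sdh sdi \
    raidz2 sdj sdk sdl sdm sdn sdo sdp sdq sdr sds sdt sdu
zpool status tank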
 

rubylaser
I tried to get the output from the individual disks again with mdadm -E /dev/sd[a-?]1 like I did before (the drives are enumerated from sdc to sdn now, because sda is the boot disk and sdb is the 8TB drive), but it gives this feedback:

Code:
No md superblock detected on /dev/sdc1
After a bit of trying I got output with mdadm -E /dev/sdc5

But before I spam the output of all disks: is that helpful in any way?
The sdX1 partition is something different from the sdX5 partition, isn't it?

Code:
mdadm -E /dev/sdc5
/dev/sdc5:
  Magic : a92b4efc
  Version : 1.2
  Feature Map : 0x0
  Array UUID : d2c3cfb9:d8fae794:4b44eb81:6e5d6600
  Name : DS-P9A:4
  Creation Time : Sun Jul  5 13:26:32 2015
  Raid Level : raid6
  Raid Devices : 12

Avail Dev Size : 7804374912 (3721.42 GiB 3995.84 GB)
  Array Size : 39021872640 (37214.16 GiB 39958.40 GB)
  Used Dev Size : 7804374528 (3721.42 GiB 3995.84 GB)
  Data Offset : 2048 sectors
  Super Offset : 8 sectors
  State : clean
  Device UUID : f8aa1491:ff2b3818:f09040de:9d83a357

  Update Time : Sat Aug  8 13:16:40 2015
  Checksum : 56aef724 - correct
  Events : 45456

  Layout : left-symmetric
  Chunk Size : 64K

  Device Role : Active device 9
  Array State : .AAAAAAAAAAA ('A' == active, '.' == missing)
Yes, we need to see partition 5 on those disks.
Code:
mdadm -E /dev/sd[b-m]5
If the superblock meta event counters are close, we can force the array together.
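As a rough sketch (don't run this until the event counters have been compared), the force-assemble would look something like the following; the md device names are only examples, and the partition range should match whatever letters your 12 data disks actually have:

Code:
# stop any partially assembled array first (example device name)
mdadm --stop /dev/md127
# force-assemble the RAID6 from the data partitions
mdadm --assemble --force /dev/md4 /dev/sd[c-n]5
cat /proc/mdstat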
 

Kristian
I think the superblock has somehow been updated from 0.90 to 1.2?!
Might this be the problem?

mdadm output follows in a few minutes
 

Kristian
Code:
/dev/sdc5:
  Magic : a92b4efc
  Version : 1.2
  Feature Map : 0x0
  Array UUID : d2c3cfb9:d8fae794:4b44eb81:6e5d6600
  Name : DS-P9A:4
  Creation Time : Sun Jul  5 13:26:32 2015
  Raid Level : raid6
  Raid Devices : 12

 Avail Dev Size : 7804374912 (3721.42 GiB 3995.84 GB)
  Array Size : 39021872640 (37214.16 GiB 39958.40 GB)
  Used Dev Size : 7804374528 (3721.42 GiB 3995.84 GB)
  Data Offset : 2048 sectors
  Super Offset : 8 sectors
  State : clean
  Device UUID : f8aa1491:ff2b3818:f09040de:9d83a357

  Update Time : Sat Aug  8 15:35:57 2015
  Checksum : 56af17ca - correct
  Events : 45457

  Layout : left-symmetric
  Chunk Size : 64K

  Device Role : Active device 9
  Array State : .AAAAAAAAAAA ('A' == active, '.' == missing)
Code:
/dev/sdd5:
  Magic : a92b4efc
  Version : 1.2
  Feature Map : 0x0
  Array UUID : d2c3cfb9:d8fae794:4b44eb81:6e5d6600
  Name : DS-P9A:4
  Creation Time : Sun Jul  5 13:26:32 2015
  Raid Level : raid6
  Raid Devices : 12

 Avail Dev Size : 7804374912 (3721.42 GiB 3995.84 GB)
  Array Size : 39021872640 (37214.16 GiB 39958.40 GB)
  Used Dev Size : 7804374528 (3721.42 GiB 3995.84 GB)
  Data Offset : 2048 sectors
  Super Offset : 8 sectors
  State : clean
  Device UUID : 4a8f0190:f166db50:e831265c:89949580

  Update Time : Sat Aug  8 15:35:57 2015
  Checksum : 3516e8f3 - correct
  Events : 45457

  Layout : left-symmetric
  Chunk Size : 64K

  Device Role : Active device 8
  Array State : .AAAAAAAAAAA ('A' == active, '.' == missing)
Code:
mdadm -E /dev/sde5
/dev/sde5:
  Magic : a92b4efc
  Version : 1.2
  Feature Map : 0x0
  Array UUID : d2c3cfb9:d8fae794:4b44eb81:6e5d6600
  Name : DS-P9A:4
  Creation Time : Sun Jul  5 13:26:32 2015
  Raid Level : raid6
  Raid Devices : 12

 Avail Dev Size : 7804374912 (3721.42 GiB 3995.84 GB)
  Array Size : 39021872640 (37214.16 GiB 39958.40 GB)
  Used Dev Size : 7804374528 (3721.42 GiB 3995.84 GB)
  Data Offset : 2048 sectors
  Super Offset : 8 sectors
  State : clean
  Device UUID : b974ddf6:2cb2b757:5ed3debf:8b21fea1

  Update Time : Sat Aug  8 15:35:57 2015
  Checksum : 27f0480e - correct
  Events : 45457

  Layout : left-symmetric
  Chunk Size : 64K

  Device Role : Active device 3
  Array State : .AAAAAAAAAAA ('A' == active, '.' == missing)
Code:
mdadm -E /dev/sdf5
/dev/sdf5:
  Magic : a92b4efc
  Version : 1.2
  Feature Map : 0x0
  Array UUID : d2c3cfb9:d8fae794:4b44eb81:6e5d6600
  Name : DS-P9A:4
  Creation Time : Sun Jul  5 13:26:32 2015
  Raid Level : raid6
  Raid Devices : 12

 Avail Dev Size : 7804374912 (3721.42 GiB 3995.84 GB)
  Array Size : 39021872640 (37214.16 GiB 39958.40 GB)
  Used Dev Size : 7804374528 (3721.42 GiB 3995.84 GB)
  Data Offset : 2048 sectors
  Super Offset : 8 sectors
  State : clean
  Device UUID : ab45161b:24ebc43c:3bffefca:dc5038a4

  Update Time : Sat Aug  8 15:35:57 2015
  Checksum : 3e81ad23 - correct
  Events : 45457

  Layout : left-symmetric
  Chunk Size : 64K

  Device Role : Active device 1
  Array State : .AAAAAAAAAAA ('A' == active, '.' == missing)
Code:
mdadm -E /dev/sdg5
/dev/sdg5:
  Magic : a92b4efc
  Version : 1.2
  Feature Map : 0x0
  Array UUID : d2c3cfb9:d8fae794:4b44eb81:6e5d6600
  Name : DS-P9A:4
  Creation Time : Sun Jul  5 13:26:32 2015
  Raid Level : raid6
  Raid Devices : 12

 Avail Dev Size : 7804374912 (3721.42 GiB 3995.84 GB)
  Array Size : 39021872640 (37214.16 GiB 39958.40 GB)
  Used Dev Size : 7804374528 (3721.42 GiB 3995.84 GB)
  Data Offset : 2048 sectors
  Super Offset : 8 sectors
  State : clean
  Device UUID : cce200eb:ea99267d:52ddbb83:e41ef468

  Update Time : Sat Aug  8 15:35:57 2015
  Checksum : cc55a52f - correct
  Events : 45457

  Layout : left-symmetric
  Chunk Size : 64K

  Device Role : Active device 4
  Array State : .AAAAAAAAAAA ('A' == active, '.' == missing)
 

Kristian
Code:
mdadm -E /dev/sdh5
/dev/sdh5:
  Magic : a92b4efc
  Version : 1.2
  Feature Map : 0x0
  Array UUID : d2c3cfb9:d8fae794:4b44eb81:6e5d6600
  Name : DS-P9A:4
  Creation Time : Sun Jul  5 13:26:32 2015
  Raid Level : raid6
  Raid Devices : 12

 Avail Dev Size : 7804374912 (3721.42 GiB 3995.84 GB)
  Array Size : 39021872640 (37214.16 GiB 39958.40 GB)
  Used Dev Size : 7804374528 (3721.42 GiB 3995.84 GB)
  Data Offset : 2048 sectors
  Super Offset : 8 sectors
  State : clean
  Device UUID : 2ee72410:355ca9a7:6b93841a:db992d6d

  Update Time : Sat Aug  8 15:35:57 2015
  Checksum : b6fe9ce6 - correct
  Events : 45457

  Layout : left-symmetric
  Chunk Size : 64K

  Device Role : Active device 2
  Array State : .AAAAAAAAAAA ('A' == active, '.' == missing)
Code:
mdadm -E /dev/sdi5
/dev/sdi5:
  Magic : a92b4efc
  Version : 1.2
  Feature Map : 0x0
  Array UUID : d2c3cfb9:d8fae794:4b44eb81:6e5d6600
  Name : DS-P9A:4
  Creation Time : Sun Jul  5 13:26:32 2015
  Raid Level : raid6
  Raid Devices : 12

 Avail Dev Size : 7804374912 (3721.42 GiB 3995.84 GB)
  Array Size : 39021872640 (37214.16 GiB 39958.40 GB)
  Used Dev Size : 7804374528 (3721.42 GiB 3995.84 GB)
  Data Offset : 2048 sectors
  Super Offset : 8 sectors
  State : clean
  Device UUID : 3490149f:5d012246:29dab35d:6655e279

  Update Time : Sat Aug  8 15:35:57 2015
  Checksum : 344aed60 - correct
  Events : 45457

  Layout : left-symmetric
  Chunk Size : 64K

  Device Role : Active device 7
  Array State : .AAAAAAAAAAA ('A' == active, '.' == missing)
Code:
mdadm -E /dev/sdj5
/dev/sdj5:
  Magic : a92b4efc
  Version : 1.2
  Feature Map : 0x0
  Array UUID : d2c3cfb9:d8fae794:4b44eb81:6e5d6600
  Name : DS-P9A:4
  Creation Time : Sun Jul  5 13:26:32 2015
  Raid Level : raid6
  Raid Devices : 12

 Avail Dev Size : 7804374912 (3721.42 GiB 3995.84 GB)
  Array Size : 39021872640 (37214.16 GiB 39958.40 GB)
  Used Dev Size : 7804374528 (3721.42 GiB 3995.84 GB)
  Data Offset : 2048 sectors
  Super Offset : 8 sectors
  State : clean
  Device UUID : 06f2da07:db8254da:6745872a:eb679dbb

  Update Time : Sat Aug  8 15:35:57 2015
  Checksum : 3fd24e74 - correct
  Events : 45457

  Layout : left-symmetric
  Chunk Size : 64K

  Device Role : Active device 6
  Array State : .AAAAAAAAAAA ('A' == active, '.' == missing)
Code:
mdadm -E /dev/sdk5
/dev/sdk5:
  Magic : a92b4efc
  Version : 1.2
  Feature Map : 0x0
  Array UUID : d2c3cfb9:d8fae794:4b44eb81:6e5d6600
  Name : DS-P9A:4
  Creation Time : Sun Jul  5 13:26:32 2015
  Raid Level : raid6
  Raid Devices : 12

 Avail Dev Size : 7804374912 (3721.42 GiB 3995.84 GB)
  Array Size : 39021872640 (37214.16 GiB 39958.40 GB)
  Used Dev Size : 7804374528 (3721.42 GiB 3995.84 GB)
  Data Offset : 2048 sectors
  Super Offset : 8 sectors
  State : clean
  Device UUID : 504e71f6:1de1fc4f:f66f7a7a:9a9db788

  Update Time : Sat Aug  8 15:35:57 2015
  Checksum : c11e6941 - correct
  Events : 45457

  Layout : left-symmetric
  Chunk Size : 64K

  Device Role : Active device 11
  Array State : .AAAAAAAAAAA ('A' == active, '.' == missing)
Code:
mdadm -E /dev/sdl5
/dev/sdl5:
  Magic : a92b4efc
  Version : 1.2
  Feature Map : 0x0
  Array UUID : d2c3cfb9:d8fae794:4b44eb81:6e5d6600
  Name : DS-P9A:4
  Creation Time : Sun Jul  5 13:26:32 2015
  Raid Level : raid6
  Raid Devices : 12

 Avail Dev Size : 7804374912 (3721.42 GiB 3995.84 GB)
  Array Size : 39021872640 (37214.16 GiB 39958.40 GB)
  Used Dev Size : 7804374528 (3721.42 GiB 3995.84 GB)
  Data Offset : 2048 sectors
  Super Offset : 8 sectors
  State : clean
  Device UUID : 66517eeb:6869d1a1:04e41851:60165545

  Update Time : Sat Aug  8 15:35:57 2015
  Checksum : 9b3be177 - correct
  Events : 45457

  Layout : left-symmetric
  Chunk Size : 64K

  Device Role : Active device 10
  Array State : .AAAAAAAAAAA ('A' == active, '.' == missing)
Code:
mdadm -E /dev/sdm5
/dev/sdm5:
  Magic : a92b4efc
  Version : 1.2
  Feature Map : 0x0
  Array UUID : d2c3cfb9:d8fae794:4b44eb81:6e5d6600
  Name : DS-P9A:4
  Creation Time : Sun Jul  5 13:26:32 2015
  Raid Level : raid6
  Raid Devices : 12

 Avail Dev Size : 7804374912 (3721.42 GiB 3995.84 GB)
  Array Size : 39021872640 (37214.16 GiB 39958.40 GB)
  Used Dev Size : 7804374528 (3721.42 GiB 3995.84 GB)
  Data Offset : 2048 sectors
  Super Offset : 8 sectors
  State : clean
  Device UUID : 0b36655b:76cac3f5:fa8ec530:e891bb82

  Update Time : Sat Aug  8 15:35:57 2015
  Checksum : 7c284da5 - correct
  Events : 45457

  Layout : left-symmetric
  Chunk Size : 64K

  Device Role : Active device 5
  Array State : .AAAAAAAAAAA ('A' == active, '.' == missing)
Code:
mdadm -E /dev/sdn5
/dev/sdn5:
  Magic : a92b4efc
  Version : 1.2
  Feature Map : 0x0
  Array UUID : d2c3cfb9:d8fae794:4b44eb81:6e5d6600
  Name : DS-P9A:4
  Creation Time : Sun Jul  5 13:26:32 2015
  Raid Level : raid6
  Raid Devices : 12

 Avail Dev Size : 7804374912 (3721.42 GiB 3995.84 GB)
  Array Size : 39021872640 (37214.16 GiB 39958.40 GB)
  Used Dev Size : 7804374528 (3721.42 GiB 3995.84 GB)
  Data Offset : 2048 sectors
  Super Offset : 8 sectors
  State : active
  Device UUID : ce7ba3d2:36128685:fd521b46:90986f00

  Update Time : Thu Aug  6 22:18:38 2015
  Checksum : 161d611e - correct
  Events : 45437

  Layout : left-symmetric
  Chunk Size : 64K

  Device Role : Active device 0
  Array State : AAAAAA.A.AAA ('A' == active, '.' == missing)
 

rubylaser
I think the superblock has somehow been updated from 0.90 to 1.2?!
Might this be the problem?

mdadm output follows in a few minutes
I would guess that your 12 x 4TB array was just created later than the 8 x 3TB array. The 1.2 metadata version has been the standard for more than a few years. The event counters match on almost all your disks except /dev/sdn.

Is the array running?

Code:
cat /proc/mdstat
mdadm --detail --scan
If so, it appears that Xpenology uses LVM on top of their mdadm arrays, so you'd need to get that working first, then you'd need to try to mount the LVM volume.
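Roughly, getting at the LVM layer would look like this once the md array is up (the VG/LV names below are only examples until vgdisplay/lvdisplay confirm what Synology actually created, and I'd mount read-only first):

Code:
apt-get install lvm2              # if the LVM tools aren't there yet
vgscan                            # look for volume groups on top of the md array
vgchange -ay                      # activate the logical volumes
lvdisplay                         # note the LV path, e.g. /dev/vg1000/lv
mkdir -p /mnt/restore
mount -o ro /dev/vg1000/lv /mnt/restore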
 

Kristian
I would guess that your 12 x 4TB array was just created later than the 8 x 3TB array. The 1.2 metadata version has been the standard for more than a few years. The event counters match on almost all your disks except /dev/sdn.

Is the array running?




Code:
cat /proc/mdstat
mdadm --detail --scan
If so, it appears that Xpenology uses LVM on top of their mdadm arrays, so you'd need to get that working first, then you'd need to try to mount the LVM volume.
It seems to be running. Please see the attached picture.

Here's the output:

Code:
cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md127 : active (auto-read-only) raid6 sdj5[5] sdd5[11] sdm5[6] sdl5[9] sdf5[1] sdh5[2] sdi5[4] sdg5[7] sdk5[8] sde5[3] sdc5[10]
  39021872640 blocks super 1.2 level 6, 64k chunk, algorithm 2 [12/11] [_UUUUUUUUUUU]
 
unused devices: <none>
root@kristian-A1SAM-2750F:~#
root@kristian-A1SAM-2750F:~# mdadm --detail --scan
ARRAY /dev/md/DS-P9A:4 metadata=1.2 name=DS-P9A:4 UUID=d2c3cfb9:d8fae794:4b44eb81:6e5d6600
I have installed LVM, but I have no idea how to continue :-(
 


Kristian
I'm on a walk with my kids. I'll get you some directions when I get back to my house. Try to do a vgdisplay and see what you get.
Thank you so much.
Take your time, rubylaser. I will have to go to bed in an hour anyway, because it's getting late here and our little ones are turning into early birds, I think.

Code:
vgdisplay
  --- Volume group ---
  VG Name  vg1000
  System ID   
  Format  lvm2
  Metadata Areas  1
  Metadata Sequence No  7
  VG Access  read/write
  VG Status  resizable
  MAX LV  0
  Cur LV  1
  Open LV  0
  Max PV  0
  Cur PV  1
  Act PV  1
  VG Size  36,34 TiB
  PE Size  4,00 MiB
  Total PE  9526824
  Alloc PE / Size  9526824 / 36,34 TiB
  Free  PE / Size  0 / 0   
  VG UUID  s9cBcu-BcJf-Km9s-KiSw-hBqr-C5dN-ZUvbNS
   
  --- Volume group ---
  VG Name  ubuntu-vg
  System ID   
  Format  lvm2
  Metadata Areas  1
  Metadata Sequence No  3
  VG Access  read/write
  VG Status  resizable
  MAX LV  0
  Cur LV  2
  Open LV  2
  Max PV  0
  Cur PV  1
  Act PV  1
  VG Size  111,55 GiB
  PE Size  4,00 MiB
  Total PE  28556
  Alloc PE / Size  28556 / 111,55 GiB
  Free  PE / Size  0 / 0   
  VG UUID  TW6A5T-4Iqu-9hqS-dm5r-SSvR-2om9-fWI7l1
 

Kristian
With your speed needs and everything else, I think you might need to learn a bit of the command line to really do this right. I would agree with Whitey and say ZFS is probably your best bet. I would do two raidz2 vdevs (one of the 3TB disks and one of the 4TB disks). This would give you 18TB + 40TB = 58TB of usable space in one pool and 4 disks for parity (very reliable).

I would do this with ZFS on Linux, but you could also go with FreeNAS, NAS4Free, etc.
Still undecided.
FreeNAS sounds super, and 4 parity disks seem really reliable. 58TB should be enough for a while.

OMV, on the other hand, would allow spinning disks down if I remember correctly.
That would also be a nice thing.
But that probably comes at the cost of performance...

Well, I wish I had more than those 2 1/2 hours in the evening to do my research on this.
That is the problem with learning the command line, too... just no time for it :-(
 

Kristian
Code:
lvdisplay
  --- Logical volume ---
  LV Path  /dev/vg1000/lv
  LV Name  lv
  VG Name  vg1000
  LV UUID  9l9EG3-1nKf-gQy9-LveD-RtI1-Bh3t-GoC3NI
  LV Write Access  read/write
  LV Creation host, time ,
  LV Status  available
  # open  0
  LV Size  36,34 TiB
  Current LE  9526824
  Segments  1
  Allocation  inherit
  Read ahead sectors  auto
  - currently set to  2560
  Block device  252:2
   
  --- Logical volume ---
  LV Path  /dev/ubuntu-vg/root
  LV Name  root
  VG Name  ubuntu-vg
  LV UUID  pdDHKu-lceO-KAwK-y2nU-ZoWC-quF0-IRoZoq
  LV Write Access  read/write
  LV Creation host, time ubuntu, 2015-08-07 20:47:10 +0200
  LV Status  available
  # open  1
  LV Size  79,56 GiB
  Current LE  20367
  Segments  1
  Allocation  inherit
  Read ahead sectors  auto
  - currently set to  256
  Block device  252:0
   
  --- Logical volume ---
  LV Path  /dev/ubuntu-vg/swap_1
  LV Name  swap_1
  VG Name  ubuntu-vg
  LV UUID  hL4ROS-dVr4-tceq-UK0l-5YAQ-49gJ-3r4NFc
  LV Write Access  read/write
  LV Creation host, time ubuntu, 2015-08-07 20:47:10 +0200
  LV Status  available
  # open  2
  LV Size  31,99 GiB
  Current LE  8189
  Segments  1
  Allocation  inherit
  Read ahead sectors  auto
  - currently set to  256
  Block device  252:1
 

Kristian
Try this...
Code:
mkdir -p /mnt/restore
mount /dev/vg1000/lv /mnt/restore && cd /mnt/restore && ls -la
Unfortunately, that has not worked.

Code:
mount /dev/vg1000/lv /mnt/restore && cd /mnt/restore && ls -la
mount: Falscher Dateisystemtyp, ungültige Optionen, der
  Superblock von /dev/mapper/vg1000-lv ist beschädigt, fehlende
  Kodierungsseite oder ein anderer Fehler
  Manchmal liefert das Systemprotokoll wertvolle Informationen,
  versuchen Sie »dmesg | tail« oder so
Is there a possibility to get the error messages in English?

My translation is:

mount: wrong filesystem type, bad option, the superblock of /dev/mapper/vg1000-lv is damaged, missing codepage, or some other error.
Sometimes the system log has useful information; try "dmesg | tail" or so.

I would have posted the log it refers to, but - again - I have no clue how to.
 

Kristian
Is that useful?
Code:
[  17.070414] audit: type=1400 audit(1439041067.779:5): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/sbin/dhclient" pid=810 comm="apparmor_parser"
[  17.070426] audit: type=1400 audit(1439041067.779:6): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=810 comm="apparmor_parser"
[  17.070433] audit: type=1400 audit(1439041067.779:7): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/lib/connman/scripts/dhclient-script" pid=810 comm="apparmor_parser"
[  17.070458] audit: type=1400 audit(1439041067.779:8): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="/sbin/dhclient" pid=811 comm="apparmor_parser"
[  17.070475] audit: type=1400 audit(1439041067.779:9): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=811 comm="apparmor_parser"
[  17.070483] audit: type=1400 audit(1439041067.779:10): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="/usr/lib/connman/scripts/dhclient-script" pid=811 comm="apparmor_parser"
[  17.071327] audit: type=1400 audit(1439041067.779:11): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=810 comm="apparmor_parser"
[  17.242084] SSE version of gcm_enc/dec engaged.
[  17.363875] gpio_ich: GPIO from 452 to 511 on gpio_ich
[  17.570292] init: failsafe main process (1144) killed by TERM signal
[  18.099852] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[  18.532142] IPv6: ADDRCONF(NETDEV_UP): eth1: link is not ready
[  18.869714] init: plymouth-upstart-bridge main process ended, respawning
[  18.879152] init: plymouth-upstart-bridge main process (1635) terminated with status 1
[  18.879178] init: plymouth-upstart-bridge main process ended, respawning
[  18.923228] IPv6: ADDRCONF(NETDEV_UP): eth2: link is not ready
[  19.344637] IPv6: ADDRCONF(NETDEV_UP): eth3: link is not ready
[  19.780325] init: plymouth-stop pre-start process (2077) terminated with status 1
[  21.151511] igb 0000:00:14.0 eth0: igb: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX
[  21.151648] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[  21.487702] igb 0000:00:14.1 eth1: igb: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX
[  21.487840] IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
[  21.803887] igb 0000:00:14.2 eth2: igb: eth2 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX
[  21.804024] IPv6: ADDRCONF(NETDEV_CHANGE): eth2: link becomes ready
[  22.204108] igb 0000:00:14.3 eth3: igb: eth3 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX
[  22.204246] IPv6: ADDRCONF(NETDEV_CHANGE): eth3: link becomes ready
[  46.917743] audit_printk_skb: 168 callbacks suppressed
[  46.917749] audit: type=1400 audit(1439041097.614:68): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="/usr/lib/cups/backend/cups-pdf" pid=3135 comm="apparmor_parser"
[  46.917762] audit: type=1400 audit(1439041097.614:69): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="/usr/sbin/cupsd" pid=3135 comm="apparmor_parser"
[  46.918602] audit: type=1400 audit(1439041097.614:70): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="/usr/sbin/cupsd" pid=3135 comm="apparmor_parser"
[  48.181934] EXT4-fs (sdb): mounted filesystem with ordered data mode. Opts: (null)
[  48.302028] systemd-hostnamed[3154]: Warning: nss-myhostname is not installed. Changing the local hostname might make it unresolveable. Please install nss-myhostname!
[  48.760777] EXT4-fs (dm-2): ext4_check_descriptors: Block bitmap for group 174592 not in group (block 3476142836120821743)!
[  48.760783] EXT4-fs (dm-2): group descriptors corrupted!
[  57.783688] EXT4-fs (sdo1): mounted filesystem with ordered data mode. Opts: (null)
[ 4541.591633] perf interrupt took too long (2515 > 2500), lowering kernel.perf_event_max_sample_rate to 50000
[15311.081637] systemd-hostnamed[3397]: Warning: nss-myhostname is not installed. Changing the local hostname might make it unresolveable. Please install nss-myhostname!
[18747.843589] EXT4-fs (dm-2): ext4_check_descriptors: Block bitmap for group 174592 not in group (block 3476142836120821743)!
[18747.843596] EXT4-fs (dm-2): group descriptors corrupted!
[24758.428409] EXT4-fs (dm-2): ext4_check_descriptors: Block bitmap for group 174592 not in group (block 3476142836120821743)!
[24758.428416] EXT4-fs (dm-2): group descriptors corrupted!
[24780.962212] EXT4-fs (dm-2): ext4_check_descriptors: Block bitmap for group 174592 not in group (block 3476142836120821743)!
[24780.962218] EXT4-fs (dm-2): group descriptors corrupted!
root@kristian-A1SAM-2750F:~#
 

Kristian
With your speed needs and everything else, I think you might need to learn a bit of the command line to really do this right. I would agree with Whitey and say ZFS is probably your best bet. I would do two raidz2 vdevs (one of the 3TB disks and one of the 4TB disks). This would give you 18TB + 40TB = 58TB of usable space in one pool and 4 disks for parity (very reliable).

I would do this with ZFS on Linux, but you could also go with FreeNAS, NAS4Free, etc.
Neither OMV nor FreeNAS is really what I am looking for in terms of ease of use.
I think I would prefer ZFS on Linux.
But wasn't there a rule of thumb that you need 1GB of RAM for every TB of storage?

And... even if we recover my data (and I still very much hope for that, because the photos are on the big, not-yet-recovered array), that approach won't be possible, because ZFS can't be expanded.
Obviously I would want to start with some HDDs, copy data over, clear the emptied HDDs, and then add them to the new pool.
From what I have learned, that is not possible with ZFS.
 

rubylaser
UPDATE: We were able to recover some data from Kristian's array with some effort (fsck, rebuilding the group descriptors and the superblocks). But of his original 20TB of data, only 4.8TB was recoverable without utilizing something like Scalpel or PhotoRec.
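For anyone who hits the same ext4 "group descriptors corrupted" errors, the general shape of that repair was roughly the following (the LV path and backup-superblock location are examples; image the disks first if you possibly can, because fsck writes to the volume):

Code:
# list where the backup superblocks / group descriptors live (read-only)
dumpe2fs /dev/vg1000/lv | grep -i superblock
# repair using a backup superblock (32768 is a common location, not a given)
e2fsck -f -y -b 32768 /dev/vg1000/lv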
 

rubylaser
Neither OMV nor FreeNAS is really what I am looking for in terms of ease of use.
I think I would prefer ZFS on Linux.
But wasn't there a rule of thumb that you need 1GB of RAM for every TB of storage?

And... even if we recover my data (and I still very much hope for that, because the photos are on the big, not-yet-recovered array), that approach won't be possible, because ZFS can't be expanded.
Obviously I would want to start with some HDDs, copy data over, clear the emptied HDDs, and then add them to the new pool.
From what I have learned, that is not possible with ZFS.
You don't need 1GB of RAM for every 1TB with ZFS. That's an old recommendation for enterprise customers. If you have 16GB, you should be fine; the only thing that would suffer from a lack of RAM is performance.

With your family photos recovered off the big array, I would strongly consider moving to a different storage platform. I used to love mdadm (I created tutorials, helped others troubleshoot issues on the Ubuntu forums, etc.), but I only use ZFS or SnapRAID at home anymore. I use ZFS where I need fast storage (mirrors of SSDs with an Intel S3700 for my log device); ZFS is my VM storage medium. I use SnapRAID for everything else that's bulk storage (it is super flexible, supports mixed drive sizes, and up to 6 parity disks). SnapRAID + AUFS/mhddfs handles my movies, pictures, TV shows, home movies, document archives, etc.

This combo works great, and both can be managed in Linux. But you will need to get your hands dirty with the command line to manage these (they are both VERY easy to manage, though).
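To give an idea of how little there is to manage, a minimal snapraid.conf sketch (all paths and disk names here are made up for illustration):

Code:
# /etc/snapraid.conf (illustrative paths)
parity  /mnt/parity1/snapraid.parity
content /var/snapraid/snapraid.content
content /mnt/disk1/snapraid.content
data d1 /mnt/disk1/
data d2 /mnt/disk2/
exclude *.tmp
After copying your data over, snapraid sync builds the parity and a periodic snapraid scrub verifies it.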
 

Kristian
UPDATE: We were able to recover some data from Kristian's array with some effort (fsck, rebuilding the group descriptors and the superblocks). But of his original 20TB of data, only 4.8TB was recoverable without utilizing something like Scalpel or PhotoRec.
And once more, to let the whole world know: rubylaser is my hero of the year, no, of the century!
Thank you. You saved me from getting divorced :)