So I've got some issues with my new IBM M1015 controller...
It is flashed to IR mode. That operation went through without any hiccups (although I had to do it via UEFI instead of DOS).
The controller has three 3TB WD Reds connected to it. The card sits in my brand-new SuperMicro X10SLM+-F running ESXi 5.5, and the M1015 is passed through to my virtual file server running Windows Server 2012 R2.
Two of the disks are configured as a RAID1 logical volume, set up via LSI's MegaRAID Storage Manager, which is installed on the virtual file server. Initialization took about two days, followed by a rebuild. After that I migrated all my data to it. Once everything was done, I rebooted the ESXi host to check that everything works as it should. Unfortunately, the card started yet another rebuild (!?). In MegaRAID Storage Manager one of the disks is listed as "degraded" and the other as "rebuilding". The third disk is fine and visible in File Explorer on the virtual file server.
I know the disks themselves are fine; there is no data corruption. So the question is: why does it rebuild my RAID1 array after every host reboot? The disks are offline and I can't access my data until the rebuild finishes... Has anyone else had this problem?
EDIT:
The two HDDs were previously used in another RAID1 array on my ASUS motherboard (old workstation). However, that shouldn't affect the new RAID1 array, since creating it wipes the data on the disks... or does it? Do I have to zero the disks first? I'd rather not, as that takes ages on two 3TB disks...
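If stale metadata from the old array is the culprit, zeroing all 3 TB shouldn't be necessary: on-disk RAID metadata (DDF as used by LSI, and the Intel RST/IMSM format used by many desktop boards' onboard RAID) lives in the first and last few megabytes of the drive. A rough sketch of wiping just those regions follows; it is demonstrated against a scratch image file, the device name and sizes are placeholders, and pointing it at a real disk destroys the partition table along with the metadata:

```shell
#!/bin/bash
# Stand-in for the real disk; replace with e.g. /dev/sdX only after
# triple-checking the device name -- this is destructive.
DISK=disk.img

# Build a 64 MiB scratch "disk" full of random junk, simulating leftover
# metadata. Skip this step on a real device.
dd if=/dev/urandom of="$DISK" bs=1M count=64 2>/dev/null

# Size in bytes (for a real block device, use: blockdev --getsize64 "$DISK").
SIZE=$(stat -c %s "$DISK")

# Zero the first 8 MiB (partition table plus any leading metadata).
dd if=/dev/zero of="$DISK" conv=notrunc bs=1M count=8 2>/dev/null

# Zero the last 8 MiB (DDF and Intel RST/IMSM metadata sit near the end).
dd if=/dev/zero of="$DISK" conv=notrunc bs=1M \
   seek=$(( SIZE / 1048576 - 8 )) count=8 2>/dev/null
```

This runs in seconds rather than the hours a full zero of a 3TB disk would take; `conv=notrunc` keeps `dd` from truncating the image file (it is harmless on a real block device).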