RAID 5 Rebuild Failed with 3 Different Drives?

Discussion in 'RAID Controllers and Host Bus Adapters' started by Samir, Nov 22, 2018.

  1. Samir

    Samir Active Member

    Joined:
    Jul 21, 2017
    Messages:
    322
    Likes Received:
    47
    I have an older DL380 G5 with the P400 installed. I have 2 RAID5 volumes, one consisting of 4x 146GB drives (400GB+ usable), and the second of 4x 300GB drives (850GB+ usable). The server came set up this way, so that's the way I left it.
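
    Quick aside on the sizes: RAID5 burns one drive's worth of space on parity, so usable capacity is (drives - 1) x drive size. A rough sketch of the math in Python (decimal GB; the OS will report smaller GiB figures):

        # RAID5 parks one drive's worth of capacity as parity, so
        # usable space = (number of drives - 1) * single drive size.
        def raid5_usable_gb(num_drives, drive_size_gb):
            return (num_drives - 1) * drive_size_gb

        print(raid5_usable_gb(4, 146))  # 438 GB decimal (~408 GiB) for the 146GB array
        print(raid5_usable_gb(4, 300))  # 900 GB decimal (~838 GiB) for the 300GB array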

    I'm pretty familiar with RAID and used it back in the late 1990s on a Mylex DAC960SUI with 2nd generation Cheetah drives. These days I prefer RAID1 to RAID5 if I use RAID at all.

    So drive #8 seemed to indicate a failure with a red light, and the volume was acting sluggish. I hooked up a monitor and rebooted the system to see what the BIOS messages were--sure enough, the BIOS indicated #8 had failed or was about to. No problem, as I had just gotten some 300GB HP SAS drives in the newer G8+ caddies. I moved one into the older-style caddy, powered down the server, swapped the drive, rebooted, let the controller start the rebuild, and went into the BIOS to watch it while it did so.
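
    (If the OS is up, the rebuild can also be watched without sitting in the BIOS, e.g. by polling HP's hpacucli tool. A sketch only, assuming hpacucli is installed and the controller is in slot 0--check with "hpacucli ctrl all show" first:)

        # Sketch: poll the P400's logical drive status via hpacucli.
        # Assumes hpacucli is installed and the controller is in slot 0;
        # during a rebuild the status typically reads "Recovering, N% complete".
        import subprocess
        import time

        while True:
            out = subprocess.run(
                ["hpacucli", "ctrl", "slot=0", "ld", "all", "show", "status"],
                capture_output=True, text=True,
            ).stdout
            print(out.strip())
            if "Recovering" not in out:
                break  # rebuild finished (or the drive dropped out again)
            time.sleep(300)  # poll every five minutes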

    After a few hours, this new drive is also showing a red light, and the BIOS is indicating something along the lines of the first phase of the restore being complete (can't remember the exact phrase now). I exited the BIOS and rebooted, and the BIOS again indicates that drive #8 has failed.

    Okay, I thought to myself--a bad drive. I swapped the other replacement drive into the G5-style caddy, powered down the server, rebooted into the BIOS again, and let it start the rebuild. After a few hours--another red light, and the same message about a failed drive when I reboot.

    So my question here is, did I really even have a bad drive, or did something fail with the backplane instead?
     
    #1
  2. Samir

    Samir Active Member

    Joined:
    Jul 21, 2017
    Messages:
    322
    Likes Received:
    47
    No thoughts?
     
    #2
  3. nthu9280

    nthu9280 Well-Known Member

    Joined:
    Feb 3, 2016
    Messages:
    1,158
    Likes Received:
    272
    Hope you have a good backup.
    With 3 drives failing on the same port, my hunch is that it's most likely a backplane/port issue. Were you able to check the failed drives with another HBA for SMART status and badblocks? That will confirm whether or not it's a drive issue.
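
    Something along these lines, once a suspect drive is hanging off another HBA (a rough sketch--/dev/sdX is a placeholder, and badblocks in write mode wipes the drive, so only point it at a pulled disk):

        # Rough sketch: SMART health plus a destructive surface scan on a
        # suspect drive attached to a different HBA. /dev/sdX is a
        # placeholder -- never aim this at a live array member.
        import subprocess

        DRIVE = "/dev/sdX"  # placeholder for the pulled drive

        # Full SMART report: health verdict, reallocated/pending sectors, error log
        subprocess.run(["smartctl", "-a", DRIVE])

        # Destructive write-mode scan (-w) with progress (-s) and verbose (-v) output;
        # wipes the drive and takes hours on a 300GB disk
        subprocess.run(["badblocks", "-wsv", DRIVE])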
     
    #3
    Last edited: Nov 28, 2018
  4. Samir

    Samir Active Member

    Joined:
    Jul 21, 2017
    Messages:
    322
    Likes Received:
    47
    It was just a volume to throw stuff on and run portable programs that we have copies of elsewhere, so no real data loss. And we also have a backup. :) Can't move forward unless you back up. ;)

    I am suspecting the backplane too. I don't have another HBA, so what other ideas are there for testing the port? I was thinking of moving one of the known-working drives from the RAID to that port, but I forget whether the HP will recognize it properly or not. The other test I was thinking of is to remove all the drives of that RAID, put in a pair of new drives (a different size), and see if they have an issue when creating a new RAID1--something like the sketch below. Thoughts?
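
    If the HP CLI tools are handy, that scratch-RAID1 test could go something like this (a sketch only--the controller slot and bay IDs like 1I:1:7 are guesses, so list the real ones first):

        # Sketch of the scratch-RAID1 port test via hpacucli. The slot
        # number and drive bay IDs (1I:1:7, 1I:1:8) are assumptions --
        # confirm them with "ctrl all show config" before creating anything.
        import subprocess

        def hpacucli(*args):
            # Thin wrapper: run hpacucli and echo whatever it prints
            result = subprocess.run(["hpacucli", *args],
                                    capture_output=True, text=True)
            print(result.stdout.strip())

        # How the controller numbers the bays, including the suspect port
        hpacucli("ctrl", "all", "show", "config")

        # Throwaway RAID1 across the suspect bay and a neighbor
        hpacucli("ctrl", "slot=0", "create", "type=ld",
                 "drives=1I:1:7,1I:1:8", "raid=1")

        # If this bay flags a failure too, the backplane/port is the prime suspect
        hpacucli("ctrl", "slot=0", "ld", "all", "show", "status")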
     
    #4