RAID Migration Question for the Experts


rayt · New Member · Apr 18, 2013
Question:

In large NAS/DAS deployments, how does the system administrator deal with migration and hardware obsolescence? I know some members here are running large data arrays, and the traditional method of backing up, removing the old hardware, installing the new hardware, and then restoring the data makes no sense cost-wise or time-wise. There has to be a smarter way to do it that I am unaware of.

On my all-in-one system I am migrating away from Intel RST-based software RAID to an LSI 9341-8i-based system, thanks to the excellent info I've gained from the knowledgeable members here. I've been a longtime Ubuntu/Win7/KVM/VirtualBox user, and I am switching over to ESXi 5.5 since its feature support is better than KVM/Xen/VirtualBox. I do have a plan to migrate my current data over, but as my data storage grows, this migrate/upgrade/scrap cycle becomes much more complex, time-consuming, and costly. So I'm curious how experts with bigger systems than mine manage the upgrade process :cool:


My own mini-upgrade, which prompted the question:

My current setup dual boots Win 7 and Ubuntu 12.04.3:
Intel 530 SATA3 SSD holds Ubuntu, Xen, and other Linux distros in LVM volumes
WD 500GB Black SATA3 holds the Win7 boot volume and an LVM volume for KVM virtual machines
Intel RST 2TB RAID0 NTFS volume provides common data to Ubuntu, Win7, and all the VMs via direct access or iSCSI (a rough iSCSI sketch follows this list); it also hosts VirtualBox images that both OSes can access
2TB RAID1 NTFS (USB3, offboard) provides long-term storage and backup
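
For the iSCSI part, this is roughly how the shared volume gets exported with tgt on Ubuntu; the device path and IQN below are placeholders, not my actual config:

Code:
# Minimal sketch of exporting a block device over iSCSI with tgt.
# The device path and IQN are made-up examples.
sudo apt-get install tgt

# Create target 1 under a placeholder IQN
sudo tgtadm --lld iscsi --op new --mode target --tid 1 \
    -T iqn.2013-04.local.nas:shared-data

# Back LUN 1 with the RAID0 volume (placeholder device path)
sudo tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
    -b /dev/md126p1

# Allow all initiators (tighten this in a real setup)
sudo tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL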

VMware ESXi does not support Intel RST, so it cannot use the SATA ports of my current setup while they are in RAID mode.

So, with all the knowledge I've gained from this great site,

my new setup is to dual boot ESXi 5.5/Ubuntu from the SSD:
Install the LSI 9341-8i with five WD Red WD7500BFCX 750GB drives in RAID5, providing VMFS and LVM VM targets (via an Icy Dock 6-slot hot-swap tray, modified to dampen vibration)
Migrate the RST RAID0 NTFS array to the LSI card (a rough sketch of the copy follows this list)
Switch the BIOS from RAID mode to AHCI mode
Remove the 500GB drive as it becomes redundant
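
The copy itself is the simple part; something like this, where the mount points and device names are placeholders for wherever the old RST volume and the new array actually land:

Code:
# Rough sketch of moving the old RST RAID0 contents onto the new LSI array.
# Device names and mount points are assumptions, not my real paths.
sudo mkdir -p /mnt/old_rst /mnt/new_lsi
sudo mount -t ntfs-3g /dev/md126p1 /mnt/old_rst   # old Intel RST volume
sudo mount /dev/sdb1 /mnt/new_lsi                 # new array, already formatted

# -a preserves permissions and timestamps; safe to re-run if interrupted
sudo rsync -a --progress /mnt/old_rst/ /mnt/new_lsi/

# Verify before wiping the old array
diff -rq /mnt/old_rst /mnt/new_lsi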

Once I am fully satisfied that I can perform all my tasks via ESXi VMs, I'll say goodbye to dual booting.

I am migrating to VMware because it is the only hypervisor that properly supports GPU sharing and PCI passthrough. I have tried every Linux-based solution to pass through the K4000 card and my Blackmagic Intensity Pro card to my VMs, but Xen 4.3.1, KVM, and VirtualBox just cannot do it. I'll miss virt-manager, and I'm going to miss KVM/SPICE.
 

rayt · New Member · Apr 18, 2013
To further add to the question asked above, here is a hypothetical example of the problem:

Let's say that in two years either my RAID fails or I outgrow the storage capacity, and I wish to replace or add a 750GB disk. But the 750 has been pulled from the market, so there are no disks available for purchase (not likely for the Reds, but it could happen). Would my only choices be to buy a complete new set of drives with a second controller and copy/mirror over the old data? Add a replacement drive that is not of the same generation, or is of larger capacity? Slowly swap out the drives to a newer generation, letting the array rebuild each time a drive is replaced? (A sketch of that rolling swap follows below.) This has to be a common occurrence in data centers, and there must be a cost-effective strategy to deal with the problem.
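
For reference, on Linux software RAID the rolling swap looks roughly like this; a hardware controller like the LSI has analogous steps through its BIOS or MegaCLI. Device names here are placeholders:

Code:
# Sketch of replacing one array member with a larger drive under mdadm.
# /dev/md0, /dev/sdc1, /dev/sdd1 are made-up names.
sudo mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1
sudo mdadm /dev/md0 --add /dev/sdd1
cat /proc/mdstat    # wait for the rebuild before touching the next disk

# Once every member has been replaced with a larger drive,
# grow the array and then the filesystem into the new space:
sudo mdadm --grow /dev/md0 --size=max
sudo resize2fs /dev/md0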

Thanks in advance,

Ray
 