Question:
In large NAS/DAS deployments, how do system administrators deal with migration and hardware obsolescence? I know some members here are running large data arrays, and the traditional method of backing up, removing the old hardware, installing the new hardware, then restoring the data makes no sense cost-wise or time-wise. There has to be a smarter way of which I am unaware.
On my all-in-one system I am migrating away from Intel RST-based software RAID to an LSI 9341-8i based system, thanks to the excellent info I've gained from the knowledgeable members here. I've been a longtime Ubuntu/Win7/KVM/VirtualBox user, and I am switching over to ESXi 5.5 since its feature support is better than KVM/Xen/VirtualBox. I do have a plan to migrate my current data over, but as my data storage grows, this migrate/upgrade/scrap cycle becomes more complex, more time-consuming, and more costly. So I'm curious how experts with bigger systems than mine manage the upgrade process.
My own mini-upgrade, which prompted the question:
My current setup dual-boots Win 7 and Ubuntu 12.04.3:
Intel 530 SATA3 SSD provides Ubuntu/Xen and other Linux distros in LVM volumes
WD 500GB Black SATA3 provides the Win 7 boot volume and an LVM volume for KVM virtual machines
Intel RST 2TB RAID 0 NTFS volume provides common data to Ubuntu, Win 7, and all VMs via direct access or iSCSI. It also hosts VirtualBox images that both the Ubuntu and Win 7 OSes can access
2TB RAID 1 NTFS (USB 3, offboard) provides long-term storage and backup
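For anyone curious about the iSCSI piece: on Ubuntu 12.04 the tgt package can export a block device to the VMs with a short config fragment. A minimal sketch, assuming tgt is installed; the IQN, backing device, and subnet below are placeholders, not my actual config:

```
# /etc/tgt/targets.conf fragment (placeholder names and paths)
<target iqn.2014-01.local.homelab:shared-data>
    # any block device or image file works as the backing store
    backing-store /dev/md/rst-array
    # restrict access to the subnet the VMs sit on
    initiator-address 192.168.1.0/24
</target>
```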
VMware does not support Intel RST, so ESXi cannot use the SATA ports of my current setup.
So, with all the knowledge I've gained from this great site, my new setup is to dual-boot ESXi 5.5/Ubuntu from the SSD:
Install the LSI 9341-8i with 5 × WD Red WD7500BFCX 750GB drives in RAID 5, providing VMFS and LVM VM targets (via an IcyDock 6-slot hot-swap tray, modified to dampen vibration).
Migrate the RST RAID 0 NTFS array to the LSI card.
Switch the BIOS from RAID mode to AHCI mode.
Remove the 500GB drive as it becomes redundant.
Once I am fully satisfied I can perform all my tasks via ESXi VMs, say goodbye to dual-booting.
I am migrating to VMware because it is the only hypervisor that properly supports GPU sharing and PCI passthrough. I have tried every Linux-based solution to pass through the K4000 card and my Blackmagic Intensity Pro card to my VMs, but Xen 4.3.1, KVM, and VirtualBox just cannot do it. I'll miss virt-manager, and I'm going to miss KVM/SPICE.
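For anyone fighting the same passthrough battles on the Linux side: before blaming the hypervisor, it's worth confirming that the firmware and kernel actually expose IOMMU groups, which VT-d passthrough under KVM/Xen depends on. A minimal sketch using the standard sysfs paths:

```shell
#!/bin/sh
# Count the IOMMU groups the kernel exposes; zero usually means VT-d is
# disabled in the BIOS or intel_iommu=on is missing from the kernel cmdline.
groups=$(find /sys/kernel/iommu_groups -mindepth 1 -maxdepth 1 -type d 2>/dev/null | wc -l)
if [ "$groups" -gt 0 ]; then
    echo "IOMMU enabled: $groups groups"
    # List devices per group; a GPU that shares a group with other devices
    # often cannot be passed through cleanly on its own.
    for dev in /sys/kernel/iommu_groups/*/devices/*; do
        [ -e "$dev" ] || continue
        group=${dev#/sys/kernel/iommu_groups/}
        echo "group ${group%%/*}: $(basename "$dev")"
    done
else
    echo "no IOMMU groups exposed"
fi
```

If no groups show up, no amount of hypervisor tweaking will make passthrough work on that platform.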