I have been thinking about consolidating the storage used by the various virtual machines on my vSphere host, and possibly another server.
Machines:
vSphere host
-- SABnzb server
-- Minecraft server
Business Server
-- Windows Small Business Server 2011
-- Media (Movies, Music, TV, Photos, business docs).
My first thought was to set up an OpenFiler iSCSI SAN server using an old C2D, an MSI LGA775 board, 4GB of RAM and my HP P812 SAS controller.
After the initial setup I was able to configure it for one of my two arrays (media: 7.2TB hardware RAID 5), but OpenFiler could not see my second array (vSphere: 2.73TB hardware RAID 5). Connecting from vSphere was simple following one of the many guides out there, but I could not get a connection at all via the iSCSI initiator bundled with SBS 2011.
Things then went from bad to worse. Re-installs had the same issues, with more problems cropping up (not being able to delete an existing logical volume, etc.). I also had an issue where I instructed it to bond my two Intel NICs and leave the Realtek alone; it bonded all three and I could no longer get into the management interface. I resorted to booting a Fedora Live CD, removing the partitions on the install drive and on the one RAID drive OpenFiler had allowed me to use, and rebooting into the OpenFiler installer. This time it told me there were no drives available to install to at all.
At this point I decided to ditch OpenFiler.
While playing with the OpenFiler networking to try to resolve the bonding issue, I logged in to the command line and noticed that the underlying Linux seemed to be based on RHEL. This got me wondering about using CentOS (the community build of RHEL) as the simple SAN OS. I installed the full CentOS 6.3 setup (minimal may also be fine) from the Live DVD and found a couple of guides on installing the iSCSI software. Combining details from both guides gave me a very simple set of commands to run (one page for the PV -> VG -> LV setup, one page for the iSCSI config), and within 20 minutes of installing CentOS I had both my vSphere and Windows servers attached to both of my iSCSI shared arrays. I needed to reboot the CentOS box once to pick up some config changes, but that was probably just my lack of knowledge of the tgtadm parameters (I later discovered there is a config refresh option).
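The two "pages" of setup boil down to something like the sketch below. Device names, volume sizes and the IQN are examples, not my actual values, and everything needs root:

```shell
# -- LVM side: carve a logical volume out of the RAID array --
pvcreate /dev/sdb                          # array exposed by the P812 (example device)
vgcreate vg_san /dev/sdb
lvcreate -L 2.7T -n lv_vsphere vg_san

# -- iSCSI side: create a target and attach the LV as a LUN --
tgtadm --lld iscsi --op new --mode target --tid 1 \
       --targetname iqn.2012-09.local.san:vsphere
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
       --backing-store /dev/vg_san/lv_vsphere
# Accept any initiator (tighten to specific addresses once it works)
tgtadm --lld iscsi --op bind --mode target --tid 1 --initiator-address ALL
```

Note that changes made with tgtadm live only in memory; `tgt-admin --dump > /etc/tgt/targets.conf` persists them, and `tgt-admin --update ALL` is the config refresh option that would have saved me the reboot.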
I have three NICs in my SAN box (two PCI Intel GT cards plus the motherboard's Realtek controller). I pointed the vSphere initiator at one GT IP address and the SBS 2011 initiator at the other. Both servers could see both iSCSI shares but each had its own dedicated 1GbE link into the server. Management is via the Realtek interface. All connections are plugged into my HP 1810-24G switch with no segregation from my main network at this point. Moving iSCSI to another switch and possibly bonding the network ports (LACP) is the next stage.
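vSphere and Windows both have GUI initiators, but for completeness, attaching a Linux initiator to one of the GT addresses would look roughly like this with open-iscsi (the portal IP and IQN are examples):

```shell
# CentOS/RHEL initiator package
yum install iscsi-initiator-utils

# Ask the SAN box what targets it offers on this portal
iscsiadm -m discovery -t sendtargets -p 192.168.1.10:3260

# Log in to a discovered target; it then appears as a local block device
iscsiadm -m node -T iqn.2012-09.local.san:vsphere \
         -p 192.168.1.10:3260 --login
```

Pointing each initiator at a different portal IP is what gives each server its own dedicated 1GbE path even though both can reach both targets.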
After mounting the large media share on my SBS 2011 server I started copying some of my media from my desktop. The connectivity hardware chain is WD Green 2TB -> Broadcom NIC (onboard ASRock Extreme 4 Z68) -> HP Procurve 1810-24G -> Intel GT -> 5x 2TB Seagate Barracuda hardware raid 5 array on HP P812 controller with 1GB FBWC.
Write speeds came in at between 40 and 95 MB/s depending on file size and the volume of files copied (320GB of TV shows with cover art, for example).
Not bad as a starting point for a RAID 5 array. I imagine the WD Green is slowing the transfer down quite a bit, so I will try a transfer from my Intel 520 120GB SSD next.
Because I am using a hardware RAID controller and iSCSI rather than running the server as a NAS box, I am not that interested in ZFS at this level. Maybe at the destination server level, but not on the SAN box.
What I now have is a SAN box sharing two arrays that I can connect to any VM directly (Windows or Linux) as an unformatted disk. I can grow and shrink the storage shared out on the SAN. I can decide which servers can connect to which storage pools, and which users can connect from those servers if I want to. Most of my storage is managed on one expensive RAID controller with one set of disks, and only boot drives and any SSDs remain local to the servers. Speeds are OK for what I need but can probably be tweaked. Network bandwidth is something to keep an eye on, but while I am getting speeds in the range of green-to-midrange consumer hard drives, that is acceptable for me. Contention between different servers / VMs may slow things down, but this can be improved by adding more GbE ports (dual or quad cards) or upgrading to Fibre Channel / InfiniBand / 10GbE networking.
The package for CentOS is scsi-target-utils (yum install scsi-target-utils), and there is a separate one for the initiator (iscsi-initiator-utils). The daemon is called tgtd (controlled via service tgtd [option]) and, as you would expect, the config file lives in the /etc/tgt directory (targets.conf) and contains lots of commented-out example configs.
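A persistent config in /etc/tgt/targets.conf is simpler than issuing tgtadm commands by hand. A minimal entry might look like this (the IQN, LV path and initiator addresses are examples):

```shell
# /etc/tgt/targets.conf -- loaded by tgtd at startup
<target iqn.2012-09.local.san:media>
    # Block device exported as the LUN
    backing-store /dev/vg_san/lv_media
    # Restrict access to the two servers' iSCSI interfaces
    initiator-address 192.168.1.20
    initiator-address 192.168.1.21
</target>
```

After editing, `tgt-admin --update ALL` applies the file without restarting tgtd.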
Just to be clear, tgtd does not provide redundancy or backup of the data; it only makes it accessible to other machines via the iSCSI protocol (other target implementations are available). For data redundancy and integrity, maybe look at ZoL (ZFS on Linux).
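If you did go the ZoL route on a destination server, a minimal pool over two iSCSI-attached disks might look like this (device and dataset names are examples; the iSCSI LUNs show up as ordinary block devices):

```shell
# Mirror two iSCSI-attached disks into a pool, then create a dataset
zpool create tank mirror /dev/sdb /dev/sdc
zfs create tank/media
zfs set compression=on tank/media
```

That puts the checksumming and redundancy at the consumer end while the SAN box stays a dumb block-device exporter.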
RB