Lifting the skirt of Openfiler and then doing away with it altogether.


RimBlock

Active Member
I have been thinking about consolidating the storage used by the various virtual machines on my vSphere host and possibly another server.

Machines:
vSphere host
-- SABnzbd server
-- Minecraft server

Business Server
-- Windows Small Business Server 2011
-- Media (Movies, Music, TV, Photos, business docs).

My first thought was to set up an OpenFiler iSCSI SAN server using an old C2D, an MSI LGA775 board, 4GB of RAM and my P812 SAS controller.

After the initial setup I was able to configure it for one of my two arrays (media: 7.2TB hardware RAID 5), but OpenFiler could not see my second array (vSphere: 2.73TB hardware RAID 5). Connecting from vSphere was simple following one of the many guides out there, but I could not get a connection at all via the iSCSI initiator bundled with SBS 2011.

Things then went from bad to worse. Re-installs still had the same issues, with more problems cropping up (not being able to delete an existing logical volume, etc.). I also had an issue where I instructed it to bond my two Intel NICs and leave the Realtek alone; it bonded all three and then I could not get into the management interface. I resorted to booting a Fedora Live CD, removing the partitions on the install drive and the one RAID array OpenFiler had allowed me to use, and rebooting into the OpenFiler installer. This time it told me there were no drives available to install to at all.

At this point I decided to ditch OpenFiler.

While playing with the OpenFiler networking to try and resolve the bonding issue, I logged in to the command line and noticed the version of Linux used seemed to be based on RHEL. This got me wondering about using CentOS (the community rebuild of RHEL) as the simple SAN OS. I installed the full CentOS 6.3 setup (minimal may also be fine) from the Live DVD and found a couple of guides on setting up the iSCSI software. Combining details from both guides gave me a very simple set of commands to run (one page for the volume -> VG -> LV setup, one page for the iSCSI config), and within 20 minutes of installing CentOS I had both my vSphere and Windows servers attached to both of my shared iSCSI arrays. I needed to reboot the CentOS box once to refresh some config changes, but that was probably just my lack of knowledge of the tgtadm parameters (there was a config refresh option I later discovered).
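
For anyone wanting to do the same, the rough shape of those commands is something like the following. The device, VG/LV names and IQN below are just placeholders for illustration, not my actual setup:

# Carve the RAID array (placeholder /dev/sdb) into LVM storage
pvcreate /dev/sdb
vgcreate vg_san /dev/sdb
lvcreate -l 100%FREE -n lv_vsphere vg_san

# Start the target daemon and share the LV as an iSCSI LUN (tid and IQN are placeholders)
service tgtd start
tgtadm --lld iscsi --op new --mode target --tid 1 -T iqn.2012-09.local.san:vsphere
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/vg_san/lv_vsphere
tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL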

I have three NICs in my SAN box (two Intel GT cards, which are PCI, plus the motherboard's Realtek controller). I linked the vSphere initiator to one GT IP address and the SBS 2011 initiator to the other GT IP address. Both servers could see both iSCSI shares but had their own dedicated 1GbE link into the server. Management is via the Realtek interface. All connections are plugged into my HP 1810-24G switch with no segregation from my main network at this point. I will move on to another switch and possibly look at bonding the network ports (LACP) as the next stage.

After mounting the large media share on my SBS 2011 server I started copying some of my media from my desktop. The connectivity hardware chain is WD Green 2TB -> Broadcom NIC (onboard ASRock Extreme 4 Z68) -> HP Procurve 1810-24G -> Intel GT -> 5x 2TB Seagate Barracuda hardware raid 5 array on HP P812 controller with 1GB FBWC.

Write speeds came in at between 40 and 95MB/s depending on the size and number of files copied (320GB worth of TV shows with cover art, for example).

Not bad as a starting point for a RAID 5 array. I would imagine the WD Green is slowing the transfer down quite a bit, so I will try a transfer from my Intel 520 120GB SSD next.

Because I am using a hardware RAID controller and sharing via iSCSI rather than running the server as a NAS box, I am not so interested in ZFS at this level. Maybe at the destination server level, but not on the SAN box.

What I now have is a SAN box sharing two arrays that I can connect to any VM directly (Windows or Linux) as an unformatted disk. I can grow and shrink the storage shared out on the SAN. I can decide which servers can connect to which storage pools, and which users can connect from those servers if I want to. Most of my storage is managed on one expensive RAID controller with one set of disks, and I only have boot drives and any SSDs local to the servers now. Speeds are OK for what I need but can probably be tweaked. Network bandwidth is something to keep an eye on, but when I am getting speeds in the range of green to midrange consumer hard drives, that is acceptable for me. Contention between different servers / VMs may slow things down, but this can be improved by adding more GbE ports (dual or quad cards) or upgrading to Fibre Channel / InfiniBand / 10GbE networking.
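
To give an idea of what I mean by growing the storage and deciding who can connect, it is roughly this sort of thing (LV name, tid, address and CHAP credentials are placeholders):

# Grow a shared LV by 500GB; the server using the LUN then rescans and grows its own filesystem
lvextend -L +500G /dev/vg_san/lv_media

# Swap the open-to-all binding for a single server's IP, and optionally add a CHAP user
tgtadm --lld iscsi --op unbind --mode target --tid 1 -I ALL
tgtadm --lld iscsi --op bind --mode target --tid 1 -I 192.168.10.20
tgtadm --lld iscsi --op new --mode account --user sbsuser --password examplepass
tgtadm --lld iscsi --op bind --mode account --tid 1 --user sbsuser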

The package for CentOS is scsi-target-utils (yum install scsi-target-utils), and there is also an iscsi-initiator-utils package for the initiator side. The daemon is called tgtd (controlled via service tgtd [option]) and, as you would expect, the config file lives in the /etc/tgt directory and contains lots of commented-out example configs.
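
For reference, a persistent version of the target config goes in /etc/tgt/targets.conf and looks something like this (IQN, LV path and initiator address are placeholders), with tgt-admin --update ALL reloading it without a reboot:

# /etc/tgt/targets.conf
<target iqn.2012-09.local.san:media>
    backing-store /dev/vg_san/lv_media
    initiator-address 192.168.10.20
</target>

# Apply config file changes to the running tgtd
tgt-admin --update ALL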

Just to be clear, the iSCSI target (tgtd) does not provide redundancy or backup of the data; it only makes the storage accessible to other machines via the iSCSI protocol (other protocols are available). Maybe look at ZoL (ZFS on Linux) for data redundancy and integrity.

RB
 

Patrick

Administrator
Staff member
I have OpenFiler installed but have not been using it for some time, for reasons similar to what you mention. There are lots of folks who use it successfully, though.
 

sboesch

Active Member
I had used OpenFiler at work for overflow when our SAN was starting to fill, and I ran into all the same problems that RimBlock experienced. I have tried numerous open-source SAN solutions and have walked away from all of them.
At this point I am using Server 2008 R2 Storage Server with iSCSI targets at the house. I have not had a single issue with it to date, although the writes can be a tad slow when using VHDs.
 

RimBlock

Active Member
I managed to find a person selling an old Asus P5Q Pro LGA775 board and a C2D 8400 (mine is a 6250 or some such). The advantage is that it has 2x PCIe 2.0 x16 slots plus 3x PCIe x1 and 2x PCI slots.

I was happy to find it works fine with the HP P812, so I then added my Intel quad-port ET NIC and one PCI GT NIC, and I also had to add an 8400GS (PCI) video card I luckily had handy.

I put CentOS minimal on, bonded the quad network ports and put them on a different subnet via a VLAN on the switch. I then sorted out the iSCSI targets, which was easy as the volumes and volume groups were already there. I took a single GbE connection from each of the two servers and added them to ports on the same VLAN (the other ports from the servers went to the main network), and after tweaking the firewall they all connected and the data I had previously copied over was there.
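
For anyone following along, the bonding and firewall side on CentOS 6 looks roughly like this (interface names, addresses and bonding mode are placeholders, and 802.3ad needs LACP configured on the matching switch ports):

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
IPADDR=192.168.10.1
NETMASK=255.255.255.0
BOOTPROTO=none
ONBOOT=yes
BONDING_OPTS="mode=802.3ad miimon=100"

# /etc/sysconfig/network-scripts/ifcfg-eth1 (repeat for each bonded port)
DEVICE=eth1
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
ONBOOT=yes

# Open the iSCSI port (TCP 3260) in the firewall
iptables -I INPUT -p tcp --dport 3260 -j ACCEPT
service iptables save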

I have just copied 150GB (large files) from my client PC (Hitachi Green drive -> Broadcom NIC) to my Win SBS 2011 Standard server and was getting write speeds to the array on the SAN of around 80-95MB/s (remember the storage is on a RAID 5 array with 1GB FBWC provisioned as 25% read / 75% write).

Not too shabby for a simple setup.

I am sure there are lots of people happy with OpenFiler, but I just hit problem after problem, so I went back to basics and it just seemed so much easier. It is not a turnkey solution, but it is simple and the steps are well documented (install, LVM for storage setup, bonding for network link grouping and iSCSI target for sharing the storage).

I used CentOS minimal in the end, as I had a number of problems getting the network bonding and routing working with the Live DVD install. Minimal is better for a server build anyway.

RB
 

Patrick

Administrator
Staff member
These are the types of guides STH needs. Great feedback RB!
 

RimBlock

Active Member
Just curious but why iSCSI? Why not just use NFS?
iSCSI is a block-level sharing scheme, so you can pass a block of storage to another server, which that server sees as another bare disk drive. The server can format it using its preferred filesystem (ext3/ext4 for Linux, NTFS for Windows). The remote server manages the storage allocated to it as if it were a directly connected disk, except that rather than connecting via a SATA/SAS cable you connect via an Ethernet / Fibre Channel / InfiniBand connection.

So in essence I can take multiple disks, attach them to a single RAID card to create one big pool of storage, then chop this storage up as I want and pass it out to other machines as if they were attaching unformatted HDDs.

With NFS, filesystem management becomes the duty of the NFS server rather than the server using the NFS share.
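
To make the block-level point concrete, the Linux initiator side is roughly this (target address, IQN and mount point are placeholders; Windows does the equivalent through the iSCSI Initiator control panel and Disk Management):

# Install the initiator, discover the SAN box's targets and log in
yum install iscsi-initiator-utils
iscsiadm -m discovery -t sendtargets -p 192.168.10.1
iscsiadm -m node -T iqn.2012-09.local.san:media -p 192.168.10.1 --login

# The LUN appears as a plain unformatted disk (e.g. /dev/sdb); format and mount as normal
mkfs.ext4 /dev/sdb
mount /dev/sdb /mnt/media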

RB
 

RimBlock

Active Member
An interesting read from the guys at UMass on performance in a one-to-one environment comparing iSCSI and NFS (v2/v3/v4) over an IP-based network.

9 Concluding Remarks
In this paper, we use NFS and iSCSI as specific instantiations of file- and block-access protocols and experimentally compare their performance in environments where storage is not shared across client machines. Our results demonstrate that the two are comparable for data-intensive workloads, while the latter outperforms the former by a factor of 2 or more for meta-data intensive workloads. We identify aggressive meta-data caching and update aggregation allowed by iSCSI to be the primary reasons for this performance difference. We propose enhancements to NFS to improve its meta-data performance and present preliminary results that show its effectiveness. As part of future work, we plan to implement this enhancement in NFS v4 and study its performance for real application workloads.
Full paper is here (pdf).

RB