Custom Build Data Storage


Ukyo

New Member
Apr 22, 2011
Hi everyone,

For once, I found a site and forum that has real experience with the stuff I work with. :)

Our company leases servers as dedicated systems and application hosts. We have been looking
to stop using hard drives in our blades and instead run a central storage system that the
blades our customers use can boot from over iSCSI.

Initially, we spoke with Dell, and have been considering their Compellent product line,
which they recently acquired. It certainly met our needs, with dual head ends for
redundancy, thin provisioning, and several other features. However, the cost is just as
impressive: around $30k just to get the most basic setup with only a single 15-bay drive
shelf.

I am new to developing storage solutions, as my primary background is networking. I found
sites like TheBigWHS with plenty of details, and have been looking at using an LSI 3ware
card in a similar fashion.

So the idea was to build a dual Nehalem-series Xeon server with 32GB of RAM. After reading,
and from really bad past experiences with LSI MegaRAID and other brands, I began looking back
to (what has always been reliable for me) the LSI 3ware series.

With expansion in mind, I am thinking of using the 3ware 9750-8e. This gives me the dual
external ports should I decide to attempt a dual-link daisy-chain, or run two series of
daisy-chained expanders. I did notice that the 3ware only supports 127(?) devices compared
to the 256 of most others, but to be honest, I think that is more than enough for what we
are doing. I am honestly afraid that the 24Gb/s bandwidth allotment per external port will
end up getting taxed enough as it is when it comes to 120 or so devices.
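
As a quick sanity check on that worry, here is my back-of-the-envelope math (assumed
numbers: 6Gb/s SAS lanes, ~120MB/s sequential per SATA-II drive):

# Rough ceiling for one external SFF-8088 wide port (4 lanes of 6Gb/s SAS).
SAS_LANES = 4
LANE_GBPS = 6
PORT_MBPS = SAS_LANES * LANE_GBPS * 1000 / 8   # ~3000 MB/s raw ceiling

DRIVES = 120
DRIVE_MBPS = 120    # assumed sequential rate of a typical SATA-II disk

per_drive = PORT_MBPS / DRIVES
print(f"Port ceiling: {PORT_MBPS:.0f} MB/s")
print(f"Share per drive with {DRIVES} drives: {per_drive:.0f} MB/s")
print(f"Fraction of one drive's sequential rate: {per_drive / DRIVE_MBPS:.0%}")
# ~25 MB/s per drive if everything streams at once -- plenty for random
# iSCSI traffic, but tight if many customers do sequential work together.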

(I am also considering getting a 3ware 9750-16i4e and buying two of these:
http://www.scsi4me.com/tmc-sm-088-adapter-dual-ext-mini-sas-26-dual-mini-sas-36-pc-bracket.html
to remap all four of the internal ports to external ports, so I can directly connect five
storage chassis and not have to daisy-chain at all.)

My thoughts have been to use the Norco 24-bay chassis from Amazon for $350:
http://www.amazon.com/RPC-4224-Serv...?s=electronics&ie=UTF8&qid=1303442071&sr=1-15

I had been debating between the HP and Intel expanders due to the number of ports; however,
I now see the Chenbro CK23601 might be a perfect fit (once it is available).
http://www.chenbro.com/corporatesite/products_detail.php?sku=187

(I did notice, however, that if I wanted to dual-link with this, I would have to use at
least one SFF-8087 -> SFF-8088 adapter bracket for the dual IN, since the 2nd external port
seems to be marked specifically as an OUT and the 2nd IN is internal. A second adapter
bracket would then be needed to make an internal OUT available externally. I have yet to
check whether there are any 2-port internal<->external adapters to save slot space.)

As I said, I am still debating dual-linking. I would hate to dual-link and then have a
server in the middle of the daisy-chain fail or disconnect, taking the whole chain down.
This makes me want to single-link a chain of two expanders off each of the RAID card's
external ports. Also, if I use one of the internal OUT ports to go external on the Chenbro
for a dual-linked daisy-chain, the chassis is only left with 20 drive ports again.

As far as powering the expansion chassis goes, I was going to say to heck with paying for a
Supermicro board, or even a cheap $30 motherboard, and just jumper the green PS_ON pin to a
neighboring black ground pin on the ATX connector. ;) (I have done this for a small
expansion tower at home.)

Any thoughts or re-considerations on this?

RAID Handling:
As for handling the RAID types, I was considering setting up the drives as follows:
Split the drives into groups of 24 (RAID-6'd, with 2 hot-spares per batch).
So 20 drives of space, 2 parity drives, and 2 hot-spares.
My thoughts were to use the RAID features to RAID-0 all the 6's together. It looked like
I could use live RAID expansion to simply add new RAID-6's into the RAID-0 whenever we
expand and add on. Thoughts? (Quick capacity math below.)
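
Here is the quick capacity math per 24-drive batch, assuming a placeholder drive size of
2TB (we have not picked drives yet):

# Capacity of one 24-drive batch: RAID-6 over 22 drives + 2 hot spares.
DRIVES_PER_BATCH = 24
HOT_SPARES = 2
PARITY = 2            # RAID-6 burns two drives for parity
DRIVE_TB = 2.0        # placeholder size, swap in the real drive capacity

array_drives = DRIVES_PER_BATCH - HOT_SPARES   # 22 drives in the RAID-6
data_drives = array_drives - PARITY            # 20 drives carry data
usable_tb = data_drives * DRIVE_TB

print(f"Usable per batch: {usable_tb:.0f} TB "
      f"({data_drives} data + {PARITY} parity + {HOT_SPARES} spares)")
# A RAID-0 of N batches scales linearly to N * usable_tb, but losing any
# one RAID-6 takes out the whole RAID-0, so the parity and spares matter.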


Redundancy:
I really liked the idea of the "dual head" redundancy Dell offered. Is there any way to set
up a dual-system like that, with a 2nd server that could access the storage units? I guess
each 3ware card would connect to one IN on the expander, and the two 3ware cards would
somehow talk to each other? I don't see how this is really possible at the moment; maybe
someone has some ideas?

Network Speed:
I am looking into 10GbE cards, potentially the Supermicro AOC-STGN-i2S dual SFP+ card
that was reviewed here recently: http://www.servethehome.com/supermicro-aocstgni2s-dual-sfp-intel-82599-10gbe-controller-review/


Software:
I have been looking for some nice software to run the whole thing, again with features
similar to Dell's: thin provisioning, iSCSI, and ideally NAS as well.

FreeNAS - Supports iSCSI, runs on FreeBSD (I am a FreeBSD nut), and is well supported.
Does not currently support thin provisioning (though see the zvol sketch after this list).
I am currently leaning towards this.

ZFSGuru is still too early and not stable.

OpenFiler - Supports iSCSI, closed source, no thin provisioning, runs on Linux.

GlusterPlatform - I tried it, and it was interesting, but it has no iSCSI and it failed my throughput tests.
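
One note on the FreeNAS thin provisioning gap: since FreeNAS sits on FreeBSD/ZFS, thin
provisioning can at least be approximated at the ZFS layer with sparse zvols, even if the
GUI does not expose it. A minimal sketch (the pool name "tank" and the helper function are
my own placeholders):

import subprocess

def create_thin_zvol(pool: str, name: str, size: str) -> None:
    # "zfs create -s" makes a sparse zvol: space is not reserved up front,
    # so the volume only consumes what the iSCSI initiator actually writes.
    subprocess.run(
        ["zfs", "create", "-s", "-V", size, f"{pool}/{name}"],
        check=True,
    )

create_thin_zvol("tank", "customer01", "250G")  # one thin 250GB backing volume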


In the end, this system will have a heavy load on it, as tens of systems at a time will use
the iSCSI targets as their main operating and storage drives. Currently, most of the servers
will have one, and occasionally dual, GigE links dedicated to the iSCSI connection. These are
customer-managed servers for the most part. I do still worry that after only a few of these,
the 24Gb/s channels going to the drive expanders will fill up. Some customers are light load,
and others are heavy when it comes to drive access. Almost all of our customers currently
have only a single 250GB SATA-II hard drive, though.
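
For what it is worth, the same back-of-the-envelope math says the customer links saturate
the SAS channel later than I feared (again, my own assumed numbers):

# How many fully-saturated customers fill one 24Gb/s external SAS channel?
SAS_CHANNEL_MBPS = 24 * 1000 / 8       # ~3000 MB/s per x4 wide port
GIGE_MBPS = 1000 / 8                   # ~125 MB/s per dedicated GigE link

single_link = SAS_CHANNEL_MBPS / GIGE_MBPS       # customers on 1x GigE
dual_link = SAS_CHANNEL_MBPS / (2 * GIGE_MBPS)   # customers on 2x GigE

print(f"1x GigE customers per channel: {single_link:.0f}")   # 24
print(f"2x GigE customers per channel: {dual_link:.0f}")     # 12
# In practice, seek time on ~120 spindles will bottleneck random I/O long
# before either side of the link saturates.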


Any information is helpful! :)
Thanks again!
 

Patrick

Administrator
Staff member
Dec 21, 2010
You might want to look at NexentaCore + napp-it. Given what it sounds like you are using this for, you may also want to look at Supermicro's JBOD/storage chassis. They (can) have built-in expanders, hot-swap fans, and redundant PSUs, and there is a power board available. Active-active configurations are very popular in enterprise storage, but that adds a lot to the cost. Any idea what your throughput needs are and what your usage profiles look like?