N00b System Build validation


savage

New Member
Oct 1, 2012
North Carolina
Hello everyone,

I am putting together my first system that contains more than 2 drives, as work (bioinformatics/DNA sequencing) requires tens of TB.

I am going for two main traits:
1. Ease of install & maintenance - drivers built into the mainline Linux kernel.
2. Software-based & low cost - storage will be JBOD/mdadm, so I believe several "low-end" SAS cards are just as good as a high-end SAS card.

As I am new to SAS, backplanes, expanders?, and the like, I thought I would post in this sub-forum.

So far, I have the decided on the following components:

A. Chassis: SC846BA-R920B http://www.supermicro.com/products/chassis/4U/846/SC846BA-R920.cfm?parts=SHOW
B. Motherboard: X9DR3-F http://www.supermicro.com/products/motherboard/xeon/c600/x9dr3-f.cfm
4x SATA2 and 2x SATA3 ports
8x SAS ports from C606 (contains 2x SFF-8087 ports to access 8 drives?)
C. LSI MegaRAID SAS 9211-8i (2x to access 16 drives) http://www.lsi.com/products/storagecomponents/Pages/LSISAS9211-8i.aspx
D. SSD Intel 520 (2x240GB for OS mounted internally connected to SATA3 ports)
E. Hitachi Deskstar 7K4000 (24x4TB for storage) http://www.hgst.com/deskstar-7k4000
F. Intel Xeon E5-2620 (2x6core @ 2GHz) + heatsinks SNK-P0050AP4
G. Kingston ValueRAM memory (4x8GB ECC) KVR16R11D4/8 http://www.kingston.com/dataSheets/KVR16R11D4_8.pdf
H. Slim DVD drive DVM-TEAC-DVD-SBT1 http://www.amazon.com/gp/product/B004JO9MMU/

I believe I need 6 SFF8087 to SFF8087 (CBL-0108L-02) cables: http://www.provantage.com/supermicro-cbl-0108l-02~7SUPA01V.htm
2: from motherboard C606 SAS to backplane
2: from first LSI MegaRAID SAS 9211-8i to backplane
2: from second LSI MegaRAID SAS 9211-8i to backplane
Then I will need 24 SATA cables to connect from the backplane to the individual drives. (And 2 SATA cables to connect to the internal SSDs)
Any corrections?

Now some questions about SAS, backplanes and Linux drivers:
1. The two other main chassis choices are:
SC846BE16-R920B ( http://www.supermicro.com/products/chassis/4U/846/SC846BE16-R920.cfm ) and
SC846TQ-R900B ( http://www.supermicro.com/products/chassis/4U/846/SC846TQ-R900.cfm ).
Respectively, they contain the following backplanes:
BPN-SAS2-846EL1 ( http://www.supermicro.com/manuals/other/BPN-SAS2-846EL.pdf ) and
BPN-SAS-846TQ ( http://www.supermicro.com/manuals/other/BPN-SAS-846TQ.pdf ).
The 846EL1 and 846TQ each appear to contain 3 SAS ports to bridge to the SATA slots, so they are not built to handle the "low-end" 9211-8i? And I should stay with the 846A backplane?

2. From what I have read, the LSI MegaRAID SAS 9211-8i is one of the best supported cards under linux (CentOS 6.3) and uses the 'mpt2sas' driver. Should this "just work"?

3. Are the Intel C606 SAS drivers likewise built into the main linux kernel? And "just work"?

4. Related to 2 & 3: The 8 ports from the C606 SAS on motherboard, and the 16 ports from the 2xLSI cards, should give me access to all 24 spinning drives as JBOD which I can then use mdadm to RAID as I like?

5. Where have I gone terribly wrong?
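On questions 2-4, here is a rough sketch of how I imagine verifying this once the hardware arrives (the module names are the documented defaults; the RAID level and /dev/sd* device names are placeholder assumptions, not a tested recipe):

```shell
# Both drivers ship in the mainline kernel; modinfo confirms whether they are present:
modinfo mpt2sas   # LSI SAS 9211-8i HBA driver
modinfo isci      # Intel C600-series (C606) SAS driver

# Each disk should then show up as a plain /dev/sdX device:
lsscsi            # or: ls /dev/sd?

# Placeholder mdadm sketch: RAID-6 across the 24 data drives (device names
# assumed -- double-check with lsscsi before running anything destructive):
mdadm --create /dev/md0 --level=6 --raid-devices=24 /dev/sd[b-y]
mkfs.ext4 /dev/md0
```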

Thank you.
 

survive

New Member
Apr 19, 2012
Hi savage,

I'm going to assume, based on the excellent selection of components you are looking at, that you actually have a bit of a budget for this project. If that is correct, I would certainly encourage you to contact somebody who specializes in integrating Supermicro hardware; they should be able to either work with you to put together a parts list or deliver you a complete system. If nothing else, it would be worth a couple of phone calls to get some quotes, which you can use to answer your questions and to give whoever winds up approving the purchase some options.

I would be shocked if Supermicro didn't have an "expander" backplane for a 4U/24-bay chassis that simply takes a couple of SFF-8087 cables from a controller to connect up all the drive bays; that's what I think you should be looking for. Also, be aware that Supermicro makes their own SAS controllers, so you can build your system with all Supermicro parts (except for CPU & RAM), which should cut down on any compatibility finger-pointing.

-Will
 

RimBlock

Active Member
Sep 18, 2011
Singapore
I would tend to concur with Survive especially if it is important to maintain data integrity over a large volume.

I would suggest looking at either the 846E16-R1200B or 846E26-R1200B chassis as they have built-in expanders, so you can connect with only one controller in the machine (on-board or add-on). I would also suggest aiming for a better controller with some cache and a BBU or FBWC. LSI / IBM / Intel cards (generally all LSI under the skin) work well with most boards in single or double configurations. An 846E26-R1200B with an LSI 9260 or above should do well depending on your actual throughput requirements (bandwidth / RAID type preference etc.). The E16 and E26 chassis use SFF-8087 mini-SAS connectors (as does the SC846BA-R920B you mentioned); from the backplane manual, the TQ does not.

I have just finished building a client server based on the SC846BA-R1200B, and if they had a bit more budget and didn't need drive separation (virtual host server) then the E16 or E26 would have been a very good option. I was very impressed with the build quality on the SC846BA-R1200B, but then for the price, I would expect nothing less.

If you wanted to go a slightly different way, you could go for the 847E26-RJBOD1, which can take 48 drives, and hook it up to your E5s in a separate server. This would let you get the same storage using 3TB drives at a lower drive cost whilst only adding 8 more drives, with space to add another 16 drives if your requirements grow.

Just quick and dirty figures, prices from a quick Amazon search ....
3TB Hitachi: US$150
4TB Hitachi: US$275

24 * 4TB = 96TB
24 * US$275 = US$6,600

96TB / 3TB = 32 Drives
32 * US$150 = US$4,800

Difference between the SC846BA-R920B and the 847E26-RJBOD1 is around US$1,000 leaving an extra US$800 for a second chassis for the server.
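The quick and dirty figures above can be sanity-checked in a shell (prices are the rough Amazon numbers quoted, not live quotes):

```shell
# Rough cost comparison: 24x 4TB drives vs. enough 3TB drives for the same 96TB raw.
cost_4tb=$(( 24 * 275 ))          # 24 drives at ~US$275 each
drives_3tb=$(( 96 / 3 ))          # drives needed to reach 96TB with 3TB units
cost_3tb=$(( drives_3tb * 150 ))  # at ~US$150 each
echo "4TB build: US\$${cost_4tb}; 3TB build: ${drives_3tb} drives, US\$${cost_3tb}"
```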

You would then just have to manage the 4x SFF-8088 connectors and how to hook them up to your server. There are a number of cards with SFF-8088 connectors or you can get internal SFF-8087 -> SFF-8088 case brackets so you can connect the external cables to the MB / Controllers internal ports.

RB
 

savage

New Member
Oct 1, 2012
North Carolina
Will,
Good advice. Speaking with an experienced integrator could serve us well. We are somewhat constrained by budget, and will initially just buy 10 of the 24 drives. Also, I would like to thoroughly understand the options and be able to administer the system.

I would be shocked if Supermicro didn't have an "expander" backplane for a 4U/24-bay chassis that simply takes a couple of SFF-8087 cables from a controller to connect up all the drive bays; that's what I think you should be looking for. Also, be aware that Supermicro makes their own SAS controllers, so you can build your system with all Supermicro parts (except for CPU & RAM), which should cut down on any compatibility finger-pointing.
Does the BPN-SAS-846A backplane, which takes 6 SFF-8087 cables from controller(s), fulfill that role of an "expander" backplane that takes a couple of SFF-8087 cables from a controller to connect up all the drive bays? My controllers would be the C606 and LSI 9211. My impression is that the C606 maxes out at controlling 8 SATA drives, and each LSI 9211-8i likewise maxes out at controlling 8 SATA drives. Perhaps I am mistaken here? If plugged into the backplane, will one C606 or one LSI 9211-8i be able to control 24 drives?

If I bought a Supermicro SAS controller instead of an LSI, I would be choosing from quite a selection: http://www.supermicro.com/products/nfo/storage_cards.cfm. I have found less documentation on which of them have drivers built into the mainline Linux kernel, though Supermicro does provide their own drivers ftp://ftp.supermicro.com/driver/SAS/LSI/2108/Driver/Linux/v06.18/ ... I would prefer mainline-maintained drivers, if any of their cards are so supported.
 

savage

New Member
Oct 1, 2012
North Carolina
I would tend to concur with Survive especially if it is important to maintain data integrity over a large volume.

I would suggest looking at either the 846E16-R1200B or 846E26-R1200B chassis as they have built-in expanders, so you can connect with only one controller in the machine (on-board or add-on). I would also suggest aiming for a better controller with some cache and a BBU or FBWC. LSI / IBM / Intel cards (generally all LSI under the skin) work well with most boards in single or double configurations. An 846E26-R1200B with an LSI 9260 or above should do well depending on your actual throughput requirements (bandwidth / RAID type preference etc.). The E16 and E26 chassis use SFF-8087 mini-SAS connectors (as does the SC846BA-R920B you mentioned); from the backplane manual, the TQ does not.
Ahh. I was wondering why one would choose the 846E16 over the 846BA. So the E16 has an expander built into the backplane? And the expander allows a card which normally controls just 8 SATA drives to control all 24 SATA drives? So a single SFF-8087 cable from an LSI 9211-8i or LSI 9266-8i would connect to the E16 backplane and all 24 drives would appear as /dev/sdb -> /dev/sdy?

If I went with the 846BA, it seems I would be accessing the 24 drives through 6 SFF-8087 cables instead of the 1 SFF-8087 cable of the 846E16. So the 846BA potentially allows for greater throughput than the 846E16 if multiple LSI 9211-8i cards are used in the 846BA?

Thanks for the feedback.

EDIT:
Looking more closely at the LSI website, it looks like the LSI 9211-8i is an HBA which is limited to controlling 8 SATA drives, while the LSI 9266-4i is a "MegaRAID" that can control up to 128 SATA drives. So the 9266-4i would be well suited to connect to the E16 backplane?
 

RimBlock

Active Member
Sep 18, 2011
Singapore
Yep, that is the idea, and the E26 has two, so you have the 24-drive throughput going through 8x 6Gbps SAS/SATA (card-dependent) lanes.

Remember a PCIe lane is around:
PCIe 1.0a: 250MB/s
PCIe 2.0: 500MB/s
PCIe 3.0: 1000MB/s

so a PCIe 2.0 x8 card should be able to cope with around 4GB/s of throughput on the PCIe bus (give or take the RAID levels, caching etc. used). With 8 SAS lanes (2x SFF-8087 connectors) it should be able to handle around 4.8GB/s between the drives and the card at most. Populated with SATA III mechanical drives you may get around 150MB/s burst per drive x 24 drives = 3.6GB/s with all bursting at the same time.

Point being, for 24 mechanical SATA drives a PCIe 2.0 controller with 2x SFF-8087 connectors should have enough bandwidth available to run all the drives at top speed all the time. Of course the results are a bit different for faster SAS drives and change dramatically for SSDs.
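Those back-of-the-envelope numbers can be reproduced directly (lane rates are the nominal figures above; real-world throughput will be lower after protocol overhead):

```shell
# Nominal bandwidth ceilings for a PCIe 2.0 x8 HBA with 2x SFF-8087 connectors:
pcie_mb=$(( 8 * 500 ))   # 8 PCIe 2.0 lanes x 500 MB/s each
sas_mb=$(( 8 * 600 ))    # 8 SAS2 lanes x 600 MB/s (6Gbps nominal)
disk_mb=$(( 24 * 150 ))  # 24 mechanical drives bursting ~150 MB/s each
echo "PCIe: ${pcie_mb} MB/s, SAS: ${sas_mb} MB/s, drives: ${disk_mb} MB/s"
```

So the drives themselves (3.6GB/s) stay under both the SAS ceiling (4.8GB/s) and the PCIe ceiling (4GB/s), which is why one PCIe 2.0 controller suffices here.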

The E16 / E26 backplanes also have output connectors, so you can connect to an internal-to-external SAS connector bracket and chain more units together, all controlled from a single controller. Obviously there are redundancy issues with a single-controller, single-point-of-failure setup.

RB