eduncan911 said: Well, some would argue that they would want the SATAs directly connected to each port - no expander. I personally subscribed to this club (e.g. I want the option to do RAID) and got my two IBM M1015s that I flashed to the LSI firmware, ready for this day. That's 2x8 ports per card, 16 total.
I'm confused by the wording here: can you not do controller-based hardware RAID with drives that are attached to an expander?
There are two ways you can "scale up" to 16, 24, 36+ drives:
* Directly attach each HDD/SSD to a single port on an HBA/RAID controller, i.e., a 1-to-1 mapping.
* Or, use an "expander," which basically shares the bandwidth of 4x SAS/SATA ports across as many drives as you've got.
When you are looking at these Supermicro chassis, you'll notice numbers like this:
SC836TQ
SC836EL1
SC836EL2
SC836A/B
Notice the last few letters/numbers. TQ, EL1, EL2, A/B, etc. Those denote what kind of backplane is in the chassis, and how you connect the drives.
"TQ" and "A/B" models means they have an 1-to-1 direct port mapping. If you have a 16 bay "TQ" chassis, that means you need 16 SATA/SAS ports somehow to connect to all of them! This is the 1-to-1 direct port mapping I was quoted in saying above. In a 1-to-1 port mapping to, say, a RAID card, you can do Hardware RAID all day long. No problem as you are using 1 port per HDD.
Now, 1-to-1 mappings can get expensive. For a 16-bay chassis, if you want to use all 16 bays with 16 HDDs, you need a 16-port HBA/RAID card. $$$ (maybe $500+?) Another option is 2x 8-port cards, for a lot less money. This is what I did when I bought 2x IBM M1015s a few years ago. I flashed them to the well-known LSI "IT" mode since I didn't need RAID, but you do have the option to flash them to "IR" mode if you really want hardware RAID.
Another option (sometimes cheaper) is to get a small 4-port SAS/SATA HBA card plus an additional 16-port "expander" card, like the Intel one people are most fond of for around $100. That's two cards: one acting as the "head" and the other as the "expander." The 16-port expander card has 4x SFF-8087 connectors, and each connector drives 4 ports. Note: this is NOT a 1-to-1 mapping any longer. See below.
"EL1" and "EL2" mean some type of "expander" chip is built into the backplane. Or, you could go with the expander card mentioned above. Either way, these use an "expander" chip.
Here's where things get confusing with expander chips. The first generations could only "expand" each port 4 times, so a 4-lane SFF-8087 connection could scale to 16 expanded ports (4 ports x 4). Then they got better: they can now share the bandwidth and scale to 24, 36, 48, up to 255 devices off a single SFF-8087 connection (255 for SAS2 and above).
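As a back-of-the-envelope sketch of that fan-out (illustrative numbers only; the 255-device cap comes from SAS2 addressing, and the helper name here is just made up for the example):

```python
# Expander fan-out, back-of-the-envelope (illustrative only).
LANES_PER_SFF8087 = 4  # one SFF-8087 connector carries 4 SAS/SATA lanes

def drives_per_lane(total_drives, uplink_lanes=LANES_PER_SFF8087):
    """How many drives end up sharing each uplink lane behind the expander."""
    return total_drives / uplink_lanes

# First-gen expanders: 4 drives per lane -> 16 drives off one connector
print(drives_per_lane(16))   # 4.0
# SAS2 expanders can address up to 255 devices off that same link
print(drives_per_lane(255))  # 63.75
```

The oversubscription ratio is the whole story: the more drives per lane, the more they contend for the same uplink bandwidth.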
There are three major downsides to expander chips:
* The OS does not see each HDD directly. Instead, the host OS sees whatever the expander presents back to the HBA card. I am fuzzy on this part myself, as I hadn't owned an expander before - until now. My main issue is that HDD spin-down no longer works, though it might with some good SAS expanders.
* Since the OS no longer sees/controls each HDD directly - and neither does the HBA card, as the expander sits between it and the drives - you can lose "hardware RAID" abilities from the HBA card.
* Your total bandwidth is limited by your HBA's and expander's Gbps design (per 4-lane SFF-8087 link):
- SAS1 or 3Gbps SATA2 = ~1.2 GB/s max bandwidth for ALL DRIVES
- SAS2 or 6Gbps SATA3 = ~2.4 GB/s max bandwidth for ALL DRIVES
- SAS3 or 12Gbps = ~4.8 GB/s max bandwidth for ALL DRIVES
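Those ceilings fall out of simple math: each SFF-8087 connector carries 4 lanes, and SAS/SATA line rates use 8b/10b encoding (10 bits on the wire per data byte), so usable GB/s is roughly lanes times line rate divided by 10. A quick sketch (the function is hypothetical, just to show the arithmetic):

```python
# Usable bandwidth of one 4-lane (SFF-8087) wide link.
# SAS1/2/3 and SATA use 8b/10b line encoding, so usable
# bytes/sec ~= lanes * line_rate_gbps / 10.

def link_bandwidth_gbs(line_rate_gbps, lanes=4):
    """Approximate usable GB/s for a wide link, assuming 8b/10b encoding."""
    return lanes * line_rate_gbps / 10

print(link_bandwidth_gbs(3))   # SAS1: 1.2 GB/s
print(link_bandwidth_gbs(6))   # SAS2: 2.4 GB/s
print(link_bandwidth_gbs(12))  # SAS3: 4.8 GB/s
```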
Typically this doesn't matter much for home servers, as you are only streaming from 1 or 2 disks at once. Even under heavy NZB decompression, moving and file organizing plus streaming, you are hitting maybe 4 or 5 disks at once. My Seagate 4TB Blue NAS drives get about 150 to 170 MB/s, so even 5x 150 MB/s is only 750 MB/s total - not even hitting the limits of old SAS1. The story quickly changes, though, if you often move huge volumes (10+ TB) across drives or have some type of database access pattern.
You start to see slowdowns on SAS1 (and even SAS2) with SSD drives, though. SSDs will easily peg a SATA2 connection at 280 MB/s (been there), and they can easily peg a SATA3 connection at 560 MB/s (been there too). So if you have just 4 SSDs on a SAS1 controller, while the SSDs could push up to ~2.2 GB/s combined, you are only going to see ~1.2 GB/s. A SAS2 controller will still be pegged by just 4 SSDs reading/writing at the same time. 12Gbps SAS SSDs do exist, but they are Enterprise parts and out of everyone's budget here for home servers. Don't even waste your time searching; you'll be a sad panda like me...
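To make the saturation point concrete, here's a hypothetical helper using the rough drive speeds quoted above (all figures approximate):

```python
def will_saturate(n_drives, mb_per_drive, link_gb_per_s):
    """True if the drives' combined throughput exceeds the shared uplink."""
    demand_gb_per_s = n_drives * mb_per_drive / 1000
    return demand_gb_per_s > link_gb_per_s

# 4 SSDs at ~550 MB/s behind a ~1.2 GB/s SAS1 wide link: bottlenecked
print(will_saturate(4, 550, 1.2))  # True
# 5 spinning disks at ~150 MB/s on the same SAS1 link: fine
print(will_saturate(5, 150, 1.2))  # False
```

Same expander, same link - it's purely a question of how much the drives behind it can push at once.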
Again, most home servers have nowhere near that load. SAS1 is fine for most.
I plan on connecting a few SSDs in the hot-swap bays for "first-in" copy operations on my StableBit DrivePool, like I have now, because my LGA1366 Xeon mobo does not have native 6Gbps ports - but my IBM M1015s (flashed to LSI IT mode) do! In case you don't know this feature of StableBit DrivePool, you should check it out: all data copied to the "drive pool" can be directed to a single "fast" SSD first; once copied, it is moved back into the pool of archival 4TB HDDs after a specified time.