Supermicro 846 series chassis, backplane model SAS2-846EL1, compatible HBA


Tim

Member
Nov 7, 2012
Hi

Got a question about the backplane model SAS2-846EL1 found on the SC846 series chassis.
The SC846BE16-R920B chassis is the one I'm looking at.

See the pdf documentation here, Appendix E, page E-3.
http://www.supermicro.nl/manuals/chassis/tower/SC846.pdf

It's a 24-bay chassis, but the backplane only has three SFF-8087 connectors.
Numbers 7 to 9 on the illustration, marked as Primary SAS connectors: PRI_J0 (plus J1 and J2).
The three other SFF-8087 connectors are for the EL2 variant of the backplane, so please ignore them.

On page E-5 we can read that: (loosely quoted to reflect only this backplane model)
"The primary SAS connectors provide expander features including cascading and failover.
From right to left the ports are Primary 0, Primary 1"

I'm not sure what they mean by that. Is Primary 0 for cascading and Primary 1 for failover?
And shouldn't that be Primary 1 and Primary 2 instead?
(Or, I can see that in a system with multiple cascades you could use Primary 0 as a cascading port as well.)

And as we can see on page E-13, a common way to connect an HBA to the backplane is via Primary 0,
with an SFF-8087 cable.

The rest of the examples illustrate the use of Primary 1 as a cascading port only.
The Primary 2 connector is never used in the illustrations; I guess it's for failover only.

Another point is that the chassis webpage states:
"Single input/output SFF 8087 connectors".

My understanding is that an SFF-8087 connector/cable carries four SAS/SATA 6Gb/s channels, enough for four HDDs.
So how can that backplane, using only one SFF-8087 connector, serve the maximum of 24 HDDs at 6Gb/s?
Even if all three SFF-8087 connectors can be used as inputs, that covers only 12 of the HDDs as far as I know (or each connector would need to handle eight HDDs).
I guess my knowledge is messed up, so please educate me on this topic.

The reason I'm asking is that I need to figure out if this chassis with that backplane can work with my LSI SAS9211-8i 6Gb/s HBA.
The card has two SFF-8087 ports, each of which can serve four SAS/SATA 6Gb/s HDDs.

So the question is: since the LSI HBA card has two SFF-8087 ports to serve eight drives, and the backplane only has one input SFF-8087 connector,
would I only be able to use 4 of the 24 drive bays in the chassis?
(Or is there a dual SFF-8087 to single SFF-8087 cable that can give me all eight drives, or what's the solution here?)

Basically, I guess I'm not going to be happy using my HBA card with this backplane?
My primary need is to be able to use the HBA that I've got; I won't need more than eight drives for a while.
Also, the HBA needs to be compatible with ESXi 5.1 so that I can use passthrough (the whole HBA, running IT firmware, passed to a virtual machine running my ZFS-based NAS).
But if that's not possible, what's the solution to get all 24 drive bays operational (or anything from eight upwards)?
 

supermacro

Member
Aug 31, 2012
That's the beauty of using SAS expanders. You can use one SFF-8087 connector to control/manage all 24 drives. Normally you just need to connect one cable from your RAID card to J0 and that's it. If you want to cascade, then J1 goes to J0 on the next box.
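If it helps to picture it, here's a toy wiring sketch of that cascade idea (my own illustration, not anything from Supermicro; the connector names follow the manual, the rest is made up):

Code:
# Toy wiring model of one HBA port reaching every bay on the EL1 backplane,
# with optional cascading: HBA -> PRI_J0 on box 1, PRI_J1 -> PRI_J0 on box 2.

def build_chain(num_chassis, drives_per_chassis=24):
    """Return the cable runs and the total number of drives the HBA sees."""
    wiring = [("HBA port 0", "box1 PRI_J0")]
    for n in range(1, num_chassis):
        wiring.append((f"box{n} PRI_J1", f"box{n+1} PRI_J0"))
    return wiring, num_chassis * drives_per_chassis

wiring, drives = build_chain(2)
for src, dst in wiring:
    print(src, "->", dst)
print("drives visible on one HBA port:", drives)  # 48 for two cascaded boxes

The number of drives you can address has nothing to do with the four lanes in the cable; the expander presents them all to the HBA, and only the shared bandwidth is limited.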

There is, however, some performance degradation compared to the TQ or A chassis, but it's not that bad.

I don't think you can use failover with SATA drives (someone else can chime in), but you can definitely use SAS drives with the E26 chassis.

So what was your question, again?
 

Tim

Member
Nov 7, 2012
Oh, I forgot about the expander. Sorry.

So, what you're saying is that from my LSI SAS9211-8i HBA I can take one of the SFF-8087 ports (which I thought could only support four HDDs at 6Gbps) and connect it to the Primary 0 SFF-8087 connector on the backplane,
and thanks to the expander on the backplane that will let me access all 24 HDDs the chassis can carry.

I'm not using RAID; I'm using IT firmware, so it's a JBOD setup.

How would I see the drives in, let's say, ESXi 5.1 (or when passed through to FreeBSD or Solaris), since the LSI HBA would list four drives in my current setup?
Does the backplane report back the rest of the disks (six on each "channel" of the LSI card)? And is this a sane setup for virtualization (with the HBA in VT-d passthrough mode)?
I just thought this would be a hardware limit on the LSI card (listing only the maximum number of drives it can handle in its basic form).
It would be nice to know if my 8-port HBA can handle 24 drives thanks to the expander, and report them nicely back to a virtual machine when the HBA is running in passthrough mode.

And yes, degradation would be bad I guess, as I will end up with 24 physical HDDs sharing a 4x 6Gbps connection, so I'd end up with a system six times slower (1Gbps per HDD).
I'll lose some speed on the fastest-spinning HDDs (I'm using SATA3 6Gbps HDDs, not SAS HDDs) internally.
But the Gigabit LAN restricts the disk speed as well.
And file copying is infrequent and not time-critical (only for backups); the speed is still enough for media sharing/streaming.

A question about the degradation: is it constant? Will every drive drop from 6 to 1 Gbps, or will this change dynamically as I add HDDs to the system?
And if it's dynamic and I have 24 drives, will I still get 6Gbps if only one to four HDDs are accessed at a time?
Sorry for asking stupid questions tonight.

Cascading and/or failover is not important at this time.
 

dba

Moderator
Feb 20, 2012
San Francisco Bay Area, California, USA
Tim, the expander works a bit like an Ethernet switch. Your connection to the expander limits your total throughput, not the speed of each individual drive. With one SAS x4 connection to your chassis and its expander, you'll have a theoretical total bandwidth of 6 Gbps x 4 = 24 Gbps, roughly 2 Gigabytes per second in the real world. That's enough for four fast SSDs or 20 desktop-class SATA hard drives, or say 200 SSDs with most of them just transferring a tiny bit of data while a few of them run at full speed. Basically any number of drives of any type doing anything they want, with the only limitation that the total bandwidth to your server is limited to 24Gbps.
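If you want rough numbers on that switch analogy, here's a back-of-the-envelope sketch (the overhead factor and the 150 MB/s per-drive figure are my own assumptions, not measurements):

Code:
# Rough model of a shared x4 SAS2 uplink; all figures are approximations.
LANES = 4                # one SFF-8087 cable = x4 wide port
GBPS_PER_LANE = 6        # SAS2 line rate per lane
OVERHEAD_FACTOR = 0.67   # encoding/protocol losses, a conservative guess

uplink_mb_s = LANES * GBPS_PER_LANE / 8 * 1000 * OVERHEAD_FACTOR  # ~2000 MB/s

def per_drive_mb_s(active_drives, drive_max_mb_s=150):
    """Each *active* drive gets min(its own max, a fair share of the uplink);
    idle drives consume nothing, so the sharing is dynamic."""
    return min(drive_max_mb_s, uplink_mb_s / max(active_drives, 1))

print(round(per_drive_mb_s(4)))   # ~150: four busy drives never hit the uplink cap
print(round(per_drive_mb_s(24)))  # ~84: only with all 24 busy does the sharing bite

So the degradation isn't a fixed per-drive penalty; it only appears when the drives collectively ask for more than the uplink can carry.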

 

Tim

Member
Nov 7, 2012
dba, thank you for that clarification.

The controller handles four channels of 6Gbps on each SFF-8087 connector, giving me 24Gbps in total (in theory 3072MB/s, but you say it gives me about 2GB/s, i.e. 2048MB/s).
Is it the expander that limits the speed from 3GB/s to 2GB/s, or just the overhead of the protocol used? (That's a lot of overhead.)
In theory the bandwidth between the HBA and the motherboard (PCIe 2.0, 8 lanes) is 4000MB/s, so there should be enough bandwidth to handle 3GB/s (I don't know the overhead between the board and the HBA).

2048MB/s shared across 24 drives gives each about 85MB/s to work with; that's not a scenario I'm likely to hit at home, so I guess the limits on a system like this are more theoretical than practical.
On my current system I'll have more than enough bandwidth, and today's 7200rpm HDDs won't run out of bandwidth until I use about 12 disks at a time at full speed.
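Writing it down as a quick sanity check (the 170MB/s per-disk figure is just my guess for a modern 7200rpm drive, and the 2000MB/s uplink figure is the rough estimate from the post above):

Code:
# Where does the bottleneck sit: the disks, the x4 SAS uplink, or PCIe?
SAS_UPLINK_MB_S = 2000   # ~2 GB/s real-world figure from dba's post
PCIE2_X8_MB_S = 4000     # theoretical PCIe 2.0 x8 (500 MB/s per lane)
DRIVE_MB_S = 170         # assumed sequential speed of one 7200rpm SATA disk

def aggregate_limit_mb_s(active_drives):
    """Total throughput is capped by the slowest link in the chain."""
    return min(active_drives * DRIVE_MB_S, SAS_UPLINK_MB_S, PCIE2_X8_MB_S)

for n in (4, 12, 24):
    print(n, "drives ->", aggregate_limit_mb_s(n), "MB/s")
# 4 drives  ->  680 (the disks themselves are the limit)
# 12 drives -> 2000 (around a dozen disks, the SAS uplink takes over)
# 24 drives -> 2000 (still the uplink; the PCIe 2.0 x8 slot never becomes the limit)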

I guess the VT-d passthrough is not a problem either, so a "downgrade" to the 846BA-R920B chassis with six SFF-8087 connectors (no expander) is not needed.
That would also force me to invest in more HBAs to access all the drives in the future.

Thanks for teaching me new stuff today.
 

dba

Moderator
Feb 20, 2012
San Francisco Bay Area, California, USA
Tim,

Based on my tests, the real-world maximum throughput of the current generation of LSI PCIe2 x8 cards is between 2,500 and 3,100 MB/second depending on CPU and chipset, firmware, driver, configuration, etc. This maximum is achieved when performing high queue depth sequential reads or writes, or large-block random reads or writes, against very fast directly attached disks - usually SSDs. The RAID cards with large caches can achieve this, or close to it, with smaller-block workloads as well - at least in some test scenarios. With less-than-optimal drives, a more typical workload, and an expander in the wiring, you'll be able to utilize less than the maximum throughput. The 2GB/second is a bit of a guess and/or assumption that should be used for rough planning purposes - your exact throughput will depend on a number of variables.


 

Tim

Member
Nov 7, 2012
dba,

Thank you for the numbers.
I've concluded that this setup is going to serve me well for the time being.

Regarding speed, I'll have to test the numbers I get later.
But I'm going to use a Xeon E3-1245v2 on the Supermicro X9SAE motherboard, the latest IT firmware on the HBA, the latest drivers in ESXi 5.1, and passthrough of the HBA to a FreeBSD virtual machine.

Now I have to figure out if the chassis with the new rails will fit in my 4postrack25 rack.
(I see in another thread here that it might be a problem, but after hearing from Supermicro it might be the user's fault instead.)