How would you "link" 2x SFF-8087 connectors (4 lanes each, 8 lanes total) to the Supermicro SAS2 SFF-8087 backplane - the one with a single expander chip - to double the 24 Gb/s to 48 Gb/s?
Is there any special configuration I need to do on the LSI driver in Linux? The expander chip is LSI, as is the HBA (an LSI 9211-8i).
I'm not talking about 2x HBAs for multipath - that's only available on the EL2 models.
Apparently it is possible with the EL1 (single expander chip) models via some sort of "8-lane linking" (see below) with two SFF-8087 cables to a single HBA.
/TL;DR
So, I've researched this for too many years (8+) and always landed on, "I can only use one SFF-8087 cable (4 lanes) from my HBA to the backplane - a maximum of 24 Gb/s." The backplane is the SAS2 SFF-8087 model, BPN-SAS2-846EL1, with a single expander chip.
Today, though, I was searching again during a rebuild - I'm adding dual 40 Gb/s QSFP+ to this box, and come on, I have to be able to use those two extra SFF-8087 connectors somehow.
Then I ran across this website which mentions something I have never read before:
"And... With typical 8 lanes HBA or controller, 48Gb/s board to controller bandwidth is enough for... (snip) With typical 8 lanes HBA and controller and two SAS cables from the backplane, the maximum speed is 48Gb/s."
Wait, what?!
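For what it's worth, if two cables to a single HBA really do aggregate into one 8-lane wide port, Linux should show it without any vendor tools. Below is a minimal sketch using the standard SAS transport class sysfs layout (which mpt2sas exposes); exact paths can vary by kernel, and on a box with no SAS hardware the loops simply match nothing:

```shell
#!/bin/bash
# Sketch: did the HBA and expander negotiate an 8-lane wide port?
# Assumes the stock Linux SAS transport class sysfs layout.
shopt -s nullglob
echo "SAS ports and their PHY counts:"
for port in /sys/class/sas_port/port-*; do
    # num_phys = lanes aggregated into this logical port;
    # 8 here would mean both SFF-8087 cables merged into one wide port
    printf '  %s: %s phys\n' "$(basename "$port")" "$(cat "$port/num_phys")"
done
echo "Per-PHY negotiated link rates:"
for phy in /sys/class/sas_phy/phy-*; do
    printf '  %s: %s\n' "$(basename "$phy")" "$(cat "$phy/negotiated_linkrate")"
done
```

If the HBA-side port reports 8 phys all negotiated at 6.0 Gbit, the "8-lane linking" is working; two separate 4-phy ports would suggest the expander kept the cables as independent narrow ports.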
The manual mentions nothing like this: https://fuzhaopeng.files.wordpress.com/2020/03/bpn-sas2-846el.pdf
I searched a bit more and found one person who claims to have tested linking all 8 lanes to the SAS2 backplane, but "it didn't work all that well for me." That was the most I got out of the old thread.
So, the BPN-SAS2-846EL1 backplane in the Supermicro SC846 chassis has 3x SFF-8087 ports, and the manual lists all three as Primary.
Would anyone know how to link all 8 lanes, via dual SFF-8087 cables, to this backplane?
Is it as simple as plugging in two cables, and "poof", you have 48 Gb/s?
I have 10+ enterprise SATA3 SSDs I can test this with, each of which pegs the ~540 MB/s limit of SATA3 (about 4.3 Gb/s per drive), so a RAID0/ZFS stripe across roughly a dozen of them should yield around 50 Gb/s.
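A quick sanity check on that math (figures from the post, taking 1 Gb/s = 1000 Mb/s): ten drives land at about 43 Gb/s, so it actually takes around twelve to saturate an 8-lane SAS2 link.

```shell
# Back-of-envelope: aggregate SSD bandwidth vs. an 8-lane SAS2 link.
per_drive_mbs=540            # MB/s per SATA3 SSD (from the post)
for drives in 10 12; do
    # MB/s -> Gb/s: x8 bits per byte, /1000
    echo "${drives} drives: $(( per_drive_mbs * 8 * drives / 1000 )) Gb/s"
done
echo "8-lane SAS2 link: $(( 8 * 6 )) Gb/s"
```

So a stripe of 10 drives (~43 Gb/s) already blows past the single-cable 24 Gb/s ceiling, which is enough to prove whether the second cable is doing anything.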
I just need some pointers for how to configure this. Running Linux/Debian.
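For the actual test, an aggregate sequential-read run with fio would show whether throughput climbs past the single-cable ceiling of ~3 GB/s (24 Gb/s). A sketch, not a ready-made benchmark: the device names are hypothetical placeholders, and the RUN_FIO guard keeps the script from touching disks unless you opt in.

```shell
#!/bin/bash
# Sketch: aggregate sequential-read benchmark across several SSDs with fio.
# DEVICES is an example list -- replace with your actual drives.
DEVICES="/dev/sdb:/dev/sdc:/dev/sdd"
# Safety/portability guard: only run when explicitly requested.
if [ -z "${RUN_FIO:-}" ] || ! command -v fio >/dev/null; then
    echo "set RUN_FIO=1 (and install fio) to run the benchmark; skipping"
    exit 0
fi
# fio accepts a colon-separated device list in --filename; read-only workload.
fio --name=wideport-seqread \
    --filename="$DEVICES" \
    --rw=read --bs=1M --direct=1 --ioengine=libaio \
    --iodepth=32 --runtime=30 --time_based \
    --group_reporting
```

If the two cables really aggregate, group throughput should keep climbing past ~3 GB/s as drives are added; if it plateaus there, the link is still only 4 lanes wide.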