Using a SAS expander for a multi-host, multi-channel failover setup?

AveryFreeman

consummate homelabber
Hey,

Pretty new to SAS expanders, so this might sound kind of ridiculous, but I recently learned about 2 hosts connecting to 1 backplane, so bear with me...

I have 2 Supermicro hosts in cases with the TQ-style backplanes, i.e. one SATA-style port per drive.

I'm wondering whether multiple hosts can be connected to one PCI-type expander, e.g.:

If one 4-channel SFF-8087 cable goes from the HBA to one input of the SAS expander, and the expander's 16 to 24 output ports (via 4-6 SFF-8087s) then become available,

Could a second host be connected to the other input of the expander and share the 16-24 output ports?

e.g. Host 1 HBA (4) --> Expander IN 1 (4) --> Expander OUT (16) --> TQ backplane (16)
     Host 2 HBA (4) --> Expander IN 2 (4) --> (same 16 expander outputs)

Please don't laugh...
 

cesmith9999

Well-Known Member
While that may work(?), I have not tried it, and I think that it would fail.

It is definitely not a recommended architecture.

You should use an external DAS that has 2 controllers (one for channel A and one for channel B) and SAS drives.

SATA drives do not have a mechanism for being shared between hosts; that is why SAS-to-SATA interposers exist, so that SATA drives can be shared, driving up the cost of your "solution".

What are you really trying to do?

Chris
 

gea

Well-Known Member
This concept is a typical HA cluster solution, e.g. on ZFS.

You need two servers, each with a SAS HBA, and either a SAS disk enclosure with SAS disks (which offer two ports per disk), or a JBOD case with a dual expander and SAS disks, where you can connect one port of a SAS disk to server 1 and the other to server 2, or one expander to server 1 and the other to server 2.

This means that both servers can see all disks and can access them. If this happens simultaneously, the filesystem will usually be corrupted. You either need to take care manually that only one server mounts a filesystem, or you need cluster management software or scripts to manage a failover.

e.g. see my http://www.napp-it.org/doc/downloads/z-raid.pdf or discussion at
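
(As an illustration of the "scripts to manage a failover" idea, here is a minimal sketch in Python. The pool name "tank", the partner address, and the single-ping heartbeat are all placeholder assumptions; this is not the napp-it Z-RAID implementation, just the general shape of a takeover script.)

Code:
#!/usr/bin/env python3
# Minimal two-node ZFS failover sketch (illustrative only).
# Assumptions: pool name "tank" and the partner IP are placeholders,
# and a single ping is a very crude heartbeat.
import subprocess
import time

PARTNER = "192.168.1.2"  # hypothetical address of the other head node
POOL = "tank"            # hypothetical shared pool name

def partner_alive() -> bool:
    # One ICMP ping with a 2 s timeout; real setups use redundant links.
    return subprocess.run(
        ["ping", "-c", "1", "-W", "2", PARTNER],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    ).returncode == 0

def pool_imported() -> bool:
    # "zpool list <pool>" exits non-zero if the pool is not imported here.
    return subprocess.run(
        ["zpool", "list", POOL],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    ).returncode == 0

while True:
    if not partner_alive() and not pool_imported():
        # Take over: -f forces import of a pool last used by the partner.
        subprocess.run(["zpool", "import", "-f", POOL])
    time.sleep(10)

The dangerous case is exactly the one described above: if the partner is actually alive but merely unreachable, both nodes import the pool and the filesystem is corrupted, which is why real cluster managers add fencing on top of a heartbeat.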
 

AveryFreeman

consummate homelabber
Hey, thanks for the response.

I don't actually have any SATA drives; they're 4TB and 8TB HGST SAS drives.

It's my backplanes that have SATA-style connectors. I can't seem to find an SFF number for the SATA connector; it just seems to be called a 7-pin SATA connector (or 8-pin in the case of some board-powered DOMs), even though it's used in Supermicro's TQ-model SAS backplanes.

Here's the BPN-SAS-825TQ, for example:



You're right, I'm looking at doing a multipath system, and the most conventional thing to do would be to get an external SAS enclosure. But I don't see why it couldn't be done with backplanes too; it's the same idea, right?


One way would be to get a BPN-SAS2-836EL1, for example, which has four SFF-8087 connectors. Then I assume I'd connect the first host's internal HBA connector to one input on the backplane, and the second host's external HBA port to the backplane's other input, using an external-to-internal port adapter card, like this:




I was just wondering if I could do it with PCI-style SAS expanders, because they're cheaper, and I'd rather not have to buy new backplanes, if possible...

Something like this is going for $14 on eBay (or SAS3, which is like 10x more...):



Since they're so cheap, maybe I should grab a couple just to try it out... I think the big questions are:

1) If a PCI-style expander like that would even support multipathing, and
2) If the 7-pin SATA connectors on the backplane would support multipathing... (a quick way to check a live host for dual paths is sketched below)
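
(For question 2, here's a minimal Python sketch, assuming a Linux host, that groups block devices by their SCSI WWID. A dual-ported SAS disk cabled in over two paths shows up as two /dev/sd* nodes sharing one WWID until dm-multipath merges them. Illustrative only; "multipath -ll" from multipath-tools is the proper way to inspect this.)

Code:
#!/usr/bin/env python3
# Sketch: detect dual-pathed disks by grouping /dev/sd* devices by WWID.
# Assumes a Linux host, where each SCSI disk exposes its WWID in sysfs.
import glob
from collections import defaultdict

paths_by_wwid = defaultdict(list)

for wwid_file in glob.glob("/sys/block/sd*/device/wwid"):
    dev = wwid_file.split("/")[3]  # e.g. "sda"
    try:
        with open(wwid_file) as f:
            wwid = f.read().strip()
    except OSError:
        continue
    paths_by_wwid[wwid].append(dev)

for wwid, devs in sorted(paths_by_wwid.items()):
    tag = "dual path" if len(devs) > 1 else "single path"
    print(f"{wwid}: {', '.join(devs)} ({tag})")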
 

gea

Well-Known Member
The BPN-SAS-825TQ is for 8 disks, and it has 8 SATA ports.
To be MPIO-capable with 8 SAS disks, it would need 16 SATA connectors (two per disk).
 

BlueFox

Well-Known Member
SAS expanders do not act like the multiplexing interposers you would find in SATA setups that require multipathing, so your proposed topology won't work. Unless you have specific needs for multipathing (which it doesn't sound like you do), you should not try to over-complicate your setup.
 

AveryFreeman

consummate homelabber
gea said:
This concept is a typical HA cluster solution, e.g. on ZFS.

You need two servers, each with a SAS HBA, and either a SAS disk enclosure with SAS disks (which offer two ports per disk), or a JBOD case with a dual expander and SAS disks, where you can connect one port of a SAS disk to server 1 and the other to server 2, or one expander to server 1 and the other to server 2.
Ahhh... two SATA ports per disk. OK, that answers one question! That's right, you had a picture of the Silverstone CS380 in your guide; I forgot about the two SATA ports per drive slot... So my Supermicro TQ-style backplanes are out of the question.



gea said:
This means that both servers can see all disks and can access them. If this happens simultaneously, the filesystem will usually be corrupted. You either need to take care manually that only one server mounts a filesystem, or you need cluster management software or scripts to manage a failover.
My memory must be really bad, because I just read about all this stuff within the last week, and I had forgotten about both those points. I don't know why I was thinking there could be access by both hosts simultaneously.
 

AveryFreeman

consummate homelabber
BlueFox said:
SAS expanders do not act like the multiplexing interposers you would find in SATA setups that require multipathing, so your proposed topology won't work. Unless you have specific needs for multipathing (which it doesn't sound like you do), you should not try to over-complicate your setup.
I think I will take that advice and stick with my original plan of doing a storage network instead of an HA direct-connect storage failover setup. I had forgotten that only one host can access the pool at a time (duh).