Using SAS expander for multi-host, multi-channel failover setup?


AveryFreeman

consummate homelabber
Mar 17, 2017
413
54
28
42
Near Seattle
averyfreeman.com
Hey,

I'm pretty new to SAS expanders, so this might sound kind of ridiculous, but I recently learned about connecting 2 hosts to 1 backplane, so bear with me...

I have 2 Supermicro hosts in cases with the TQ-style backplanes, so one SATA-style port per drive.

I'm wondering if multiple hosts can be connected to a PCI-type expander, e.g.

If one 4-lane SFF-8087 cable goes from the HBA to one input of the SAS expander, and 16 to 24 output ports on the expander (via 4-6 SFF-8087 connectors) then become available,

Could a second host be connected to the other input of the expander and share the 16-24 output ports?

e.g. Host 1 HBA(4) --> ExpanderIN1(4) --> ExpanderOUT(16) --> TQBackplane(16)
Host 2 HBA(4) --> ExpanderIN2(4) --> --^^^

Please don't laugh...
 

cesmith9999

Well-Known Member
Mar 26, 2013
1,420
470
83
While that may work (?), I have not tried it, and I think it would fail.

It is definitely not a recommended architecture.

You should use an external DAS that has 2 controllers (one for channel a and one for channel b) and SAS drives.

SATA drives do not have a mechanism for sharing hosts; that is why SAS-to-SATA interposers exist, so that SATA drives can be shared. That drives up the cost of your "solution".

What are you really trying to do?

Chris
 

gea

Well-Known Member
Dec 31, 2010
3,155
1,193
113
DE
This concept is a typical HA cluster solution, e.g. on ZFS.

You need two servers, each with a SAS HBA, and either a SAS disk enclosure with SAS disks (offering 2 x SATA ports per disk) or a JBOD case with a dual expander and SAS disks, where you can connect one port of a SAS disk to server1 and the other to server2, or one expander to server1 and the other to server2.

This means that both servers can see and access all disks. If this access happens simultaneously, the filesystem will usually be corrupted. You either need to ensure manually that only one server mounts a filesystem, or you need cluster management software or scripts to manage a failover.

For an example, see my http://www.napp-it.org/doc/downloads/z-raid.pdf
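To make the "scripts to manage a failover" idea concrete: below is a minimal, hypothetical sketch in shell. The pool name "tank", the 30-second timeout, and the heartbeat-age policy are all assumptions for illustration, not anything from napp-it or a real cluster manager.

```shell
#!/bin/sh
# Hypothetical failover sketch for two hosts that both see the same SAS
# disks. Pool name "tank" and the 30-second timeout are assumptions.

POOL="tank"
TIMEOUT=30

# Pure policy helper: allow takeover only if the peer's last heartbeat
# is older than the timeout (both arguments in seconds).
may_take_over() {
    elapsed="$1"
    timeout="$2"
    [ "$elapsed" -gt "$timeout" ]
}

# Import the pool on this node. "zpool import -f" forces the import even
# though the pool was last active on the peer -- only safe once you are
# certain the peer is really down, otherwise both hosts write at once
# and the pool is corrupted (exactly the danger described above).
failover() {
    elapsed_since_heartbeat="$1"
    if may_take_over "$elapsed_since_heartbeat" "$TIMEOUT"; then
        zpool import -f "$POOL"
    fi
}
```

How the heartbeat itself is measured is left out; real cluster software handles peer liveness, fencing, and split-brain far more robustly than a script like this.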
 

AveryFreeman

consummate homelabber
Hey, thanks for the response.

I don't actually have any SATA drives, they're 4TB and 8TB HGST SAS drives.

It's my backplanes that have SATA-style connectors - I can't seem to find an SFF number for SATA connectors; they just seem to be called 7-pin SATA connectors (or 8-pin in the case of some board-powered DOMs), even though they're used on Supermicro's TQ-model SAS backplanes.

Here's the BPN-SAS-825TQ, for example:



You're right that I'm looking at doing a multipath system, and the most conventional thing to do would be to get an external SAS enclosure. But I don't see why it couldn't be done with backplanes, too - it's the same idea, right?


One way would be to get a BPN-SAS2-836EL1, which has four SFF-8087 connectors (for example). Then I assume I'd connect one host's internal HBA connector to one input on the backplane, and the other host's external HBA port to the backplane's other input, using an external-to-internal port adapter card, like this:




I was just wondering if I could do it with PCI-style SAS expanders, because they're cheaper, and I'd rather not have to buy new backplanes, if possible...

Something like this is going for $14 on eBay (Or SAS3, which is like 10x more...):



Since they're so cheap, maybe I should grab a couple just to try it out... I think the big questions are:

1) whether a PCI-style expander like that would even support multipathing, and
2) whether the 7-pin SATA connectors on the backplane would support multipathing...
 

gea

Well-Known Member
The BPN-SAS-825TQ is for 8 disks, and it has 8 SATA ports.
To be MPIO-capable with 8 SAS disks, it would need 16 SATA connectors (two per disk).
 

BlueFox

Legendary Member Spam Hunter Extraordinaire
Oct 26, 2015
2,088
1,504
113
SAS expanders do not act like a multiplexing interposer that you would find with SATA setups that require multipathing, so your proposed topology won't work. Unless you have specific needs for multipathing (which it doesn't sound like), you should not try to over-complicate your setup.
 

AveryFreeman

consummate homelabber
This concept is a typical HA cluster solution, e.g. on ZFS.

You need two servers, each with a SAS HBA, and either a SAS disk enclosure with SAS disks (offering 2 x SATA ports per disk) or a JBOD case with a dual expander and SAS disks, where you can connect one port of a SAS disk to server1 and the other to server2, or one expander to server1 and the other to server2.
Ahhh... Two SATA ports per disk. OK, that answers one question! That's right, you had a picture of the Silverstone CS380 in your guide, I forgot about the two SATA ports per drive slot... So my Supermicro TQ-style backplanes are out of the question.



This means that both servers can see and access all disks. If this access happens simultaneously, the filesystem will usually be corrupted. You either need to ensure manually that only one server mounts a filesystem, or you need cluster management software or scripts to manage a failover.
My memory must be really bad, because I just read about all this stuff within the last week, and I had forgotten about both those points. I don't know why I was thinking there could be access by both hosts simultaneously.
 

AveryFreeman

consummate homelabber
Mar 17, 2017
413
54
28
42
Near Seattle
averyfreeman.com
SAS expanders do not act like a multiplexing interposer that you would find with SATA setups that require multipathing, so your proposed topology won't work. Unless you have specific needs for multipathing (which it doesn't sound like), you should not try to over-complicate your setup.
I think I will take that advice and stick with my original plan of doing a storage network instead of an HA direct-connect storage failover setup - I had forgotten that only one host can access the pool at a time (duh).
 

danielkr71

New Member
May 16, 2023
1
0
1
Hello,
First of all, I apologize for reviving the dead. This thread came really close to matching the scenario I have imagined implementing.

I have three Dell/EMC hosts in a Windows Datacenter cluster doing storage and Hyper-V. Backups leave the cluster to be backed up by a stand-alone Backup Exec server. I'm getting quotes for an LTO9 autoloader to replace a many-years-old, externally SAS-connected LTO5 drive.

Backup Exec is running on Windows Server 2016, and the AD servers are on Datacenter 2019, so I don't get the advantage of GRT AD backups. Also, the BE server is outside the cluster and is a single point of failure. I want to install Backup Exec as a role on the cluster and have the 3 cluster nodes connected to the tape drive, such that whichever node is hosting the Backup Exec role has access to the tape drive. In this configuration, only one server will ever be actively using the tape drive at a time. Also, with the BE server role on the cluster, it will have the advantage of running on an OS version equal to the Active Directory servers' and therefore compatible with GRT backups.

This thread seemed to stop at the problem of multiple hosts potentially accessing one shared storage device at the same time. Has anyone had experience with my imagined solution, where multiple servers and a single SAS device connect to the same bus, but only one server ever uses it at a time? Also, I've poked around looking for iSCSI-to-SAS bridges, but they seem to be manufactured only for disk arrays.

Thanks for your time,

Daniel