Sharing DAS storage (SAS3) between multiple servers?


IamSpartacus

Well-Known Member
Mar 14, 2016
Is it possible to "share" direct-attached storage connected via SAS3 between multiple servers for a failover scenario? I have the following unit, which comes with 2 quad SAS3 modules for connectivity to servers. Ideally I'd like to be able to connect this unit to 2 servers, even if one is just in "standby" mode, so that I can quickly and easily make use of the storage if my main server goes down for whatever reason. I know this is possible with multipathed storage using something like iSCSI, but I can't find a lot of information on this being done with storage connected directly to external SAS HBAs.

 

jerryxlol

Member
Nov 27, 2016
I believe this is what you are looking for. It requires good Linux knowledge ;) Home · ewwhite/zfs-ha Wiki or netberg | Main
(you can skip ZFS and use any FS).

Be careful when fencing the volume on multipath, and another problem is that this is not ALUA, so the second server is rather passive. I have not yet found a good manual for ALUA storage (iSCSI mounted on a virtual IP, which does not support ALUA).
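Before attempting any of that, it is worth checking that both servers really do see the same shared disks over SAS. A rough sketch (device names are only examples; package names vary by distro):

```bash
# Run on each node: the same disk WWNs should show up on both servers.
lsblk -o NAME,SIZE,WWN,TRAN | grep -i sas

# If each server is cabled to both expander modules of the enclosure,
# multipathd (once installed and enabled) should report two paths per disk.
sudo multipath -ll
```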
 

Lix

Member
Aug 6, 2017
Windows Storage Spaces can do shared SAS; not sure how well supported it is in Server 2019+.
 

IamSpartacus

Well-Known Member
Mar 14, 2016
I believe this is what you are looking for. It requires good Linux knowledge ;) Home · ewwhite/zfs-ha Wiki or netberg | Main
(you can skip ZFS and use any FS).

Be careful when fencing the volume on multipath, and another problem is that this is not ALUA, so the second server is rather passive. I have not yet found a good manual for ALUA storage (iSCSI mounted on a virtual IP, which does not support ALUA).
Thank you for these links, very helpful.


Windows Storage Spaces can do shared SAS; not sure how well supported it is in Server 2019+.
Windows is out of the question, but thank you for the suggestion.
 

IamSpartacus

Well-Known Member
Mar 14, 2016
Nah, it's not about losing drives to redundancy. I just don't want my data striped. I much prefer non-striped arrays for the kind of data (mostly media for streaming) that I'm working with.
 

jerryxlol

Member
Nov 27, 2016
Why would it not work with another FS? As I mentioned, good Linux knowledge is required.

The point is which node has the FS mounted: you cannot have the FS mounted on both nodes at the same time.

The guide carries over to other filesystems only where ZFS isn't involved, so you need to look up how to set up Pacemaker and Corosync for your filesystem of choice.
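As a rough sketch of what that could look like for, say, XFS on the shared storage (this assumes a two-node Pacemaker/Corosync cluster is already built with pcs; the device paths, mount point, and resource names below are placeholders, not from the zfs-ha guide):

```bash
# SCSI-3 persistent reservation fencing on the shared disk, so a failed or
# misbehaving node is cut off from the storage before the other node mounts it.
sudo pcs stonith create scsi-fence fence_scsi \
    devices=/dev/mapper/mpatha \
    pcmk_host_list="node1 node2" \
    meta provides=unfencing

# The filesystem as a cluster resource: Pacemaker mounts it on exactly one
# node at a time and moves it over on failover.
sudo pcs resource create shared_fs ocf:heartbeat:Filesystem \
    device=/dev/mapper/mpatha1 directory=/mnt/shared fstype=xfs
```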
 

IamSpartacus

Well-Known Member
Mar 14, 2016
Why would it not work with another FS? As I mentioned, good Linux knowledge is required.

The point is which node has the FS mounted: you cannot have the FS mounted on both nodes at the same time.

The guide carries over to other filesystems only where ZFS isn't involved, so you need to look up how to set up Pacemaker and Corosync for your filesystem of choice.
I didn't say it wouldn't work, I asked the question.

With regard to Pacemaker and Corosync, why are they needed if the FS is only mounted on one node at a time? If both nodes can see the drives at the same time due to the multipath SAS drives and multi-controller/DAS setup, couldn't one simply unmount/mount the drives as needed?

I assume Pacemaker/Corosync is only needed for real-time automatic failover?

My use case is just being able to quickly and easily mount my storage on a second server (that is already running and doing other tasks) for when my main server is down for maintenance.
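For that kind of planned handover, the manual procedure could be as simple as the following (a sketch only; device and mount point names are placeholders, and it assumes a plain filesystem such as XFS on the shared disks):

```bash
# On node1 (main server), before taking it down for maintenance:
sudo umount /mnt/media

# On node2 (standby), only after node1 has released the filesystem:
sudo mount /dev/mapper/mpatha1 /mnt/media

# With ZFS instead, the equivalent handover would be `zpool export tank`
# on node1 followed by `zpool import tank` on node2.
```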
 

i386

Well-Known Member
Mar 18, 2016
Is it possible to "share" direct-attached storage connected via SAS3 between multiple servers for a failover scenario?
It should be possible to build hot-failover storage systems with cluster-aware filesystems. (In my opinion these filesystems and their configuration are complex and not worth the hassle for home lab setups.)
 

IamSpartacus

Well-Known Member
Mar 14, 2016
It should be possible to build hot-failover storage systems with cluster-aware filesystems. (In my opinion these filesystems and their configuration are complex and not worth the hassle for home lab setups.)
I'm not looking for hot failover. I'd be fine with manually mounting the storage on Node 2, as long as Node 1 and Node 2 can both "see" the storage at the same time.
 

jerryxlol

Member
Nov 27, 2016
It should be possible to build hot-failover storage systems with cluster-aware filesystems. (In my opinion these filesystems and their configuration are complex and not worth the hassle for home lab setups.)
I was sticking to this topic and the links he asked about. I know cluster filesystems, but that is not part of the project the OP asked about, so the answers have to be strict. If he mounts XFS on both nodes he will corrupt data :)
 

IamSpartacus

Well-Known Member
Mar 14, 2016
I was sticking to this topic and the links he asked about. I know cluster filesystems, but that is not part of the project the OP asked about, so the answers have to be strict. If he mounts XFS on both nodes he will corrupt data :)
I have no intention of mounting on both nodes at the same time. But both systems being able to see the drives at the same time and the drives being mounted on both at the same time are not the same thing.
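To make the distinction concrete (placeholder device names, plain non-cluster filesystem): both nodes can list the disk at any time, but nothing in the filesystem itself prevents a second mount, so the single-mount rule has to be enforced by the admin or by fencing.

```bash
# On either node, mounted or not, the shared disk is visible:
lsblk /dev/mapper/mpatha

# On the node you are about to mount on, check it isn't already mounted
# locally (this cannot see the other node's mounts):
findmnt /dev/mapper/mpatha1 || echo "not mounted here"
```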