DAS Shared Storage - Practices/Requirements


Radian

Member
So I have two Dell R430s and a bunch of SAS disks (chassis to be procured) that I want to link to the two cluster nodes. I've already tried iSCSI, but I have SPOF issues with the SAN box (both Windows Storage Server and Synology), so I'm looking at DAS to eliminate the dependence on a third-party box.

I've done lots of research, but I can't determine whether all HBA/RAID controllers can handle dual-host configurations. Avago Syncro is basically what I'm after, but not at the $7K asking price.

Am I correct in thinking most controllers will support dual hosts on the SAS link, and I just need a backplane that is capable? Or do I need special SAS disks?

What's the verdict on the following:
  • HBA or RAID controller (with external SFF-8088/SFF-8644 connectors)
  • Storage Spaces or RAID (for VM hosting)
  • Is RAID possible with dual hosts?

Appreciate the feedback. Cheers
 

Chuckleb

Moderator
I'm assuming that you want the same disks presented to both nodes, not just a large JBOD that hands out individual disks to each node? If the latter, you can do that with an external SAS switch by creating zones and divvying them out to the respective hosts. The former, a shared pool of disks, seems to me to require the controllers to be smart and share things like cache and write state, similar to cc-NUMA for memory, so they don't invalidate each other's caches or overwrite each other.

I'm not sure, though; just a feeling. I've tagged the thread to see what others say! ;)
 

cesmith9999

Well-Known Member
It depends on whether you want hardware or software RAID. There are a few RAID cards with a separate interconnect to share cache and RAID state between the cards. As you mentioned, they are expensive... and probably not worth the hassle.

Storage Spaces clustering is basically software RAID 1(0) using shared JBOD disks (in the future this requirement goes away with Storage Spaces Direct). You can tier/pin data and have a write-back cache using SSDs.
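
For reference, the shape of it in PowerShell is roughly this (an untested sketch; "SharedPool"/"VMSpace", the tier sizes, and the subsystem wildcard are placeholders — the clustered subsystem's friendly name varies by OS version):

Code:
# Disks visible to the cluster that are not yet in any pool
$disks = Get-PhysicalDisk -CanPool $true

# Build a pool on the clustered storage subsystem
Get-StorageSubSystem -FriendlyName "Clustered*" |
    New-StoragePool -FriendlyName "SharedPool" -PhysicalDisks $disks

# Define SSD and HDD tiers inside the pool
$ssd = New-StorageTier -StoragePoolFriendlyName "SharedPool" `
    -FriendlyName "SSDTier" -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName "SharedPool" `
    -FriendlyName "HDDTier" -MediaType HDD

# Mirrored (RAID 1-style) tiered space with a 1 GB SSD write-back cache
New-VirtualDisk -StoragePoolFriendlyName "SharedPool" `
    -FriendlyName "VMSpace" -ResiliencySettingName Mirror `
    -StorageTiers $ssd,$hdd -StorageTierSizes 100GB,1TB `
    -WriteCacheSize 1GB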

The HBAs you use for the shared JBOD are separate from the RAID controller/HBA you boot from.

SAS disks are already dual-ported. As long as the JBOD enclosure you are using has two expander controllers, you should be good. Just plug each host into a port on each controller and you are done.
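
If you want to sanity-check it from Windows once it's cabled, something like this should show the enclosure and confirm the disks report as SAS (a rough sketch; Get-StorageEnclosure needs 2012 R2 or later):

Code:
# The enclosure should be visible and healthy from both nodes
Get-StorageEnclosure | Format-List FriendlyName, NumberOfSlots, HealthStatus

# BusType should be SAS for every shared disk
Get-PhysicalDisk | Select-Object FriendlyName, BusType, CanPool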

With Storage Spaces you can do both types of clusters: traditional clustering, or you can create a Scale-Out File Server (SOFS), which really has only one purpose: to be a highly available VHD(X) container that a Hyper-V farm accesses over SMB3.
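
The SOFS side is only a couple of cmdlets once the failover cluster exists. A rough sketch (role name, share name, path, and the Hyper-V host computer accounts are placeholders, and the folder's NTFS ACLs need to match the share permissions):

Code:
# Add the Scale-Out File Server role to an existing failover cluster
Add-ClusterScaleOutFileServerRole -Name "SOFS"

# Publish a continuously available SMB3 share on a CSV for Hyper-V
New-SmbShare -Name "VMShare" -Path "C:\ClusterStorage\Volume1\Shares\VM" `
    -FullAccess "DOMAIN\HyperVHost1$","DOMAIN\HyperVHost2$" `
    -ContinuouslyAvailable $true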

The real question is what is your need and what is your budget?

Chris
 

Radian

Member
Thanks cesmith9999, that is basically what I'm after: SOFS to host HA VHDXs. I've got the HBAs on order (9300-8e), so I will be going with Storage Spaces. The R430s already have a PERC H330 for boot, so no issues there. My VMs will be on the same hosts as the SOFS; is that an issue? The quantity of VMs is small, but VM availability is a priority.

I'm looking at the Supermicro JBOD chassis SC837E26-RJBOD1, which comes with dual expander controllers, so I should be good to go, I guess?

Budget is reasonable, but I can't push $7K for the Syncro when that doesn't even include a chassis. Out of interest, what does the Syncro offer over what I'm trying to do? Is it offering RAID between two hosts?
 

cesmith9999

Well-Known Member
Syncro is one of the hardware shared-disk RAID cards; it would give you hardware RAID across nodes. It has a few limitations: it is only good for two nodes, you may still need a shared SAS chassis, you should not try to boot from it, and you should not use the same backplane as the disks you boot from.

SOFS is scalable up to 16 nodes now (with SAS switches), and up to 32 with Storage Spaces Direct (Server 2016).
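
(For what it's worth, on Server 2016 the shared-JBOD step goes away and you instead pool the nodes' internal disks with one cmdlet on a formed cluster, roughly:)

Code:
# Server 2016: pool internal disks across nodes instead of a shared JBOD
Enable-ClusterStorageSpacesDirect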

You have Dell servers. Have you looked at using a Dell chassis, like the MD1220 (6Gb SAS) or the newer MD1420 (12Gb SAS)?

The Supermicro chassis SC837E26-RJBOD1 should work, but I do not have any direct experience with it. You would plug one cable from each node into the top two ports, and again one cable from each node into the bottom two ports.
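
One thing to remember with two paths per node: enable MPIO so each dual-ported disk shows up once instead of twice. Roughly:

Code:
# Each disk is visible over two paths; let MPIO claim SAS devices
Install-WindowsFeature -Name Multipath-IO   # may need a reboot
Enable-MSDSMAutomaticClaim -BusType SAS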

SOFS is not a platform that is meant to be hyper-converged. You should have separate Hyper-V servers, and 2 × 10Gb or better networking between the Hyper-V (cluster) servers and the SOFS cluster.
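
From the Hyper-V side the share is then just a UNC path. A rough sketch with placeholder names:

Code:
# Create a VM whose configuration and VHDX live on the SOFS share
New-VM -Name "TestVM" -MemoryStartupBytes 2GB `
    -Path "\\SOFS\VMShare\TestVM" `
    -NewVHDPath "\\SOFS\VMShare\TestVM\disk0.vhdx" -NewVHDSizeBytes 60GB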

Chris