Multiple FreeNAS Hosts Access for a JBOD/Array


pstoianov

New Member
Jul 17, 2013
Hello,

I have recently started to build an HA NAS. The idea is to have two servers with LSI MegaRAID 9286 and 9260 controllers and one external shared SAS JBOD, exporting iSCSI.
This is the build I've put together over the last few weeks of research for both nodes (servers). Each server will have:
- LSI MegaRAID-9260-8e;
- SuperMicro Server - LGA771/E5410;
- Dual port 10GbE;
- Dual port 1GbE;

Shared components:
- JBOD/enclosure ??????;
- 12x WDC RE4 2TB in RAID6;
- 3x 120GB STEC SSD CacheCade dedicated;

Has anyone tried to build the architecture below, where two servers share the same enclosure in an active/passive configuration with failover?





Which (inexpensive) JBOD/enclosure do you think can work in such an HA mode, where both servers' MegaRAID controllers are connected to the same JBOD in RAID 6?
Do I need dual-port I/O, or will single-port I/O also work?
How will they mount/unmount in case of failure?
Do they need some synchronization between them?
What will happen if a disk fails and both RAID controllers start to rebuild the array?

Thank you!
 

dba

Moderator
Feb 20, 2012
San Francisco Bay Area, California, USA
That's a big topic. I'll take on one key part of it, but only at a cursory level.

If you want a pile of disks to be shared, over a SAS connection, by two servers, that pile of disks needs to have some intelligence. The most minimal solution is to find a JBOD chassis that has an expander built in that supports multiple connections to the same disks. It's the expander that provides the intelligence. Unfortunately, most expander chassis do not do this.

With an expander chassis, assuming the expander supports multiple host ports, both servers will see the same disks. That's only half the battle: you then need software on the servers to coordinate access to the disks, turning the system into a cluster. This is not trivial, and even commercial solutions can be tricky, requiring just the right combination of operating system versions, drivers, configuration, etc. Here, for example, is some information from LSI: Shared Storage Solutions
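One quick sanity check once both servers are cabled up: if the expander really does present the same disks on multiple paths, each physical drive will show up with the same WWN on every path. A minimal sketch of that check on Linux (the WWNs below are made up for illustration; in practice you would feed it the output of `lsblk -d -o NAME,WWN`):

```shell
# Hypothetical lsblk-style output; real data would come from: lsblk -d -o NAME,WWN
cat <<'EOF' > /tmp/disks.txt
sda 0x5000cca012345678
sdb 0x5000cca012345678
sdc 0x5000cca0deadbeef
EOF

# A WWN that appears more than once means two paths see the same physical disk
awk '{print $2}' /tmp/disks.txt | sort | uniq -d
```

Here sda and sdb are two paths to the same drive, so the check prints its WWN; on a single-path setup the output would be empty.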

What OS are you planning to use and what do you intend to do with the storage? Depending on your needs, it will probably be easier and less expensive to take another approach.


pstoianov

New Member
Jul 17, 2013
Here is the AIC SS2004 JBOD which should support it (just found): SAS/SATA JBOD -Raid Storage - SS2004 / XJ-SA26-212R-B
Expansion Slots: 3 x mini-SAS connectors per I/O module ( 2 x host interface + 1 x expansion interface)
See page 3 at doc: http://www.rackmountmart.com/dataSheet/jbod-connection.pdf

What I've found so far is that Windows Server can manage such a storage cluster (something similar to LSI Syncro).

Unfortunately, AIC has very poor documentation, because they supply OEMs, who are responsible for writing the docs.
So I don't understand how such storage could be managed under Linux or FreeBSD.
 

dba

Moderator
Feb 20, 2012
San Francisco Bay Area, California, USA
That AIC chassis is a classic dual-IO JBOD. It uses two separate IO boards, each of which has two SAS host ports and one expansion SAS port. Luckily for you, the IO boards do allow shared SAS, according to the specs. To use both boards for redundant connections to the two servers, you would need dual-ported SAS drives.


dba

Moderator
Feb 20, 2012
San Francisco Bay Area, California, USA
Here are some additional options for you, or for other readers considering the same approach:

1) Use iSCSI instead of SAS. It is far easier to inexpensively build your own iSCSI server than it is to get shared SAS to work. Of course this means updating to a very fast network if you don't already have one. A few of us have been putting together storage servers based on HP DL180 G6 boxes with Infiniband or 10GbE. They are very very fast and very reasonably priced, with redundancy via mirroring or RAID but not high availability. Choices include Napp-it, Windows 2012, and others. The right choice probably depends on what you plan to do with the array.

2) Buy a commercial SAN, used of course, that already provides iSCSI or shared SAS. Examples include the HP MSA2000 and Dell MD3000, both of which will fit your budget. For about the price of a new Supermicro or AIC storage chassis with dual IO, you can buy a pre-built, highly available SAN array that is a generation old and thus inexpensive. These won't be as fast as some other solutions - my MSA2312 tops out at around 1 GB/s, for example, while my DL180 G6 can push 3 GB/s - but having someone else make everything work is a huge time saver. The HP, for example - others will have experience with other products - can even utilize your existing SATA drives while still providing dual-porting to both RAID controllers.

3) Software-defined storage clusters. I'm just learning about HP VSA, which looks appealing as a way to turn commodity hardware into a SAN-like clustered storage system. I hope it works, because I like the concept.
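To make option 1 a bit more concrete: on a Linux box, the in-kernel LIO target can export a block device (for example an md/RAID volume) over iSCSI. A hypothetical sketch using targetcli follows - the device path, IQNs, and initiator name are all placeholders, and this is an untested configuration outline, not a recipe:

```
# Export /dev/md0 as an iSCSI LUN via Linux LIO (all names below are hypothetical)
targetcli /backstores/block create name=raid6vol dev=/dev/md0
targetcli /iscsi create iqn.2013-07.lab.example:raid6vol
targetcli /iscsi/iqn.2013-07.lab.example:raid6vol/tpg1/luns create /backstores/block/raid6vol
targetcli /iscsi/iqn.2013-07.lab.example:raid6vol/tpg1/acls create iqn.2013-07.lab.example:node1
targetcli saveconfig
```

The initiators on the other servers would then log in to that IQN over the 10GbE network instead of sharing a SAS domain.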
 

pstoianov

New Member
Jul 17, 2013
First of all - thank you all for your cooperation!

I already have 2 spare Supermicro servers, 10x WDC RE4, 1x 9286eCC, and 1x 9260-8i. I don't yet have 10GbE or the SSDs.
From the research done so far, dual-domain support with the RAID controllers requires dual-port SAS drives, which kill the budget. So I'm thinking of calculating the total cost of the following options:

1) HP MSA2212FC/DotHill or NetApp 3070 - I could then install 2x OpenBSD with 2x dual-port QLogic CNAs to bridge FC to 10GbE Ethernet with almost no resources and no penalties.

2) To achieve fault tolerance: buy 6 more HDDs, split them between the two servers, and install a minimal Linux with uCARP + DRBD, with sync done via 10GbE. This is my cheap plan B, but it loses a lot of HDD capacity. Because of that loss and the 100% redundancy, I could go with RAID 5 instead.
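For reference, the DRBD half of plan B is mostly a single resource file. A minimal sketch, assuming two Linux nodes and DRBD 8.4-style syntax - the hostnames, devices, and addresses are placeholders:

```
# /etc/drbd.d/r0.res -- hypothetical minimal resource; adjust names to your hosts
resource r0 {
    protocol C;                 # synchronous replication over the 10GbE link
    device    /dev/drbd0;
    disk      /dev/sdb;         # local backing device (e.g. the RAID volume)
    meta-disk internal;
    on nas1 { address 10.0.0.1:7789; }
    on nas2 { address 10.0.0.2:7789; }
}
```

uCARP would then float a virtual IP between the nodes, with its --upscript promoting the DRBD resource to primary (drbdadm primary r0) and starting the export, and the --downscript doing the reverse on failback.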


I have experience with EMC CX300, CX500, and CX4, and I can tell you that all commercial SAN/NAS products are built to limit future hardware upgrades and force the end customer to replace the whole solution; EMC and NetApp are typical examples.

So, have any of you done HA with uCARP and DRBD 9?

@dba, what do you mean in point "#1 - Use iSCSI instead of SAS."?
 

dba

Moderator
Feb 20, 2012
San Francisco Bay Area, California, USA
I had missed the fact that you are building a FreeNAS. I was referring to building your own DIY iSCSI storage server (iSCSI target), for example by deploying napp-it on one of the Supermicro servers containing your disks. Since you're trying to get shared storage under the FreeNAS that you are building, this does not apply.
 

pstoianov

New Member
Jul 17, 2013
As you know, it seems Openfiler is a dead project (the release of Openfiler v3.0 was scheduled for December 2011), so I checked FreeNAS. But today I was reading about many issues reported by other users in terms of drivers and 10GbE support, so I'm now considering building the HA storage on a normal Linux OS - RHEL, Debian, or CentOS - or even Solaris 10/11, plus DRBD, where I have full control and visibility of what's going on.
I'll keep an eye on napp-it + Solaris 11 + DRBD.
 

Anton aus Tirol

New Member
Oct 20, 2013
Sorry to bump the old thread but did you manage to complete your project? How's FreeNAS doing? :)
