Strange... No one is talking about OSNEXUS (Quantastor)


XeonSam

Active Member
Aug 23, 2018
159
77
28
Is it just me, or does no one here use OSNEXUS for SAN/storage? QuantaStor Software Defined Storage

It's a paid SAN solution (software-defined, yada yada) with a community version that allows up to 10TB of storage; you can email them and get that bumped up to 100TB. It's ZFS on Linux with support for FC, which is rare... and you don't need HBAs like FreeNAS does; you can actually use RAID cards with BBUs.

The company is small, but support is pretty quick if you email them directly, even for the community edition. It had a lot of bugs back in early 2017, but it's much more stable now. And if you're a Linux person, there's quite a lot of customizing you can do (can't stand Solaris or FreeBSD... so difficult).

FreeNAS is my NAS of choice, of course, but for iSCSI and FC, OSNEXUS seems to do the job pretty well. Of course I would think twice before using it for production, but for home use I can't complain.
 
  • Like
Reactions: ecosse

kapone

Well-Known Member
May 23, 2015
1,095
642
113
Not knocking Quanta per se...but...lately, it seems every kid on the block comes up with a storage "solution" and thinks they are the best thing since sliced bread.

I'm in the middle of planning a production storage system (with a budget of ~$2M) and I've been researching and meeting with vendors for over 2 months. And I'm still not convinced any of their offerings have any compelling advantages. Lots of buzzwords, very little meat.
 

ecosse

Active Member
Jul 2, 2013
463
111
43
Not knocking Quanta per se...but...lately, it seems every kid on the block comes up with a storage "solution" and thinks they are the best thing since sliced bread.

I'm in the middle of planning a production storage system (with a budget of ~$2M) and I've been researching and meeting with vendors for over 2 months. And I'm still not convinced any of their offerings have any compelling advantages. Lots of buzzwords, very little meat.
In general I probably agree, but I think the OP's point was around the home-use angle.
 

Evan

Well-Known Member
Jan 6, 2016
3,346
598
113
At a couple of million, if you want either IOPS or capacity, I would only be entertaining the large commercial offerings, built on whatever technology flavor you like.

Back on topic: I guess most people generally want either free open source (or at least free), or something that helps with skills in the workplace. Most of these vendors are hoping to get purchased by a bigger company, I think... there are already a lot of storage solutions out there right now, and unless they have something super special, in the end only the big will survive.

For storage and base networking you tend to want the most reliable option; it's not generally an area where most people are happy to take on much risk, even if that risk is just a support risk.
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
The reason you do not want RAID cards (instead of HBAs) on FreeNAS is the same reason you do not want them with ZFS on Linux or Solaris: ZFS expects direct access to the disks for its own checksumming and error handling. You "can" use them, but you do not want to.
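
(Not from the post above, just an illustration of the point.) One practical symptom of putting ZFS behind a hardware RAID controller is that the OS only sees the controller's virtual disk, so drive-level SMART data isn't visible without controller-specific passthrough options. A minimal sketch of that check, assuming smartmontools is installed; the device paths are hypothetical:

```python
# Illustrative only: check whether SMART data is visible on a block device
# directly, which is one symptom of whether the OS sees real drives (HBA)
# or a RAID controller's virtual disk. Device paths are hypothetical.
import subprocess

def smart_visible(device: str) -> bool:
    """Return True if smartctl can read SMART info from the device directly."""
    result = subprocess.run(
        ["smartctl", "-i", device],
        capture_output=True,
        text=True,
    )
    # A directly attached SATA disk typically reports "SMART support is: Enabled";
    # a RAID controller's virtual disk usually needs a -d <controller>,<N>
    # passthrough option before smartctl can see the physical drives.
    return "SMART support is: Enabled" in result.stdout

if __name__ == "__main__":
    for dev in ("/dev/sda", "/dev/sdb"):  # hypothetical device paths
        print(dev, "direct SMART access:", smart_visible(dev))
```

On a proper HBA (IT mode) each physical disk shows up directly and the check passes without any controller passthrough flags.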

They have ZFS, GlusterFS, and Ceph at least.

What we need is an updated FreeNAS based on Linux. Or, more accurately, a Proxmox VE plug-in/release that closes the gap on storage management from the GUI.
 

kapone

Well-Known Member
May 23, 2015
1,095
642
113
At a couple of million, if you want either IOPS or capacity, I would only be entertaining the large commercial offerings, built on whatever technology flavor you like.
My requirements aren't completely off the charts. Specifically:

- Be able to handle a 40Gbps stream of data (this is based on our business analysis). The storage system is fed by anywhere from 10-20 compute nodes.
- ZERO downtime. And I mean zero. If that means running xxx storage nodes, so be it.
- ZERO loss in throughput in any failure scenario. And I mean zero. The system must run at its expected throughput, regardless of failures.
- Capacity isn't even that big. ~30TB, which is transient, and changes fairly often.
- Be able to backup the data locally to a different set of system(s) as well as to the DR site, which is a replica of the main system.
- The networks, power, etc. are all redundant in both sites, and they will be in Tier 1 data centers.

I've met with a fair number of the big names (Nutanix, EMC, etc.) and while they talk marketing really well, every time I start asking pointed questions, they go "umm... umm... why don't we get back to you?".
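
For context on those numbers (my own back-of-envelope, not part of the original requirements): 40 Gbps is roughly 5 GB/s aggregate, so with 10-20 compute nodes each node only needs to sustain about 250-500 MB/s toward the storage. A quick sketch:

```python
# Back-of-envelope for the stated requirement: 40 Gbps aggregate throughput
# fed by 10-20 compute nodes. Figures are illustrative only.
GBPS = 40
BYTES_PER_SEC = GBPS * 1e9 / 8   # 40 Gbps ~= 5 GB/s aggregate

for nodes in (10, 20):
    per_node_mb_s = BYTES_PER_SEC / nodes / 1e6
    print(f"{nodes} nodes -> ~{per_node_mb_s:.0f} MB/s per node")

# Sustained around the clock, full rate is ~432 TB/day, far above the ~30 TB
# working set, consistent with the data being transient.
print(f"Full-rate daily volume: ~{BYTES_PER_SEC * 86400 / 1e12:.0f} TB/day")
```

Per node, that rate is fairly modest; the zero-downtime and zero-degradation clauses look like the harder part of the requirement.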
 

Evan

Well-Known Member
Jan 6, 2016
3,346
598
113
When you say compute nodes, are you talking HPC or enterprise apps?
If HPC, then GPFS is really where it's at for commercial solutions; for, say, ESX etc. there are different options. And yes, you have to ask some hard questions. I know the kind of answers you may be getting, and you should probably be prepared for essentially every solution to be a small concession in terms of requirements.
 

XeonSam

Active Member
Aug 23, 2018
159
77
28
I have never seen any merit in Ceph. I know it's all the rage for scale-out, and of course it's not aligned with my use case, but I have yet to see the merits.
 

kapone

Well-Known Member
May 23, 2015
1,095
642
113
When you say compute nodes, are you talking HPC or enterprise apps?
If HPC, then GPFS is really where it's at for commercial solutions; for, say, ESX etc. there are different options. And yes, you have to ask some hard questions. I know the kind of answers you may be getting, and you should probably be prepared for essentially every solution to be a small concession in terms of requirements.
Not to take the thread off track...

The compute nodes are enterprise apps that run custom code bare metal on Linux (we could even run on *BSD). All nodes essentially run the same app, but with different dataset(s).
 

Evan

Well-Known Member
Jan 6, 2016
3,346
598
113
I have never seen any merit in Ceph. I know it's all the rage for scale-out, and of course it's not aligned with my use case, but I have yet to see the merits.
Would always choose GPFS over Ceph if the $$ was not an issue. Ceph certainly has its place though.
 

fibrewire

New Member
Feb 6, 2019
2
0
1
Is it just me, or does no one here use OSNEXUS for SAN/storage? QuantaStor Software Defined Storage
I've been using QuantaStor in production since February of 2011, and it hasn't let me down since. Steve Umbehocker really helped me through the details in the beginning, and I've helped vet the system since its early BTRFS days. I was really blown away by its performance and still beg for additional features to be included in the trial version on an annual basis.

Some really great features are:
* High-availability clustering, which they call a "storage grid"
* Store data in the cloud across multiple inexpensive providers like S3 and Google Drive
* Native Fibre Channel support
* Native RAID card support with GUI access to RAID commands & status (run RAID functions like rebuild, etc.)
* Ability to granularly restore files from past snapshots easily (requires a per-VM container config)
* Regularly see 2X to 50X storage efficiency depending on VM similarity at the block level
* Host machines don't need a local OS; they can boot directly from and attach to QuantaStor storage via iSCSI HBA (most server gigabit Ethernet adapters include this feature)

And the real kick in the stomach is...

*** Easily get 2X the IOPS and beyond on read/write over FreeNAS due to using a RAID controller instead of an HBA; also, no headaches when hot-swapping a disk.

Here are a couple of screenshots from 2011.
 


WaltR

New Member
Feb 12, 2019
2
1
3
My requirements aren't completely off the charts. Specifically:

- Be able to handle a 40Gbps stream of data (this is based on our business analysis). The storage system is fed by anywhere from 10-20 compute nodes.
- ZERO downtime. And I mean zero. If that means running xxx storage nodes, so be it.
- ZERO loss in throughput in any failure scenario. And I mean zero. The system must run at its expected throughput, regardless of failures.
- Capacity isn't even that big. ~30TB, which is transient, and changes fairly often.
- Be able to backup the data locally to a different set of system(s) as well as to the DR site, which is a replica of the main system.
- The networks, power, etc. are all redundant in both sites, and they will be in Tier 1 data centers.

I've met with a fair number of the big names (Nutanix, EMC, etc.) and while they talk marketing really well, every time I start asking pointed questions, they go "umm... umm... why don't we get back to you?".
I'd look at Oracle's ZFS Appliance.
 

m4r1k

Member
Nov 4, 2016
75
8
8
35
I'd look at Oracle's ZFS Appliance.
Well, kinda yes and no.
The ZFSSA team was massively laid off in September 2017.

If the requirements are really 40Gbps of I/O, the solution can only be a higher storage tier like a VMAX or a Hitachi. Maybe I'm wrong, but I don't believe any midrange solution can handle 40Gbps nonstop.

The reason the Dell EMC storage people were kinda insecure is some of those requirements. I mean, you can get a Hitachi like the ones some banks use as their mainframe backend, which can handle I don't even know how many millions of IOPS and is 100% nonstop. But you're gonna pay what, 4-5 million euros, if not more?
And the guy here has 20 compute nodes...
It doesn't make any sense whatsoever.
 

WaltR

New Member
Feb 12, 2019
2
1
3
The ZS7-2 high-end all-flash config should be able to handle it. Three years ago their hybrid NAS was threatening Hitachi and VMAX for throughput. The previous version (ZS5-2) got an SPC-2 MBPS™ rating of 24,397.12 MBPS over InfiniBand in 2017; that's close to 200 Gbps (24,397 MB/s × 8 ≈ 195 Gbps). It's a synthetic spec, but that's a lot of headroom.

There's always a lot of FUD around the ZFSSA, but Oracle eats its own dog food: Oracle and Oracle's cloud run exclusively on the ZFSSA, so I wouldn't worry about stability and longevity. They market almost exclusively to Oracle database customers and put a lot into Oracle DB integration, but that doesn't take away from its utility as a general-purpose NAS.

One thing I like is that they use industry-standard components; you could plug in drives you bought from Amazon if you wanted to. Try that with an EMC system.
 
  • Like
Reactions: tjk

fibrewire

New Member
Feb 6, 2019
2
0
1
Try that with an EMC system.
Dell removed the limitation on their server drives, but you have to reflash the RAID card. On the originally shipped firmware it would state that the drives were invalid and blink all of the drive lights as a fault, even if they were the same make and model of drive. I wonder if that was an EMC initiative...
 

ekke

Member
Nov 16, 2015
166
8
18
45
Looked at HP's offerings?
Otherwise you have StorPool, which is interesting.
 

Bjorn Smith

Well-Known Member
Sep 3, 2019
876
481
63
49
r00t.dk
Sorry for the necro, but the community edition now allows 40TB of storage and a cluster of up to 4 machines, for a total of 160TB. I am not sure how (or if) that would be presented as a single 160TB pool versus 4 individual ones, but it seems like a nice solution. They support FC and InfiniBand (possibly also iSER), and it also seems like they support NVMe over RDMA and NVMe over TCP.

So it is definitely more advanced than FreeNAS; the only caveat is that it seems to require that you refresh your license every 2 years.

Under the hood they are running ZoL on Ubuntu, it seems, so it should be "easy" to migrate away from them in the future in case it turns out to be a PITA.

If I had two machines and a bunch of NVMe drives I would definitely try to set up a cluster and see how it performs compared to my FreeNAS solution.
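
If the pools really are plain ZoL under the hood (as suggested above; I haven't verified it), one way to test the migration claim would be to export a pool and import it read-only on a stock Ubuntu host with OpenZFS installed. A rough sketch driving the standard zpool CLI; the pool name "tank" is hypothetical:

```python
# Rough sketch: check whether a QuantaStor-created pool imports on a stock
# Ubuntu host running OpenZFS (ZoL). Requires the zfsutils-linux package and
# root privileges. The pool name "tank" is hypothetical.
import subprocess

def run(cmd: list[str]) -> str:
    print("+", " ".join(cmd))
    out = subprocess.run(cmd, capture_output=True, text=True)
    return out.stdout + out.stderr

# With no pool name, `zpool import` just lists pools available for import.
print(run(["zpool", "import"]))

# Import read-only first so nothing on the pool is modified while testing.
print(run(["zpool", "import", "-o", "readonly=on", "tank"]))
print(run(["zpool", "status", "tank"]))
```

Importing read-only first means nothing gets written even if some pool feature flags aren't supported read-write by the host's OpenZFS version.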