Strange... No one is talking about OSNEXUS (Quantastor)

Discussion in 'Solaris, Nexenta, OpenIndiana, and napp-it' started by XeonSam, Sep 9, 2018.

  1. XeonSam

    XeonSam New Member

    Joined:
    Aug 23, 2018
    Messages:
    25
    Likes Received:
    13
    Is it just me, or does no one here use OSNEXUS for SAN/storage? QuantaStor Software Defined Storage

    It's a paid SAN solution (software-defined yada) which has a community version that allows up to 10TB of storage. You can email them and get that bumped up to 100TB. It's ZFS on Linux with support for FC, which is rare... and you don't need HBAs like FreeNAS does; you can actually use RAID cards with BBUs.

    The company is small, but the support is pretty quick if you mail them directly, even for the community edition. It had a lot of bugs back in early 2017, but it's much more stable now. And if you're a Linux man, there's quite a lot of customizing you can do (can't stand Solaris or FreeBSD... so difficult).

    FreeNAS is my NAS of choice, of course, but for iSCSI and FC, OSNEXUS seems to do the job pretty well. Of course I would think twice about using it for production, but for home use, I can't complain.
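
    If you're curious what that plumbing looks like without the GUI, here's a minimal sketch of carving a ZVOL out of a plain ZFS-on-Linux pool and exporting it over iSCSI with LIO/targetcli. The pool name, volume size, and IQNs are made-up placeholders, and this is generic ZFS on Linux, not QuantaStor's actual workflow:

    Code:
    #!/usr/bin/env python3
    """Sketch: expose a ZFS zvol over iSCSI on a generic ZFS-on-Linux box.

    Assumptions (not QuantaStor specifics): a pool named 'tank' already exists,
    the zfs utilities and targetcli (LIO) are installed, and this runs as root.
    """
    import subprocess

    POOL = "tank"                      # hypothetical pool name
    ZVOL = f"{POOL}/iscsi-vol1"        # hypothetical dataset name
    DEV = f"/dev/zvol/{ZVOL}"          # device node ZFS creates for the zvol
    IQN = "iqn.2018-09.local.lab:iscsi-vol1"   # made-up target IQN

    def run(cmd):
        """Run a command and fail loudly if it errors."""
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # 1. Create a sparse 64G block volume on the existing pool.
    run(["zfs", "create", "-s", "-V", "64G", ZVOL])

    # 2. Register the zvol as a LIO block backstore.
    run(["targetcli", "/backstores/block", "create", "name=iscsi-vol1", f"dev={DEV}"])

    # 3. Create the iSCSI target, map the backstore as a LUN, and allow a
    #    (hypothetical) initiator.
    run(["targetcli", "/iscsi", "create", IQN])
    run(["targetcli", f"/iscsi/{IQN}/tpg1/luns", "create", "/backstores/block/iscsi-vol1"])
    run(["targetcli", f"/iscsi/{IQN}/tpg1/acls", "create", "iqn.2018-09.local.lab:client1"])

    # 4. Persist the LIO configuration across reboots.
    run(["targetcli", "saveconfig"])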
     
    #1
    ecosse likes this.
  2. kapone

    kapone Active Member

    Joined:
    May 23, 2015
    Messages:
    611
    Likes Received:
    240
    Not knocking QuantaStor per se... but lately it seems every kid on the block comes up with a storage "solution" and thinks it's the best thing since sliced bread.

    I'm in the middle of planning a production storage system (with a budget of ~$2M) and I've been researching and meeting with vendors for over two months. And I'm still not convinced any of their offerings have any compelling advantages. Lots of buzzwords, very little meat.
     
    #2
    T_Minus and gigatexal like this.
  3. ecosse

    ecosse Active Member

    Joined:
    Jul 2, 2013
    Messages:
    333
    Likes Received:
    53
    In general I probably agree, but I think the OP's point was about the home-use angle.
     
    #3
  4. Evan

    Evan Well-Known Member

    Joined:
    Jan 6, 2016
    Messages:
    2,716
    Likes Received:
    396
    At a couple of million, if you want either IOPS or capacity, I would only be entertaining the large commercial offerings, built on whatever technology you prefer.

    Back on topic: I guess most people generally want either free open source (or at least free), or something that helps with skills in the workplace. Most of these vendors are just hoping to get purchased by a bigger company, I think... there are already a lot of storage solutions out there, and unless they have something super special, in the end only the big will survive.

    For storage and the base network you tend to want the most reliable option; it's not generally a place where most people are happy with much risk, even if that risk is just a support risk.
     
    #4
  5. Patrick

    Patrick Administrator
    Staff Member

    Joined:
    Dec 21, 2010
    Messages:
    11,476
    Likes Received:
    4,422
    The reason you do not want RAID controllers on FreeNAS is the same reason you do not want them with ZFS on Linux or Solaris. You "can" use them, but you do not want to: ZFS is designed to see the raw disks directly, and a RAID card's cache and virtual disks get in the way of that.
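
    As a rough illustration: here's a quick heuristic sketch that flags block devices which look like RAID virtual disks rather than raw drives. The model strings are just examples of what common controllers report, not an authoritative list.

    Code:
    #!/usr/bin/env python3
    """Heuristic: do the block devices on this box look like raw disks, or like
    virtual disks presented by a hardware RAID controller?

    Illustration only -- the model strings below are examples, not a complete list.
    """
    import subprocess

    # Substrings that commonly show up in the VENDOR/MODEL of a RAID virtual disk.
    SUSPECT = ("MegaRAID", "PERC", "Virtual Disk", "LOGICAL VOLUME", "Smart Array")

    def main():
        # lsblk -d lists whole devices only; -n drops the header row.
        out = subprocess.run(
            ["lsblk", "-dn", "-o", "NAME,VENDOR,MODEL"],
            capture_output=True, text=True, check=True,
        ).stdout

        for line in out.splitlines():
            parts = line.split(None, 1)
            name = parts[0]
            descr = parts[1].strip() if len(parts) > 1 else ""
            if any(s.lower() in descr.lower() for s in SUSPECT):
                print(f"/dev/{name}: looks like a RAID virtual disk ({descr})")
            else:
                print(f"/dev/{name}: looks like a raw disk ({descr})")

    if __name__ == "__main__":
        main()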

    They have ZFS, GlusterFS, and Ceph at least.

    What we need is an updated FreeNAS based on Linux. Or more accurately, a Proxmox VE plug-in/release that closes the gap on storage management from the GUI.
     
    #5
    T_Minus and Monoman like this.
  6. kapone

    kapone Active Member

    Joined:
    May 23, 2015
    Messages:
    611
    Likes Received:
    240
    My requirements aren't completely off the charts. Specifically:

    - Be able to handle a 40Gbps stream of data (this is based on our business analysis; rough per-node math in the sketch below). The storage system is fed by anywhere from 10-20 compute nodes.
    - ZERO downtime. And I mean zero. If that means running xxx storage nodes, so be it.
    - ZERO loss in throughput in any failure scenario. And I mean zero. The system must run at its expected throughput, regardless of failures.
    - Capacity isn't even that big. ~30TB, which is transient, and changes fairly often.
    - Be able to back up the data locally to a different set of system(s) as well as to the DR site, which is a replica of the main system.
    - The networks, power, etc. are all redundant in both sites, and they will be in Tier 1 data centers.

    I've met with a fair number of the big names (Nutanix, EMC, etc.) and while they talk marketing really well, every time I start asking pointy questions, they go "umm... umm... why don't we get back to you?"
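
    For context on that first requirement, here's the rough back-of-envelope math on the numbers above (just arithmetic, no vendor assumptions):

    Code:
    #!/usr/bin/env python3
    """Back-of-envelope math for the requirements above (nothing vendor-specific)."""

    STREAM_GBPS = 40               # sustained stream from the requirement above
    NODES_MIN, NODES_MAX = 10, 20  # compute nodes feeding the storage
    CAPACITY_TB = 30               # transient working set

    stream_gb_s = STREAM_GBPS / 8                    # 40 Gbps ~= 5 GB/s aggregate
    per_node_lo = stream_gb_s / NODES_MAX * 1000     # MB/s per node with 20 nodes
    per_node_hi = stream_gb_s / NODES_MIN * 1000     # MB/s per node with 10 nodes

    # Zero throughput loss under failure implies every redundant path/controller
    # must be sized for the full aggregate rate on its own.

    # How long the 30 TB working set takes to turn over at full line rate.
    turnover_min = CAPACITY_TB * 1000 / stream_gb_s / 60

    print(f"Aggregate stream:  {stream_gb_s:.1f} GB/s")
    print(f"Per compute node:  {per_node_lo:.0f}-{per_node_hi:.0f} MB/s")
    print(f"30 TB rewritten at line rate in ~{turnover_min:.0f} minutes")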
     
    #6
  7. Evan

    Evan Well-Known Member

    Joined:
    Jan 6, 2016
    Messages:
    2,716
    Likes Received:
    396
    When you say compute nodes, are you talking HPC or enterprise apps?
    If HPC, GPFS is really where it's at for commercial solutions; for, say, ESX etc. there are different options. Yes, you have to ask some hard questions, and I know the kind of answers you may be getting; be prepared for essentially every solution to be a small concession in terms of requirements.
     
    #7
  8. XeonSam

    XeonSam New Member

    Joined:
    Aug 23, 2018
    Messages:
    25
    Likes Received:
    13
    I have never seen the merits of Ceph. I know it's all the rage for scale-out, and of course it's not aligned with my use case, but I have yet to see the merits.
     
    #8
  9. kapone

    kapone Active Member

    Joined:
    May 23, 2015
    Messages:
    611
    Likes Received:
    240
    Not to take the thread off track...

    The compute nodes are enterprise apps that run custom code bare metal on Linux (we could even run on *BSD). All nodes essentially run the same app but with different dataset(s).
     
    #9
  10. Evan

    Evan Well-Known Member

    Joined:
    Jan 6, 2016
    Messages:
    2,716
    Likes Received:
    396
    Would always choose GPFS over Ceph if the $$ was not an issue. Ceph certainly has its place though.
     
    #10
  11. fibrewire

    fibrewire New Member

    Joined:
    Feb 6, 2019
    Messages:
    1
    Likes Received:
    0
    I've been using QuantaStor in production since February of 2011, and it hasn't let me down since. Steve Umbehocker really helped me through the details in the beginning, and I've helped vet the system since its early BTRFS days. I was really blown away by its performance, and I still beg for additional features to be included in the trial version on an annual basis.

    Some really great features are:
    * High-availability clustering, which they call a "storage grid"
    * Store data in the cloud across multiple inexpensive providers like S3 and Google Drive
    * Native Fibre Channel support
    * Native RAID card support with GUI access to RAID commands & status (run RAID functions like rebuild, etc.)
    * Ability to granularly restore files from past snapshots easily (requires per-VM container config)
    * Regularly see 2X to 50X storage efficiency depending on VM similarity at the block level
    * Host machines don't need a local OS; they can boot directly from and attach to QuantaStor storage via an iSCSI HBA (most server gigabit Ethernet adapters include this feature)

    And the real kicker is...

    *** Easily get 2X the IOPS and beyond on read/write over FreeNAS due to RAID controller use instead of an HBA; also no headaches when hot-swapping a disk.
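
    Claims like that are easy to sanity-check. A minimal sketch of an apples-to-apples 4K random-read comparison with fio against a test LUN from each system (the device paths, run time, and job counts are placeholders; fio has to be installed, and the LUNs must be disposable):

    Code:
    #!/usr/bin/env python3
    """Minimal fio wrapper to compare 4K random-read IOPS between two targets.

    The device paths are placeholders -- point them at test LUNs you can destroy,
    never at disks with data you care about. Requires fio to be installed.
    """
    import json
    import subprocess

    def rand_read_iops(device, runtime_s=60, jobs=4, iodepth=32):
        """Run a 4K random-read test against `device` and return measured IOPS."""
        result = subprocess.run(
            [
                "fio",
                "--name=randread",
                f"--filename={device}",
                "--rw=randread",
                "--bs=4k",
                "--direct=1",
                "--ioengine=libaio",
                f"--iodepth={iodepth}",
                f"--numjobs={jobs}",
                "--time_based",
                f"--runtime={runtime_s}",
                "--group_reporting",
                "--output-format=json",
            ],
            capture_output=True, text=True, check=True,
        )
        data = json.loads(result.stdout)
        return data["jobs"][0]["read"]["iops"]

    if __name__ == "__main__":
        # Hypothetical test LUNs exported by the two systems being compared.
        print("quantastor:", rand_read_iops("/dev/sdx"))
        print("freenas:   ", rand_read_iops("/dev/sdy"))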

    Here are a couple of screenshots from 2011.
     

    Attached Files:

    #11
  12. WaltR

    WaltR New Member

    Joined:
    Feb 12, 2019
    Messages:
    2
    Likes Received:
    1
    I'd look at Oracle's ZFS Appliance.
     
    #12
  13. m4r1k

    m4r1k Member

    Joined:
    Nov 4, 2016
    Messages:
    45
    Likes Received:
    5
    Well, kinda yes and no.
    The ZFSSA team was massively laid off in September 2017.

    If the requirements are really 40Gbps of I/O, the solution can only be a higher storage tier like a VMAX or a Hitachi. Maybe I'm wrong, but I don't believe any midrange solution can handle 40Gbps nonstop.

    The reason the Dell-EMC storage people were kinda insecure is some of those requirements. I mean, you can get a Hitachi, the kind some banks use as their mainframe backend, that can handle I don't even know how many millions of IOPS and is 100% nonstop. But you're gonna pay what, 4-5 million euro, if not more?
    And the guy here has 20 compute nodes...
    It doesn't make any sense whatsoever.
     
    #13
  14. WaltR

    WaltR New Member

    Joined:
    Feb 12, 2019
    Messages:
    2
    Likes Received:
    1
    The ZS7-2 high-end all-flash config should be able to handle it. Three years ago their hybrid NAS was threatening Hitachi and VMAX for throughput. The previous version (ZS5-2) got an SPC-2 MBPS™ rating of 24,397.12 MBPS over InfiniBand in 2017. That's close to 200 Gbps. It's a synthetic spec, but that's a lot of headroom.

    There's always a lot of FUD around the ZFSSA, but Oracle eats its own dog food: Oracle and Oracle's cloud run exclusively on the ZFSSA, so I wouldn't worry about stability and longevity. They market almost exclusively to Oracle database customers and put a lot into Oracle DB integration, but that doesn't take away from its utility as a general-purpose NAS.

    One thing I like is that they use industry-standard components; you could plug in drives you bought from Amazon if you wanted to. Try that with an EMC system.
     
    #14
    tjk likes this.
