
Is anyone on ScaleIO 2.0 yet?

Discussion in 'Commercial NAS Systems' started by Patrick, Mar 30, 2016.

  1. Patrick

    Patrick Administrator
    Staff Member

    Joined:
    Dec 21, 2010
    Messages:
    8,657
    Likes Received:
    2,458
    #1
  2. cesmith9999

    cesmith9999 Well-Known Member

    Joined:
    Mar 26, 2013
    Messages:
    789
    Likes Received:
    255
Not yet. Wondering if I can do an in-place upgrade... or whether I migrate off and build new...

    Chris
     
    #2
  3. Jannis Jacobsen

    Joined:
    Mar 19, 2016
    Messages:
    153
    Likes Received:
    19
Is this something I should look into for my home lab?
    Will it give better performance if I don't have SSDs etc. in the storage setup?

    Edit: found some more information, and I don't have enough ESXi servers at home to use this :)
     
    #3
    Last edited: Mar 30, 2016
  4. dswartz

    dswartz Member

    Joined:
    Jul 14, 2011
    Messages:
    285
    Likes Received:
    21
I don't believe all N nodes (minimum 3) need to be ESXi. Do you have a third host you could throw CentOS or whatever on?
     
    #4
  5. Jannis Jacobsen

    Joined:
    Mar 19, 2016
    Messages:
    153
    Likes Received:
    19
    That could probably be a solution.
    Need to look into this :)

    -Jannis
     
    #5
  6. cesmith9999

    cesmith9999 Well-Known Member

    Joined:
    Mar 26, 2013
    Messages:
    789
    Likes Received:
    255
There is no required association between your compute and storage nodes. You could have 5 hosts providing storage and use 2 of those as ESXi/Hyper-V or other compute nodes; ScaleIO does not care.
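    As a quick sketch of that separation (the host names and role layout below are made up for illustration, not anything from this thread): every host running the SDS component contributes its disks to the pool, while only the hosts that actually run VMs need the SDC client, and the two sets can overlap however you like.

    Code:
    # Hypothetical 5-host layout: every host contributes its disks through the
    # SDS (storage server) role, but only two of them also run a hypervisor and
    # therefore need the SDC (storage client) role to consume ScaleIO volumes.
    cluster = {
        "host1": {"roles": ["SDS", "SDC"], "hypervisor": "ESXi"},
        "host2": {"roles": ["SDS", "SDC"], "hypervisor": "Hyper-V"},
        "host3": {"roles": ["SDS"], "hypervisor": None},
        "host4": {"roles": ["SDS"], "hypervisor": None},
        "host5": {"roles": ["SDS"], "hypervisor": None},
    }

    storage_hosts = [h for h, cfg in cluster.items() if "SDS" in cfg["roles"]]
    compute_hosts = [h for h, cfg in cluster.items() if cfg["hypervisor"]]
    print("storage hosts:", storage_hosts)   # all five contribute capacity
    print("compute hosts:", compute_hosts)   # only two also run VMs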

    Chris
     
    #6
  7. badskater

    badskater Active Member

    Joined:
    May 8, 2013
    Messages:
    111
    Likes Received:
    41
I just deployed it in the lab at work. (Gave it some RDMs from the NetApp, VNX and Dell storage arrays just to test.) For now, it looks really good. (We don't have nodes with internal storage; we're full of blades, after all...)
     
    #7
  8. stupidcomputers

    stupidcomputers New Member

    Joined:
    May 27, 2013
    Messages:
    18
    Likes Received:
    19
Followed the upgrade guide to go from 1.32.2 to 2.0 and everything worked as expected with zero downtime. I run this on a 3-node VMware vSphere 6 Update 2 cluster in my home lab. The hardest part was locating the correct older 1.32.2 software packages to load into the gateway server during the upgrade process. The steps went something like:

    1. Unregister the old ScaleIO vCenter plugin
    2. Register the new plugin
    3. Change the option within the new ScaleIO plugin for insecure mode and re-register
    4. Upgrade required packages on the SDS nodes (Java and OpenSSL)
    5. Load the new ScaleIO 2.0 packages onto the gateway
    6. Load the old 1.32.2 packages onto the gateway
    7. Deploy the SDS upgrade from the gateway and reboot nodes one at a time (see the sketch after this list for checking cluster state between reboots)
    8. Upgrade other required packages
    9. Upgrade the SDC vSphere packages
    10. Enable secure communications
    11. Manually deploy the SSD cache packages
    12. Upgrade the GUI tool
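
    For step 7, a rough sketch of how one might check that the SDS nodes have settled before rebooting the next one, by polling the ScaleIO gateway's REST API from a workstation. The gateway address, credentials, endpoint paths and the sdsState field/value here are assumptions from memory of the 2.0 API, so verify them against the REST API guide before relying on this:

    Code:
    import time
    import requests

    GATEWAY = "https://192.168.1.50"   # assumed gateway address for this sketch
    USER, PASSWORD = "admin", "password"

    # The 2.0 gateway returns a token from /api/login; later calls pass it as
    # the basic-auth password. Endpoint and field names are from memory.
    token = requests.get(f"{GATEWAY}/api/login",
                         auth=(USER, PASSWORD), verify=False).json()

    def sds_states():
        """Return (name, state) for every SDS as reported by the gateway."""
        resp = requests.get(f"{GATEWAY}/api/types/Sds/instances",
                            auth=(USER, token), verify=False)
        resp.raise_for_status()
        return [(sds["name"], sds.get("sdsState")) for sds in resp.json()]

    # Poll until every SDS reports a normal state before touching the next node.
    while any(state != "Normal" for _, state in sds_states()):
        print("waiting for SDS nodes to settle:", sds_states())
        time.sleep(30)
    print("all SDS nodes look healthy, safe to reboot the next one")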

The single best benefit I can find for a tinkerer like me is the planned maintenance mode. Rebooting an SDS node while it is in maintenance mode will not cause an immediate rebuild. The new integrated SSD caching really seemed to improve my array's responsiveness as well.
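
    For anyone scripting their patching, a minimal sketch of wrapping that maintenance-mode workflow from the MDM: put the SDS into maintenance mode, do the reboot or patching, then take it back out. The scli switch names and the SDS name below are assumptions rather than anything confirmed in this thread, so check them against scli --help on your version first:

    Code:
    import subprocess

    SDS_NAME = "sds-node1"  # hypothetical SDS name

    def scli(*args):
        """Echo and run an scli command on the MDM node."""
        cmd = ["scli", *args]
        print("running:", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Switch names are assumptions -- confirm with scli --help first.
    scli("--enter_maintenance_mode", "--sds_name", SDS_NAME)
    # ... reboot or patch the node here ...
    scli("--exit_maintenance_mode", "--sds_name", SDS_NAME)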

    Steve
     
    #8
    Last edited: Apr 2, 2016
    gigatexal and Patrick like this.
  9. spazoid

    spazoid Member

    Joined:
    Apr 26, 2011
    Messages:
    87
    Likes Received:
    9
    @stupidcomputers

    Do you have some more information on your setup (maybe including benchmarks)?
     
    #9
  10. stupidcomputers

    stupidcomputers New Member

    Joined:
    May 27, 2013
    Messages:
    18
    Likes Received:
    19
    Here is some info:
    Node1: 1x Xeon X5650, X8DTN+, Dell H310 flashed to IT mode, 32GB, 8x 7.2K SATA, 1x 250GB 850 EVO, ConnectX-2 10GbE
    Node2: 1x Xeon L5630, X8DTN+, Dell H310 flashed to IT mode, 32GB, 8x 7.2K SATA, 1x 250GB 850 EVO, ConnectX-2 10GbE
    Node3: 1x Xeon L5630, X8DTN+, Dell H200 flashed to IT mode, 32GB, 8x 7.2K SATA, 1x 200GB SAS OCZ, ConnectX-2 10GbE

    SATA drives are random sizes between 1TB and 5TB. Using 3x Supermicro CSE-826 chassis with 6Gb SAS2 backplanes.

    Recently added a Dell 6224 switch with 2x dual XFP 10Gb modules. Averages 600 watts at the wall.


Benchmarks: read latency as reported by VMware was under 20ms during testing; write latency spiked to 40ms+.

    Almost 5,000 IOPS! (screenshot)
    866 MB/sec (screenshot)

    ATTO results running a single test: (screenshot)

    ATTO results running 2 tests simultaneously: (screenshot)
     
    #10
    Last edited: Apr 3, 2016
    NME, gigatexal, Chuntzu and 3 others like this.
  11. zeynel

    zeynel Active Member

    Joined:
    Nov 4, 2015
    Messages:
    240
    Likes Received:
    41
@stupidcomputers, thanks for sharing. I will test it also; ScaleIO seems very cool.
     
    #11
  12. Jake Sullivan

    Jake Sullivan New Member

    Joined:
    Oct 9, 2015
    Messages:
    16
    Likes Received:
    23
My 8-node cluster (DR site) went up to 2.0 three weeks ago. The original configuration had some issues with the LIA, so I ended up having to rebuild that portion. Otherwise, the install went smoothly with no downtime. Anyone else digging the new ease-of-use changes in the GUI (volume resize, etc.)? I saw maintenance mode mentioned earlier - agreed 100%.
     
    #12
    Chuntzu likes this.