VSAN 2 Node Cluster

Discussion in 'VMware, VirtualBox, Citrix' started by ecosse, Aug 8, 2019.

  1. ecosse

    ecosse Active Member

    Joined:
    Jul 2, 2013
    Messages:
    356
    Likes Received:
    59
    Wondering if anyone is running it on 6.7 U2 - I really want dedupe, but the more I read about it the more it sounds like an over-complex pile of whatever - just as it did the last time I looked at it.

    What I'm really interested in is: if I lose a host for a while, does it cope, and is it a pain to bring the host back online and get back to full resilience? It looked like a pain in the past - is that still the case?
     
    #1
  2. Rand__

    Rand__ Well-Known Member

    Joined:
    Mar 6, 2014
    Messages:
    3,591
    Likes Received:
    543
    Have not used 2-node in ages, and wouldn't want to either - what's the use case?
    Of course it's not really 2 nodes at all, since you need a witness - so in your example, does the witness stay up or not?
    If it's still there then it's not too bad, since you still have 2 votes out of 3, but if it's gone/down/rebooting you are in trouble (depending on the specific config - but if you run some of the essential VMs (PDC, vCenter, DNS) on that vSAN cluster then...)
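    A rough back-of-the-envelope sketch of that vote math (just a toy model, not vSAN code - assuming one vote per data replica plus one for the witness, and that an object needs a strict majority to stay accessible):
    Code:
    # Toy model of the 2-node + witness quorum.
    # One vote per data-node replica, one for the witness; an object stays
    # accessible only while a majority (2 of 3) of those votes is reachable.

    def object_accessible(node_a_up, node_b_up, witness_up):
        votes_up = sum([node_a_up, node_b_up, witness_up])
        return votes_up >= 2

    # One data host down, witness still up -> 2 of 3 votes, data stays accessible.
    print(object_accessible(True, False, True))    # True

    # Same host down while the witness is rebooting -> 1 of 3, object inaccessible.
    print(object_accessible(True, False, False))   # False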
     
    #2
  3. ecosse

    ecosse Active Member

    Joined:
    Jul 2, 2013
    Messages:
    356
    Likes Received:
    59
    Thanks. I realise the "node" count isn't really a count of two. I just want to reduce my electricity bill. The trouble with the 2-node cluster is that unless you pick the tiny witness appliance, the witness is pretty large - at least for a home lab. At that size I may as well go to 3 nodes.

    Anyway, for now I'm going to stick with my tried and trusted LeftHand setup. I can run the witness on a G7 MicroServer - it doesn't need a Cray!
     
    #3
  4. Ojref1

    Ojref1 New Member

    Joined:
    Oct 8, 2018
    Messages:
    12
    Likes Received:
    1
    I wouldn't use VSAN to store lolcat pictures on, but that's my humble opinion.
     
    #4
  5. dswartz

    dswartz Active Member

    Joined:
    Jul 14, 2011
    Messages:
    377
    Likes Received:
    28
    Are you wedded to VMware vSAN specifically? I have a 2-node 6.7 cluster and am doing storage with the free StarWind VSAN appliance. Works well, and for my 1TB NVMe mirror it takes about half an hour to do a full sync. Much less if the downtime is only a few minutes (it has some kind of fast-sync mode if there isn't a lot of data being written after one side goes down...)
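    Quick back-of-the-envelope check on that full-sync figure (assumed round numbers taken from the post above, nothing re-measured):
    Code:
    # 1 TB mirrored in roughly half an hour implies this effective sync rate.

    capacity_bytes = 1 * 1000**4        # 1 TB
    sync_seconds   = 30 * 60            # ~half an hour for a full sync

    throughput_MBps = capacity_bytes / sync_seconds / 1e6
    throughput_Gbps = throughput_MBps * 8 / 1000
    print(f"~{throughput_MBps:.0f} MB/s, ~{throughput_Gbps:.1f} Gbit/s over the sync link")
    # -> ~556 MB/s, ~4.4 Gbit/s - well within a 10 Gb (or faster) replication link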
     
    #5
  6. Rand__

    Rand__ Well-Known Member

    Joined:
    Mar 6, 2014
    Messages:
    3,591
    Likes Received:
    543
    I am reasonably happy with my 4-node cluster - at least for stability, not talking about speed here ;)
     
    #6
  7. ecosse

    ecosse Active Member

    Joined:
    Jul 2, 2013
    Messages:
    356
    Likes Received:
    59
    Not at all - I use an HPE StoreVirtual VSA at the moment, but I wanted something that does dedupe. I'm reluctant to go down the StarWind route because of the lack of a GUI - do you find managing it with PowerShell scripts sufficient?
    Plus, to be honest, the setup instructions were a bit amateurish - lots of conflicting advice on the forums - and I didn't have time to sort the right from the wrong (a bit like vSAN: how many acronyms do you need in one design?!)
     
    #7
  8. dswartz

    dswartz Active Member

    Joined:
    Jul 14, 2011
    Messages:
    377
    Likes Received:
    28
    You can use the GUI if you get an NFR license. I did, since this is a home lab.
     
    #8
  9. Rand__

    Rand__ Well-Known Member

    Joined:
    Mar 6, 2014
    Messages:
    3,591
    Likes Received:
    543
    They only hand that out once though, for a period of one year, don't they?
    At least that's what I was told when I requested one again after not using mine during that year ;)
     
    #9
  10. dswartz

    dswartz Active Member

    Joined:
    Jul 14, 2011
    Messages:
    377
    Likes Received:
    28
    I wasn't aware they didn't renew them. That said, my use case is a fairly simple one: an HA datastore for a vSphere cluster, so once I get it set up I don't really care if I can't tweak the config anymore...
     
    #10
  11. Rand__

    Rand__ Well-Known Member

    Joined:
    Mar 6, 2014
    Messages:
    3,591
    Likes Received:
    543
    That's what I wanted it for too - but similar to @ecosse 's experience, I found the manuals horrible/inconsistent/outdated.
    I have tried several times but never really got into the mindset.
    Of course some of those attempts were with the (new back then) Linux version, so some of my documentation problems might stem from that, but I think even when I got it running the performance was not as hoped.

    But I digress, sorry - how is the VSA performance-wise? Or is stability your primary concern?
     
    #11
  12. dswartz

    dswartz Active Member

    Joined:
    Jul 14, 2011
    Messages:
    377
    Likes Received:
    28
    What was your setup? e.g. what devices on both sides, and what speed was the sync link? My use case is primarily stability (and also being able to reboot one side for patches etc. without having to Storage vMotion everything off the datastore). I have two NVMe devices on each side, as ReFS mirrors. Write performance is about half of read performance, but then again I have a 50Gb sync link :) AFAIR StarWind recommends at least a 10Gb link for synchronization.
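    For what it's worth, a toy sketch of why the sync link matters so much for writes with a synchronous mirror (illustrative numbers only, not measured from this setup):
    Code:
    # A write is only acknowledged once it has landed on both sides, so sustained
    # write throughput is roughly capped by the slower of the local device and
    # the replication link (numbers below are illustrative assumptions).

    def mirrored_write_cap(local_write_MBps, link_Gbps, link_efficiency=0.9):
        link_MBps = link_Gbps * 1000 / 8 * link_efficiency   # rough usable bandwidth
        return min(local_write_MBps, link_MBps)

    print(mirrored_write_cap(2000, 10))   # ~1125 MB/s: a 10 Gb link throttles a fast NVMe mirror
    print(mirrored_write_cap(2000, 50))   # 2000 MB/s: at 50 Gb the local device is the limit again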
     
    #12
  13. Rand__

    Rand__ Well-Known Member

    Joined:
    Mar 6, 2014
    Messages:
    3,591
    Likes Received:
    543
    I think I ran them virtualised on a single host, and if not, I have 40/56Gb links.
    Can't remember all the details, but I used either a bunch of S3700s or a pair of Optane 900p's - more likely the latter for a PoC.
     
    #13