VMs won't start after removing node from cluster

Discussion in 'Linux Admins, Storage and Virtualization' started by Albert Yang, Dec 5, 2018.

  1. Albert Yang

    Albert Yang Member

    Joined:
    Oct 26, 2017
    Messages:
    31
    Likes Received:
    0
    Hi,
    I was wondering if someone could lend me a hand. I removed a node from the cluster, and then tried to delete /etc/pve/nodes/<nodename>, but I'm getting "permission denied" and I'm not sure why.

    Now the cluster reports activity blocked.
    I'm not sure if I'm screwed: I've completely lost contact with the 2nd node, so I have no way to get quorum back. Is there a way I can separate this node without losing any VMs? This node is the main one.

    Code:
    root@prometheus:~# pvecm status
    Quorum information
    ------------------
    Date:             Wed Dec  5 18:53:23 2018
    Quorum provider:  corosync_votequorum
    Nodes:            1
    Node ID:          0x00000001
    Ring ID:          1/52
    Quorate:          No
    
    Votequorum information
    ----------------------
    Expected votes:   2
    Highest expected: 2
    Total votes:      1
    Quorum:           2 Activity blocked
    Flags:
    
    Membership information
    ----------------------
        Nodeid      Votes Name
    0x00000001          1 192.168.3.252 (local)
     
    #1
  2. Marsh

    Marsh Moderator

    Joined:
    May 12, 2013
    Messages:
    1,892
    Likes Received:
    856
    How many nodes were in your cluster before this happened?
    How many nodes are left in the cluster after removing one node?

    It looks like you have only 1 node in the cluster now.
    That would be bad if you have only 1 node left.
     
    #2
  3. MiniKnight

    MiniKnight Well-Known Member

    Joined:
    Mar 30, 2012
    Messages:
    2,799
    Likes Received:
    792
    Yeah, it looks like you only have a single node. It's blocking you because a single node can't reach quorum on its own.
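
    If that's the case, the usual way out on the surviving node is to lower the expected vote count. A minimal sketch (run as root; note this only changes what votequorum expects, it doesn't clean up the dead node's config):

    Code:
    # tell votequorum to expect only one vote, so this node becomes quorate again
    pvecm expected 1

    # check the result: "Quorate" should now be "Yes" and /etc/pve writable again
    pvecm status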
     
    #3
  4. Albert Yang

    Albert Yang Member

    Joined:
    Oct 26, 2017
    Messages:
    31
    Likes Received:
    0
    Thanks for the reply,

    So this is what I did:

    I added the IP of each Proxmox node to /etc/hosts by name,

    then on the master node I ran:



    Code:
    pvecm delnode prometheus2
    Then after that I tried to delete:

    Code:
    /etc/pve/nodes/<nodename>
    but I got a permission error, and the node still appeared in the web GUI.
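
    (As I understand it now, that permission error is expected: /etc/pve is the pmxcfs fuse mount, and without quorum it is mounted read-only. A quick way to check, as a sketch:)

    Code:
    # pmxcfs shows up as a fuse mount on /etc/pve
    mount | grep /etc/pve

    # any write fails with "Permission denied" while Quorate is "No"
    touch /etc/pve/test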

    Then I ran this:
    Code:
    pvecm expected 1
    
    but node 2 still appeared, so I ran this:
    Code:
    service pvestatd restart

    After that, node 2 disappeared.

    I'm now able to start the VMs.

    The only thing is that on the Summary page I still see this:

    Cluster: myclustername, Quorate: No

    I guess I can't remove that either.
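
    (From what I've read, the way to make that go away is to separate the node from the cluster entirely, roughly the procedure in the Proxmox admin guide for separating a node without reinstalling. Sketch only, I haven't run this yet, so treat it with care and have backups:)

    Code:
    # stop the cluster stack
    systemctl stop pve-cluster corosync

    # start pmxcfs in local mode so /etc/pve is writable without quorum
    pmxcfs -l

    # remove the corosync configuration
    rm /etc/pve/corosync.conf
    rm -rf /etc/corosync/*

    # restart the cluster filesystem normally
    killall pmxcfs
    systemctl start pve-cluster

    # optionally clean up the leftover directory of the removed node
    rm -rf /etc/pve/nodes/prometheus2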

    On a side note: let's say I have 2 nodes in a cluster, and hypothetically both nodes shut down, but only one of them comes back up. The node that comes back won't start the VMs because it's waiting for quorum, which will never be reached since the other node is dead. So in that case, you have to resort to the same procedure as above?
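
    (I'm assuming the recovery in that scenario would be the same trick on the surviving node, something like this sketch, where 100 is just a made-up VM ID:)

    Code:
    # make the lone survivor quorate again
    pvecm expected 1

    # then start the guests
    qm start 100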
     
    #4
  5. Marsh

    Marsh Moderator

    Joined:
    May 12, 2013
    Messages:
    1,892
    Likes Received:
    856
    The answer is: true.

    On the Proxmox forum, whenever someone asks about running a 2-node cluster, the recommendation is always: do NOT do it.
     
    #5
  6. Albert Yang

    Albert Yang Member

    Joined:
    Oct 26, 2017
    Messages:
    31
    Likes Received:
    0
    Ohh, now I understand why. With 3 nodes, if 1 fails, it keeps working? But let's say 2 fail and only 1 still works: same concept as above?
     
    #6
  7. Marsh

    Marsh Moderator

    Joined:
    May 12, 2013
    Messages:
    1,892
    Likes Received:
    856
    With 1 failed node in a 3-node cluster, you have time to repair the failed node. You even have time for lunch.

    With 2 failed nodes in a 3-node cluster, you don't disturb the remaining functional node; you pray, skip lunch, and repair the failed nodes.
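
    To put numbers on it (assuming the default of one vote per node), it comes down to the majority rule sketched below:

    Code:
    # quorum = floor(total_votes / 2) + 1
    # 3-node cluster: quorum = 2 -> with 1 dead node, 2 votes remain and the cluster keeps working
    # 3-node cluster with 2 dead nodes: 1 vote < 2 -> activity blocked
    # 2-node cluster: quorum = 2 -> any single failure blocks activity
    nodes=3; echo $(( nodes / 2 + 1 ))    # prints 2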
     
    #7
    Albert Yang likes this.
  8. Albert Yang

    Albert Yang Member

    Joined:
    Oct 26, 2017
    Messages:
    31
    Likes Received:
    0
    I guess you're right, the more the better. For now I'm going to run with pve-sync; it's a bit more of a hassle, but I don't depend on another node.
     
    #8