Proxmox VE 4.0 Initial Installation Checklist


Markus

Member
Oct 25, 2015
78
19
8
By changing the apt configuration you just change the repository from which the servers receive their packages.
The enterprise repository is the better-tested one (I think), and for it you need a subscription.

The web interface popup just checks whether you have a subscription. If not, it informs you about this situation (and probably motivates you to buy one...).

The result of the "checklist" is:
You receive all updates of the community version, and you still have to dismiss the popup.
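
For reference, a rough sketch of what that apt change looks like on Proxmox VE 4.x / Debian Jessie (the file name and repository line below are the usual ones, but please double-check them against the Proxmox wiki for your version):
Code:
root@bob:~# # disable the enterprise repository (it needs a subscription)
root@bob:~# sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list
root@bob:~# # add the no-subscription repository instead
root@bob:~# echo "deb http://download.proxmox.com/debian jessie pve-no-subscription" > /etc/apt/sources.list.d/pve-no-subscription.list
root@bob:~# apt-get update && apt-get dist-upgrade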

Regards
Markus
 

JimPhreak

Active Member
Oct 10, 2013
553
55
28
Markus said:
By changing the apt configuration you just change the repository from which the servers receive their packages.
The enterprise repository is the better-tested one (I think), and for it you need a subscription.

The web interface popup just checks whether you have a subscription. If not, it informs you about this situation (and probably motivates you to buy one...).

The result of the "checklist" is:
You receive all updates of the community version, and you still have to dismiss the popup.

Regards
Markus
No - his step 1 does not address the issue of the popup. The OP is wrong in this case - that change is required for updates to work at all without a subscription, but still won't remove the popup saying you don't have a subscription.
Got it! Thanks for clearing that up.
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,513
5,801
113
Yes, sorry - I re-read it and that is a bit confusing. I will edit.
 

Markus

Member
Oct 25, 2015
78
19
8
Just because I ran into an issue while trying to use my own NTP server...

For correct time sync with your own NTP server:
1. Adjust timesyncd.conf to your needs
Code:
root@bob:~# cat /etc/systemd/timesyncd.conf
#  This file is part of systemd.
#
#  systemd is free software; you can redistribute it and/or modify it
#  under the terms of the GNU Lesser General Public License as published by
#  the Free Software Foundation; either version 2.1 of the License, or
#  (at your option) any later version.
#
# See timesyncd.conf(5) for details

[Time]
#Servers=0.debian.pool.ntp.org 1.debian.pool.ntp.org 2.debian.pool.ntp.org 3.debian.pool.ntp.org
Servers=10.0.11.1
2. Restart timesyncd
Code:
root@bob:~# systemctl restart systemd-timesyncd.service
3. Make sure everything is working ("NTP synchronized" must say yes)
Code:
root@bob:~# timedatectl status
      Local time: Thu 2016-05-26 11:42:28 CEST
  Universal time: Thu 2016-05-26 09:42:28 UTC
        RTC time: Thu 2016-05-26 09:42:28
       Time zone: Europe/Berlin (CEST, +0200)
     NTP enabled: yes
NTP synchronized: yes
 RTC in local TZ: no
      DST active: yes
 Last DST change: DST began at
                  Sun 2016-03-27 01:59:59 CET
                  Sun 2016-03-27 03:00:00 CEST
 Next DST change: DST ends (the clock jumps one hour backwards) at
                  Sun 2016-10-30 02:59:59 CEST
                  Sun 2016-10-30 02:00:00 CET
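If "NTP synchronized" stays at no, it can help to look at what timesyncd is actually doing - the service status and its recent log lines usually show which server it is trying to reach (the exact output varies a bit between systemd versions):
Code:
root@bob:~# systemctl status systemd-timesyncd.service
root@bob:~# journalctl -u systemd-timesyncd.service -n 20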
Regards
Markus
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,640
2,057
113
I think I may have asked this before/elsewhere...

But... how does ProxMox handle shared storage with ZFS and onboard SATA ports?
I.e.: a 4-node chassis with each node having 1x SSD and 1x HDD, and Proxmox installed on a SATADOM or an SSD tucked someplace.

It would be nice to see a Proxmox storage 'shootout' of sorts :)
ZFS on Linux, Ceph, Gluster, etc.
Briefly touch on pros/cons and basic designs.
 

MikeP

New Member
Feb 20, 2016
4
2
3
54
T_Minus, I'm not sure what you are asking... ProxMox is a bare-metal hypervisor, so you'd install it using ZFS directly on the metal. Then provision virtual disks on top of that.
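
For illustration, once installed, that ZFS-backed VM storage ends up as a simple zfspool entry in the storage configuration - roughly like the sketch below (the pool and storage names are placeholders; check /etc/pve/storage.cfg on your own box):
Code:
# /etc/pve/storage.cfg (example entry, names are placeholders)
zfspool: local-zfs
        pool rpool/data
        content images,rootdir
        sparse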

As for ZFS vs. Ceph, Gluster, etc...

I have found that it is like this:
ZFS - easy to do, reliable in a single node and great for small to medium setups.
DRBD - more complex, reliable across nodes for HA.
Ceph, Gluster - very complex, require several nodes. Also for HA. AWS storage is based on this type of tech. Not for my home server.

Thus, I've been told that if you have to ask, you shouldn't go for the complex ones and should stick with ZFS. (And only use ZFS mirrors - but that's a different story.)

The ProxMox forums are a great resource.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,640
2,057
113
@MikeP sorry for the vague or improperly worded question...

I'm trying to get a feel for how ZFS (the plugin?) and Proxmox work together in a cluster. Is the created virtual disk shared within the cluster or only on the single host? I apologize if this is in the docs; I admit I've only read them once, and it was probably close to a year ago now.
 

PigLover

Moderator
Jan 26, 2011
3,186
1,545
113
ZFS only provides local storage. The ZFS plug-in works well, but the storage only exists on the local host and is not shared across the cluster nodes.

Sent from my SM-G925V using Tapatalk
 

MikeP

New Member
Feb 20, 2016
4
2
3
54
The ProxMox docs and forum should be of more help than me...

If I understand correctly...

As PigLover states, ZFS is for local storage, like EXT (comparable to NTFS on Winderz) on the raw disks when installing on the bare metal. By default, it is only on one host.
There are ways to share it using NFS or SMB (there's a rough sketch of the NFS route at the end of this post). In ProxMox, they bundle ZFS on the install CD, so while it is called a plugin, it is there by default, and is the default when installing, so you don't have to plug anything in.
The others (Ceph, Gluster) compete with products like Nexenta, NetApp, and EMC storage. Very different things.


How many physical boxes are you looking to put in your ProxMox cluster?
If it is 1 or 2, then your best bet is probably unshared ZFS. Consider each node a completely separate thing which you can manage from one interface. (I have 3 in my test cluster, but often only one running. I set up the cluster for easy management, but told it to only expect 1 node to be working at any given time, so it doesn't 'fail' on me.)
If you have 3, then consider one for storage (using ZFS) and the other two as a cluster with all of the storage on the storage node.
If you have 4, then either duplicate the storage with DRBD or make another node.
If you have 5, then you can do HA on 3 nodes, and DRBD on 2 nodes. If you have 6+ then you are beyond my scope, and Ceph or Gluster may make sense.
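
A rough sketch of the NFS route mentioned above, assuming one node exports a ZFS dataset and the rest of the cluster mounts it as shared storage (the pool name, dataset, IP, and storage ID are placeholders, and the storage node needs the NFS server packages installed):
Code:
# on the storage node: create a dataset and export it via ZFS's sharenfs property
root@storage:~# zfs create tank/vmdata
root@storage:~# zfs set sharenfs=on tank/vmdata

# on any cluster node: add it as shared NFS storage for VM images
root@node1:~# pvesm add nfs shared-vmdata --server 10.0.11.10 --export /tank/vmdata --content images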
 
  • Like
Reactions: T_Minus

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,640
2,057
113
@PigLover & @MikeP thanks, that makes a lot more sense. I was perplexed about ZFS and pooled/shared storage solutions, but my memory, or my misreading, likely contributed to that.

Right now I'm looking at setting up a low-cost solution to test/play with Proxmox & different storage options... I would be using 2x 4-node chassis, currently loaded with 2x AMD 8-core CPUs in each node. I'm trying to work within the limit of 2 drives per node vs. other shared storage.

Ultimate goal for this project:
Performance isn't the goal in this build, resilience is. I'd like to be able to have someone on-site just hot-swap a dead drive, do some remote management to rebuild or self-heal, and keep on chugging like nothing happened. I'm not sure I can accomplish this with the 2-drive-per-node limit with any of the storage systems, though.
 

_alex

Active Member
Jan 28, 2016
866
97
28
Bavaria / Germany
You may want to have a closer look at Sheepdog; it should work with two drives per node and a decent number of nodes.

Also, ZFS and the ZFS plugin are different things.

The ZFS plugin can handle different iSCSI targets backed by ZFS - it allows, for example, ZFS snapshots via SSH, creation/deletion of ZFS volumes, and configuration of LUNs on the target, etc.

Basically shared storage, exported via iSCSI and backed by ZFS, where the plugin lets you use some ZFS features that would not be available on pure iSCSI storage.

Local ZFS is just ZFS: local storage for the node it resides on.
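
For illustration, a rough sketch of what such a ZFS-over-iSCSI entry can look like in /etc/pve/storage.cfg (the portal IP, target IQN, pool name, and provider are placeholders - check the ProxMox storage documentation for the exact options your version supports):
Code:
# /etc/pve/storage.cfg (hypothetical example)
zfs: zfs-over-iscsi
        portal 10.0.11.10
        target iqn.2016-05.org.example:tank
        pool tank
        iscsiprovider iet
        blocksize 4k
        content images
        sparse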