Proxmox VE - qcow2 to Ceph storage workaround


Patrick

Administrator
Staff member
Dec 21, 2010
Today I wanted to get one of the Zabbix pre-made images up and running on the new Proxmox VE 4.0 cluster, since the primary cluster storage (and what makes it very easy to get an HA VM up and running) is Ceph.

If you missed the main site Proxmox VE and Ceph post, feel free to check that out.

The Zabbix image for KVM comes in qcow2 format. Proxmox VE unfortunately lacks the really slick image import you get with Hyper-V or ESXi. So here is what I did:

1. Create a new blank VM with a qcow2 disk format.
2. Download the Zabbix image from the SourceForge direct link and overwrite the standard image.

Proxmox download Zabbix qcow2 and overwrite VM.JPG

3. Go to the VM's Hardware tab, use the Move Disk feature, and select the Ceph pool. It was off and running!

Proxmox Migrating qcow2 image to Ceph.JPG

4. After the copy was complete, there was a final cache selection screen before it was done:
Proxmox qcow2 image to Ceph migration final step.JPG
5. At this point everything boots normally. Hello Zabbix
Proxmox boot off of Ceph storage - Hello Zabbix.JPG


The big trick to this whole thing was to create the VM on local storage, then simply migrate its disk to Ceph. I also tried migrating the VM to a different node after the move to Ceph, before booting it. The migration took all of 1-2 seconds and the VM booted perfectly.
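For reference, roughly the same workflow can be done from the Proxmox shell. This is a hedged sketch, not the exact steps from the post: the VM ID (100), storage names (`local`, `ceph-vm`), and download placeholder are all illustrative, and `qm move_disk` is the CLI counterpart of the GUI "Move Disk" button (newer Proxmox releases also offer `qm importdisk`, which skips the overwrite trick entirely):

```shell
# 1. Create a blank VM with a small qcow2 disk on local ("dir") storage
qm create 100 --name zabbix --memory 2048 --net0 virtio,bridge=vmbr0
qm set 100 --virtio0 local:10,format=qcow2

# 2. Overwrite the blank disk with the downloaded appliance image
#    (<zabbix-qcow2-url> is a placeholder for the SourceForge direct link;
#     the path follows the default dir-storage layout)
wget -O /tmp/zabbix.qcow2 "<zabbix-qcow2-url>"
mv /tmp/zabbix.qcow2 /var/lib/vz/images/100/vm-100-disk-1.qcow2

# 3. Move the disk to the Ceph pool (GUI "Move Disk" equivalent)
qm move_disk 100 virtio0 ceph-vm
```

These commands assume a Proxmox VE host, so they are a sketch of the procedure rather than something to paste blindly.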

I hope that helps someone.
 

Joe Baker

New Member
Apr 3, 2017
Maybe it's because I created my system on ZFS from the installer, but I don't seem to be able to specify what file type will back my new virtual hard drive in Proxmox 5.0b1. I'm downloading the appliance install ISO image of Zabbix 3.2 instead to see how that goes.
 

PigLover

Moderator
Jan 26, 2011
"ZFS" storage on Proxmox only allows raw storage (same as Ceph). But you can create a 'filesystem' storage group on the ZFS filesystem where you should be able to select qcow.

Sent from my SM-G950U using Tapatalk
 

NISMO1968

[ ... ]
Oct 19, 2013
Ceph uses a compatible file system at the back end; there's no raw storage AFAIK.

Hard Disk and File System Recommendations — Ceph Documentation

"Ceph OSD Daemons rely heavily upon the stability and performance of the underlying filesystem."

"ZFS" storage on Proxmox only allows raw storage (same as Ceph). But you can create a 'filesystem' storage group on the ZFS filesystem where you should be able to select qcow.

Sent from my SM-G950U using Tapatalk
 

PigLover

Moderator
Jan 26, 2011
My comment wasn't about how Ceph works - it was about how Proxmox uses Ceph.

When using Ceph, Proxmox uses a simple RBD (RADOS Block Device, a simulated block device) for VM images. When using ZFS, it uses a ZVOL (also a simulated block device).

In both cases it doesn't give you an option for "image format" because the VM image is stored in what KVM sees as a "raw" format image, and Proxmox simply maps it onto the simulated block device (not surprisingly, this is the same way you would map a VM image onto a physical hard disk).
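As a concrete illustration (these entries are assumptions for the sake of example, not pulled from the thread), the corresponding storage definitions in `/etc/pve/storage.cfg` would look roughly like this. Both the `rbd` and `zfspool` backend types hand Proxmox a block device, so neither takes an image-format option, which is why the GUI offers none:

```
rbd: ceph-vm
        monhost 10.0.0.1 10.0.0.2 10.0.0.3
        pool rbd
        content images

zfspool: local-zfs
        pool rpool/data
        content images
        sparse 1
```

A "dir" storage entry, by contrast, would accept qcow2 because it stores image files on a filesystem rather than mapping a block device.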
 