Oneimage conversion bug with Ceph

Hello

I am testing a migration from XenServer to OpenNebula via VMware.
To do this I migrated the XenServer VM to VMware, installed the virtio drivers,
and copied the VMDK to an NFS server for import into the Ceph storage of OpenNebula.

Then I converted the VMDK to qcow2:
qemu-img convert -f vmdk -O qcow2 VM-File-flat.vmdk vm-file.qcow2

The file is now in qcow2 format:
qemu-img info vm-file.qcow2
image: vm-file.qcow2
file format: qcow2
virtual size: 60G (64427655168 bytes)
disk size: 35G
cluster_size: 65536
Format specific information:
compat: 1.1
lazy refcounts: false

Then I imported the file into the datastore using oneimage create:
oneimage create --name "vm-01-system" --path /mnt/transfer/vm-file.qcow2 --driver qcow2 --type OS --datastore cephImages --persistent

But when I try to start a VM from a template, this error occurs: Image is not in qcow2 format.
That puzzled me, so I exported the disk image from the RBD storage directly into an image file.
The image has now become raw format:
qemu-img info testexport.img
image: testexport.img
file format: raw
virtual size: 60G (64427655168 bytes)
disk size: 39G

Why? I explicitly defined qcow2 as the storage format.
Why does oneimage convert the image back to raw?

Update:
If I subsequently set the driver to "raw" in the template and the image, the VM starts.
But I still need the qcow2 functionality.
The big question is: why does oneimage convert the existing qcow2 image back to raw when it is imported into the datastore?

I am not a ceph expert but…

QCOW2 is a file format. Ceph RBD is a block device format. I am just curious what QCOW2 functionality you are expecting to work when it is exposed to the VM as a ceph block device?
Also, did you read the following note from the "QEMU and Block Devices" page of the Ceph documentation?

This happens because the Ceph driver converts the image to raw during the import.
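As a rough sketch of what that import amounts to (the pool name "one" and the image name "one-42" are made up for illustration):

```shell
# Roughly what the Ceph datastore driver does on import: write the source
# image as raw data into an RBD volume, whatever the source format was.
qemu-img convert -f qcow2 -O raw /mnt/transfer/vm-file.qcow2 rbd:one/one-42

# The data now lives as a raw RBD block device; exporting it again will
# therefore yield a raw image, exactly as you observed.
rbd info one/one-42
```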

Hope this helps.

Best Regards,
Anton Todorov


Hey

Thanks for the clarification. I did some tests and imports. My main problem was that oneimage create with driver raw always gave me errors such as "cannot identify header", "incompatible header", etc.
After converting it twice to raw, it could be imported directly as raw.
But one problem remains: if it always converts images to raw, why does it set the driver attribute in the image to qcow2?
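In other words, something like this worked (the raw file name is just an example):

```shell
# Convert the qcow2 image to raw first, then import it with --driver raw,
# so that what is stored in Ceph matches what the image says it is.
qemu-img convert -f qcow2 -O raw /mnt/transfer/vm-file.qcow2 /mnt/transfer/vm-file.raw
oneimage create --name "vm-01-system" --path /mnt/transfer/vm-file.raw \
  --driver raw --type OS --datastore cephImages --persistent
```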

If it is always raw, that opens a big problem: the raw format does not seem to support internal snapshots and just gives this error message:

[Z0][VMM][D]: Message received: LOG I 3 error: unsupported configuration: internal snapshot for disk vda unsupported for storage type raw
[Z0][VMM][D]: Message received: LOG E 3 Could not create snapshot for domain one-3.
[Z0][VMM][D]: Message received: SNAPSHOTCREATE FAILURE 3 Could not create snapshot for domain one-3.

From my point of view, the qcow2 functionality is therefore absolutely required, because of the snapshots.
The users and the backup system must be able to create snapshots of the VMs through QEMU, so that the daemons and the filesystem inside the VM are told (via the guest agent) to flush their data before the snapshot is taken. Snapshots made outside of QEMU (via onevm disk-saveas or rbd export) do not inform the VM about the snapshot, which almost certainly leads to a broken filesystem within the snapshot.
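To sketch what I mean, a snapshot taken on the Ceph side would have to coordinate with the guest agent, roughly like this (the domain name "one-3", the RBD image "one/one-3-disk-0", and the snapshot name are examples):

```shell
# Freeze the guest filesystems via qemu-guest-agent so pending writes are
# flushed, take the snapshot on the Ceph side, then thaw the guest.
virsh domfsfreeze one-3
rbd snap create one/one-3-disk-0@pre-backup
virsh domfsthaw one-3
```

This requires the qemu-guest-agent to be installed and running inside the VM.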

Another major weakness is that users cannot create snapshots themselves.

Are there alternative solutions in OpenNebula for the Ceph/RBD snapshot problem?

By the way, Proxmox handles RBD, QEMU and snapshots without any problems.