Hi to all.
I’m new to OpenNebula…
Thank you in advance for any response.
When I take a snapshot (from Sunstone) of any VM, OpenNebula pauses the VM and the snapshot takes a long time: say 3 to 7 minutes for a 25 GB qcow2 file (on an NFS filesystem).
At some point I changed a datastore parameter (I don’t remember which) and received this error:
“Error creating new disk snapshot: Cannot perform a live DISKSNAPSHOTCREATE operation”.
So I’m not able to take a live snapshot, and I’m also wondering why a “paused” snapshot takes so much time.
I have followed the info in another thread: How to make a live disk snapshot
Everything seems fine to me (KVM/qcow2).
Where am I wrong?
Thank you all… for your patience and for this incredible piece of software.
Best regards
Mirko
OpenNebula 5.4.6, everything on Debian 9 (nodes, shared storage and Sunstone (HA)). KVM + qcow2 on an NFS share.
I have tried many times and this is what I’ve found: it’s a KVM “problem”.
Live VM snapshots (disk+RAM+status) work correctly. I tried on my desktop, simulating the KVM subsystem (with the virt-manager GUI). The VM is paused while the RAM is copied. If I configure cache=none, writing the RAM to disk is very, very slow. With cache=writeback it’s much faster. To be safe, I also tried a live migration while running dd to disk inside the VM, and it worked well (with writeback). So my problem is KVM-related. Live VM snapshots work correctly now… the VM is paused for 6 seconds for 8 GB of RAM (300 MB used).
But I have other questions…
In my setup, when I try to take a live disk snapshot (disk only, from the Sunstone Storage tab) I get this error:
Sun Apr 8 11:13:11 2018 [Z0][VM][I]: New state is ACTIVE
Sun Apr 8 11:13:11 2018 [Z0][VM][I]: New LCM state is DISK_SNAPSHOT
Sun Apr 8 11:13:11 2018 [Z0][VMM][E]: Error creating new disk snapshot: Cannot perform a live DISKSNAPSHOTCREATE operation
Sun Apr 8 11:13:11 2018 [Z0][VM][I]: New LCM state is RUNNING
Sun Apr 8 11:13:11 2018 [Z0][LCM][E]: Could not take disk snapshot.
Why this error?
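For reference, if I read the stock oned.conf correctly, live disk snapshot support is declared per VMM/TM driver pair via LIVE_DISK_SNAPSHOTS, so it may be worth checking that your combination is listed there (the values below are what I believe the 5.4 defaults to be):

```
# oned.conf: VMM/TM pairs that support live disk snapshots
LIVE_DISK_SNAPSHOTS = "kvm-qcow2 kvm-ceph"
```

As far as I can tell, the disk’s datastore must actually use the matching TM_MAD (e.g. qcow2, not shared) for the kvm-qcow2 pair to apply; otherwise oned refuses the live operation with exactly this error.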
When I take a live VM snapshot (disk+RAM+status) and then migrate, my VM snapshot disappears. But with “qemu-img info disk.0” I still see it. I’ve read in the documentation that this is normal, but it doesn’t seem intuitive. Why this limitation? Why don’t snapshots persist after a VM migration?
If I take 2 live VM snapshots I see 2 snapshots in the qcow2 file: snap-0 and snap-1. If I migrate the VM, these snapshots remain in the qcow2 disk.0 file, but Sunstone shows no snapshots. When I then take a snapshot on the new node, only snap-0 is removed and recreated; snap-1 stays there. Is this normal? It seems like a bug…
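Anyone who wants to reproduce the internal-snapshot part without a full VM can do it on a scratch qcow2 image (paths here are throwaway examples, not my datastore):

```shell
# Create a small scratch qcow2 image
qemu-img create -f qcow2 /tmp/disk0-demo.qcow2 100M

# Take two internal snapshots, like two live VM snapshots would
qemu-img snapshot -c snap-0 /tmp/disk0-demo.qcow2
qemu-img snapshot -c snap-1 /tmp/disk0-demo.qcow2

# List them -- this is what stays inside disk.0 after a migration,
# even when Sunstone shows no snapshots
qemu-img snapshot -l /tmp/disk0-demo.qcow2
```

The same `qemu-img snapshot -l` on the real disk.0 on the node is how I checked that snap-0 and snap-1 were still there.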
Thank you for reading and thank you in advance for any response…
Regards
Mirko
Hi to all.
I’m replying to myself, hoping my experience is useful to someone…
I’ve found a solution to point 2.
In oned.conf, in the VM_MAD section for kvm, I changed KEEP_SNAPSHOTS from “no” to “yes”.
In the default file it’s on line 506.
I don’t know why the default is “no”, but it seems to work.
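For clarity, the section I changed looks roughly like this (trimmed; your other VM_MAD attributes may differ, and oned needs a restart after editing):

```
VM_MAD = [
    NAME           = "kvm",
    EXECUTABLE     = "one_vmm_exec",
    ARGUMENTS      = "-t 15 -r 0 kvm",
    TYPE           = "kvm",
    KEEP_SNAPSHOTS = "yes"
]
```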
That’s a good hint - I changed the cache mode from writethrough to none, increased the VM’s memory from 8 GB to 16 GB and doubled the size of the swap file.
Before these changes the live VM snapshot (HOTPLUG_SNAPSHOT) took about 3-5 minutes. Now (on NVMe SSD storage!) it takes about 55 minutes to complete.
In this scenario a live snapshot doesn’t make much sense… I could just as well make an offline backup of the VM disk, which takes - more or less - the same time to complete.
Regarding the behaviour of OpenNebula and the KEEP_SNAPSHOTS option… that’s a good hint, too.
I think OpenNebula does NOT handle the qemu-img snapshots well: although with the (default) option KEEP_SNAPSHOTS off, OpenNebula should also clean up the “physical” qemu-img snapshots, it does not, so old qemu-img snapshots keep occupying space that is invisible from OpenNebula. An inexperienced OpenNebula admin would never be aware of this issue, i.e. that the image occupies “unused” space.
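To make that space visible, here is a small sketch that lists images still carrying internal snapshots. It assumes a file-based datastore layout like /var/lib/one/datastores/&lt;ds&gt;/&lt;vm&gt;/disk.N (the path is an example, adjust to your setup) and is meant to run on a node:

```shell
# Scan a datastore directory for qcow2 images that still carry
# internal snapshots not shown by OpenNebula.
for img in /var/lib/one/datastores/0/*/disk.*; do
    [ -f "$img" ] || continue
    # "qemu-img snapshot -l" prints nothing for a clean image;
    # otherwise skip its two header lines and keep the entries.
    snaps=$(qemu-img snapshot -l "$img" 2>/dev/null | tail -n +3)
    if [ -n "$snaps" ]; then
        echo "== $img has leftover internal snapshots:"
        echo "$snaps"
    fi
done
```

Leftover snapshots found this way can then be dropped by hand with `qemu-img snapshot -d <TAG> <image>` (safest with the VM powered off).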