Shared mode for the Ceph-based system datastores

Hi Anton,

Heh, I needed to execute these actions locally on the frontend side, so I just overrode them as local actions for the kvm vm_mad driver:

 VM_MAD = [
     NAME           = "kvm",
-    ARGUMENTS      = "-t 15 -r 0 kvm",
+    ARGUMENTS      = "-t 15 -r 0 kvm -l save=save_linstor_un,restore=restore_linstor_un",
 ]
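
For this override to take effect, the save_linstor_un and restore_linstor_un scripts referenced by the -l flag have to be available next to the standard kvm VMM actions on the frontend, presumably somewhere like:

 /var/lib/one/remotes/vmm/kvm/save_linstor_un
 /var/lib/one/remotes/vmm/kvm/restore_linstor_un

since the -l flag makes the listed actions run locally instead of on the hypervisor host.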

Otherwise, I don't like this idea much; it looks like the vmm/save.<TM_MAD> and vmm/restore.<TM_MAD> actions are more of a concern for TM drivers than for VMM ones. TM driver actions always execute locally on the frontend, so in my opinion the vmm/save.<TM_MAD> and vmm/restore.<TM_MAD> actions should also be executed locally.
This could be done by modifying the one_vmm_exec.rb driver executor, like we decided here.

This step would allow these actions to be handled by the TM_MAD developer without adding extra patches to the standard VMM driver.
This is a slightly breaking change, so it's up for discussion.

To be honest, I don't like the idea of storing the whole VM directory on the storage system; filesystems are usually not as reliable as plain block devices, and they can hang on mount/unmount operations, so it's better to avoid them. In my opinion, all the information stored there (symlinks and context CDs) can be recalculated automatically during VM deploy.

The only thing that needs to be saved is the checkpoint file, and it can simply be uploaded as a block device. Anyway, copying the checkpoint file to one location, uploading it into storage, and doing the same in reverse to restore it is quite annoying. I want to solve this somehow.

I just checked: virsh can save the checkpoint directly onto a block device. Restore is more problematic, but still possible:

virsh -c qemu:///system restore one-58 < <(cat /dev/<device>)
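
The save side would presumably be the symmetric command, writing the state straight to the device (the device path is a placeholder, same as above):

virsh -c qemu:///system save one-58 /dev/<device>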

I would like to develop this idea further, instead of just saving the checkpoint into a filesystem stored on a shared block device. :slight_smile: