Well, at first, off the top of my head: it should support both shared and ssh, because the shared filesystem mode is probably already in use. It does complicate the driver, but it is doable.
Besides that, for live migration the scripts should be altered:
premigrate - should scp the VM's home folder to the destination
postmigrate/failmigrate - should clean up the source/destination folders accordingly
Normal migration should also scp the VM's home folder, but special handling is needed for the host failure case, when a VM is rescheduled while in the UNKNOWN state.
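The hooks above could be sketched as shell functions for the ssh mode. Everything here is an assumption of mine (the function names, argument order, and directory layout are illustrative, not the exact interface OpenNebula passes to these scripts):

```shell
# Sketch of the migration hooks for the ssh mode.
# Argument names and order are assumptions, not the real TM interface.

# Pure helper: the VM's home folder inside a system datastore.
vm_home() {
    local ds_path="$1" vm_id="$2"
    printf '%s/%s' "$ds_path" "$vm_id"
}

# premigrate: copy the VM's home folder to the destination host.
premigrate() {
    local src_host="$1" dst_host="$2" ds_path="$3" vm_id="$4"
    local dir
    dir="$(vm_home "$ds_path" "$vm_id")"
    ssh "$dst_host" "mkdir -p '$ds_path'"
    scp -rp "$src_host:$dir" "$dst_host:$ds_path/"
}

# postmigrate: migration succeeded, clean up the source folder.
postmigrate() {
    local src_host="$1" ds_path="$2" vm_id="$3"
    ssh "$src_host" "rm -rf '$(vm_home "$ds_path" "$vm_id")'"
}

# failmigrate: migration failed, clean up the destination folder.
failmigrate() {
    local dst_host="$1" ds_path="$2" vm_id="$3"
    ssh "$dst_host" "rm -rf '$(vm_home "$ds_path" "$vm_id")'"
}
```

The real scripts would also need error handling and would have to skip the copy entirely when the system datastore is in shared mode.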
The monitoring would need some touches here and there…
We don’t need to move the VM’s folder itself; we just need to preserve the checkpoint file, which can easily be uploaded to the shared LV by the save.fs_lvm script and downloaded back by the restore.fs_lvm script. All the other content can be generated automatically: e.g. the context CD and deployment files are generated on each new run, and the symlinks to physical devices can be calculated from SRC_PATH and DST_PATH, as I wrote above.
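The symlink recalculation could look roughly like this, assuming the usual fs_lvm naming conventions (vg-one-&lt;ds_id&gt; volume groups and lv-one-&lt;vm_id&gt;-&lt;disk_id&gt; volumes); the helper names are mine, not part of the driver:

```shell
# Derive the LV device path for a disk from the VM directory path
# (e.g. /var/lib/one/datastores/0/42) and a disk index.
# The vg-one-<ds_id>/lv-one-<vm_id>-<disk_id> naming is an assumption
# modelled on the fs_lvm driver's conventions.
lv_device() {
    local vm_path="$1" disk_id="$2"
    local ds_id vm_id
    vm_id="$(basename "$vm_path")"
    ds_id="$(basename "$(dirname "$vm_path")")"
    printf '/dev/vg-one-%s/lv-one-%s-%s' "$ds_id" "$vm_id" "$disk_id"
}

# Recreate the disk symlinks in the destination VM directory instead
# of copying them from the source host.
relink_disks() {
    local dst_path="$1"; shift
    local disk
    for disk in "$@"; do
        ln -sf "$(lv_device "$dst_path" "$disk")" "$dst_path/disk.$disk"
    done
}
```

With something like this in place, only the checkpoint file needs to travel through the shared LV; everything else in the VM directory is reconstructed on the destination.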
I want to keep support for both ssh and shared images datastores, but I think the images should always be converted to logical volumes.
The main problem is that transferring images from the images datastore to the system datastore always uses the tm_mad assigned to the images datastore, not the one assigned to the system datastore.
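To make the dispatch problem concrete, here is a toy model (an assumption of mine, not OpenNebula code) of which driver handles which action: the deployment-time copy (clone/ln) goes to the images datastore's TM_MAD, so the system datastore's driver never sees the transfer:

```shell
# Toy model of TM driver dispatch. The action-to-driver mapping is an
# assumption for illustration: clone/ln (image-to-system copy) are
# dispatched to the images datastore's tm_mad, while mv/delete
# (system-datastore housekeeping) use the system datastore's tm_mad.
select_tm_mad() {
    local action="$1" images_tm="$2" system_tm="$3"
    case "$action" in
        clone|ln)  printf '%s' "$images_tm" ;;
        mv|delete) printf '%s' "$system_tm" ;;
    esac
}
```

So even with TM_MAD="ssh" on the system datastore, an fs_lvm images datastore decides how the image lands there, which is exactly the coupling described above.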
Since the first releases of OpenNebula, the LVM driver has used different approaches, from a pure LVM setup to cLVM. At some point interest shifted to the current hybrid approach, which keeps an image repository in plain file format and populates logical volumes for the VMs to run from.
I’d say that a pure volume-based driver still makes sense. However, I would not rewrite the current fs_lvm driver but create a new pure LVM driver instead. This could initially be an addon that could potentially be included in the main distribution.