Requirements to adapt OpenNebula’s LVM datastore for local LV-based storage with SSH transfer support

Hello,
I’m looking for guidance on what’s required to modify OpenNebula’s LVM datastore driver so it can operate with local storage instead of shared SAN.

Right now the LVM driver is designed for shared SAN volumes, as documented in SAN Datastore — OpenNebula 6.10.4 documentation.
What I want instead:

  • The Front-End connects to the target hypervisor via SSH

  • Copies the image to the HV over SSH

  • Creates the LV inside a predefined vg-one-<ds-id> volume group

  • Deploys the VM locally on that LV

  • Live or cold migration should still work between HVs (as long as each HV has its own vg-one-<ds-id>)
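To make the intended flow concrete, here is a minimal sketch of the commands a TM clone-style action would need to run for the steps above. This is not OpenNebula's actual driver code; the `vg-one-<ds-id>` naming follows the layout described above, and the hosts, paths, and LV names are illustrative assumptions. The functions only build the command strings, so the logic can be inspected without touching LVM:

```shell
#!/bin/bash
# Sketch of the command flow for a local-LV "clone" action:
# copy an image from the Front-End to the hypervisor over SSH
# and stream it into a freshly created LV. All names are
# illustrative assumptions, and error handling is omitted.

# Build the remote lvcreate command for a given datastore/LV/size.
lv_create_cmd() {
    local ds_id="$1" lv_name="$2" size_bytes="$3"
    echo "lvcreate -L ${size_bytes}b -n ${lv_name} vg-one-${ds_id}"
}

# Build the pipeline that streams the image into the remote LV.
copy_cmd() {
    local src="$1" host="$2" ds_id="$3" lv_name="$4"
    echo "dd if=${src} bs=4M | ssh ${host} 'dd of=/dev/vg-one-${ds_id}/${lv_name} bs=4M conv=fsync'"
}

lv_create_cmd 105 lv-one-42-0 10737418240
# -> lvcreate -L 10737418240b -n lv-one-42-0 vg-one-105
copy_cmd /var/lib/one/datastores/1/abc 172.16.0.10 105 lv-one-42-0
```

In a real driver these commands would be executed on the remote host (the stock drivers use the `ssh_exec_and_log` helpers for that), and the LV size would come from the image metadata rather than the raw file size.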

My question:
What are the actual technical requirements or components I need to modify?
Is it enough to extend the TM/LVM transfer scripts, or are there additional dependencies in the image manager, drivers, or migration hooks?

I want to understand the full scope before I start patching anything.

Thanks!


Hello,

At first glance, the main problem with local LVs is migration: migrating a VM would mean moving the LV to another hypervisor, or replicating it with dd or a similar command.
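The dd-based replication mentioned above could be sketched roughly as follows. This is an assumption about how a cold-migration step might work, not existing driver code; the host names, VG layout, and LV names are hypothetical, and the function only emits the commands so the sequence is easy to review:

```shell
#!/bin/bash
# Sketch of cold-migration LV replication between hypervisors:
# recreate the LV on the destination, copy the block data with
# dd over SSH, then drop the source copy. All names are
# illustrative; a real implementation needs size detection,
# error handling, and the VM to be powered off or suspended.

migrate_lv_cmds() {
    local src_host="$1" dst_host="$2" ds_id="$3" lv="$4" size_bytes="$5"
    local dev="/dev/vg-one-${ds_id}/${lv}"
    # 1. create an identically sized LV on the destination
    echo "ssh ${dst_host} lvcreate -L ${size_bytes}b -n ${lv} vg-one-${ds_id}"
    # 2. stream the block data from source to destination
    echo "ssh ${src_host} 'dd if=${dev} bs=4M' | ssh ${dst_host} 'dd of=${dev} bs=4M conv=fsync'"
    # 3. remove the source LV once the VM runs on the destination
    echo "ssh ${src_host} lvremove -f ${dev}"
}

migrate_lv_cmds hv1 hv2 105 lv-one-42-0 10737418240
```

For live migration this is not enough: the LV contents change while the VM runs, so something like an iterative copy or a storage-level sync would be needed on top.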

I should check some more details, but this is the first thing that comes to my mind. Your idea is interesting and can be useful in some cases.

Cheers!

Hello @brunorro ,

Any chance you had a look at this? I tried an alternative implementation, but the MV action indeed seems to be a pain to implement.

Hello,

Sorry, I haven't had time to check it out yet.