Hi,
There are a few hundred VMs running on NFS with qcow2 images. Now we would like to move all of them to Ceph. Is there a smooth migration path, or something like an “export” and “import”?
I would have recommended checking out the OpenNebula XML-RPC API, but reading the docs, it seems that uploading an image to your datastore requires using Sunstone (>3.4).
To migrate your existing VMs to OpenNebula, you need to:
Since you’re migrating existing VMs, you can skip IP configuration and context customization when creating your template.
I would recommend migrating your VMs progressively; take some time every now and then to check your cluster health, your nodes’ load average, etc.
Ceph is a perfect match for OpenNebula, until you end up with read operations timing out, root filesystems remounting as read-only, etc. Be sure not to overload your Ceph cluster, or be prepared to add new OSDs.
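For example, a quick check along these lines between batches (this assumes the ceph CLI is available and that you can SSH to your hypervisors; the host names are placeholders):

```
# Rough health check to run between migration batches.
HOSTS="node01 node02 node03"   # hypothetical hypervisor names

ceph -s        # overall cluster health and PG states
ceph df        # pool usage, to see how fast Ceph is filling up

for h in $HOSTS; do
    echo -n "$h: "
    ssh "$h" uptime            # load average on each hypervisor
done
```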
Let us know how it goes!
Hi Samuel,
Thanks. That is the process if we do it manually, but what I need is to automate it. My idea is to “export” from the existing ONE (4.6) and “import” into the new ONE (4.12).
Any suggestion?
No, there is no other way for the moment. Unfortunately you will need to orchestrate the migration manually. This is more or less what I’d do:
And repeat for each VM. You could script this if you have many VMs; a rough sketch is below. Be sure to try it first with a non-production VM.
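Something along these lines, as a very rough and untested sketch (the VM ID, datastore ID and paths are placeholders, and the exact steps depend on your datastore layout and version):

```
# Per-VM migration sketch -- adjust IDs and paths to your environment.
VM_ID=42
CEPH_DS=103                                        # Ceph image datastore ID
SRC_DISK=/var/lib/one/datastores/0/$VM_ID/disk.0   # qcow2 file on the NFS system DS

# 1. Stop the VM so the disk is consistent
onevm poweroff $VM_ID

# 2. Register the disk in the Ceph image datastore. If your version's Ceph
#    drivers do not handle qcow2 on import, convert it to raw first, e.g.:
#    qemu-img convert -f qcow2 -O raw $SRC_DISK /var/tmp/vm$VM_ID.raw
oneimage create -d $CEPH_DS --name "vm$VM_ID-root" --path $SRC_DISK

# 3. Create a template that uses the new image and instantiate it
# onetemplate instantiate <new_template_id> --name "vm$VM_ID-ceph"
```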
Hi @jmelis,
That is what I have in mind also. I have ~700 VMs to be migrated to RBD. Could you advise how to pull the following information (either from the DB or using the one* commands), as this will really help us with the migration:
vmid, vmname, vmowner, vmip, templateid, templatename, imageid, imagename
Note: the data should be extracted following VM -> Template -> Image, so that I can ignore templates and images that are not in use.
The best way to extract that info is by parsing the XML obtained from onevm show -x <id>. AFAICT all the info you want is there.
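Something like this (untested) loop could be a starting point. The XPath expressions follow the standard VM XML, VMs with several NICs or disks will need extra handling, and the template name would need a second lookup with onetemplate show:

```
# Dump one CSV line per VM by parsing `onevm show -x`.
echo "vmid,vmname,vmowner,vmip,templateid,imageid,imagename"
for id in $(onevm list | awk 'NR>1 {print $1}'); do
    xml=$(onevm show -x "$id")
    name=$( echo "$xml" | xmllint --xpath 'string(/VM/NAME)' -)
    owner=$(echo "$xml" | xmllint --xpath 'string(/VM/UNAME)' -)
    ip=$(   echo "$xml" | xmllint --xpath 'string(/VM/TEMPLATE/NIC/IP)' -)
    tid=$(  echo "$xml" | xmllint --xpath 'string(/VM/TEMPLATE/TEMPLATE_ID)' -)
    iid=$(  echo "$xml" | xmllint --xpath 'string(/VM/TEMPLATE/DISK/IMAGE_ID)' -)
    iname=$(echo "$xml" | xmllint --xpath 'string(/VM/TEMPLATE/DISK/IMAGE)' -)
    echo "$id,$name,$owner,$ip,$tid,$iid,$iname"
done
```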
Thanks. Will follow that.