Live Migration with Ceph backend

We have a number of hypervisors and storage nodes configured to use Ceph as the storage backend for OpenNebula.

What are the ways we can get live migration to work, given that the contextualization disks seem to prevent it?

Could you shed some light on why you want to perform live migration?

The reason being:

  • Ceph has replication built in. If your storage is replicated such that it appears on another node, then you are all set.

You can run this Ruby code, which runs “ceph osd stat” and “ceph osd dump”, compares the output with your storage pools (in our case we call them storage1…n), and prints the PGs.

From the PGs you can figure out their placement and see whether they are HA or not.

    require "highline/import"

    say("<%= color('ceph osd stat:', :green, BOLD) %>")
    puts "---------------"

    # Total number of OSDs: capture the output of "ceph osd stat"
    osd = `ceph osd stat`
    if $?.success?
      puts osd
    else
      # (:magenta is assumed; the color symbol was truncated here)
      say("   Error: ceph is down. <%= color('to fix: http://bit.ly/megamfix', :magenta) %>")
    end
    # Number of OSDs up …
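
From there, a minimal sketch of the PG check described above (assuming the JSON layout that “ceph pg dump --format=json” produces, with a top-level pg_stats array; verify against your Ceph version):

    require "json"

    # Dump all placement groups and print each PG's acting OSD set.
    # A PG whose acting set holds more than one OSD is replicated; with
    # the default CRUSH rules those replicas also land on different hosts.
    dump = JSON.parse(`ceph pg dump --format=json`)
    dump["pg_stats"].each do |pg|
      acting = pg["acting"]
      status = acting.size > 1 ? "HA" : "NOT replicated"
      puts "pg #{pg['pgid']} -> osds #{acting.join(',')} (#{status})"
    end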

You can read about data durability in Ceph here: http://docs.megam.io/v1.0/docs/megam_ceph_durability

Hi,

Thanks for your response 🙂
There are a couple of reasons why we want to perform live migration, the main one being that we want to be able to drain our hypervisors for maintenance (the others revolve around some interesting things we are hoping to achieve with our batch system).

I know that the Ceph backend is highly available; however, we still need to be able to migrate a VM from one hypervisor to another.

Please correct me if I am wrong on the below:
My understanding is that the contextualization disks always reside in the system datastore, which can't be put onto the Ceph backend, and that when a VM's images are on a Ceph backend, live migration will not scp the contextualization disk as it usually would on shared or SSH storage.
I am led to believe that our options are either to use a different form of shared storage (NFS, for instance) for the system datastore, or to modify the migration script so that it SCPs the context disk and any other required files.
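
If the NFS option is the way to go, this is roughly what I would try (a sketch; the datastore name and the assumption that /var/lib/one/datastores is NFS-mounted on every hypervisor are mine):

    # system-ds.tmpl: hypothetical system datastore using the shared TM,
    # assuming /var/lib/one/datastores is an NFS mount on every hypervisor
    NAME   = "system_nfs"
    TYPE   = SYSTEM_DS
    TM_MAD = shared

    $ onedatastore create system-ds.tmpl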

Thanks

We’ve been using the scp transfer manager for the system datastore, with RBD for the VM images. The only issue is that we can’t really use swap disks, as those are created as flat files in the system datastore and would slow down the migration process a lot.
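
In outline, the datastore setup looks something like this (a sketch; the names and the pool are illustrative rather than our exact templates):

    # Image datastore: VM images live as RBD volumes in Ceph
    NAME      = "ceph_images"
    DS_MAD    = ceph
    TM_MAD    = ceph
    POOL_NAME = one          # illustrative pool name

    # System datastore: context disks, deployment files, and checkpoints
    # are copied between hosts over ssh/scp
    NAME   = "system_ssh"
    TYPE   = SYSTEM_DS
    TM_MAD = ssh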

This mailing list post was of great help when we set it up: http://lists.opennebula.org/pipermail/users-opennebula.org/2013-April/022705.html

True, shared storage will help, or you can just replicate the /var/lib partition using DRBD. You can use ONE hooks to migrate VMs automatically to another host. (All you have to do is set up the hook and disable the host you wish to take down for maintenance.)
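
A sketch of such a hook in /etc/one/oned.conf, based on the ft/host_error.rb hook shipped with OpenNebula (the exact arguments are an assumption; check the hook's header for the flags your version supports):

    # When a host enters the ERROR state, migrate its VMs to other hosts
    # ("-m" migrates; flags assumed, verify against your version)
    HOST_HOOK = [
        name      = "error",
        on        = "ERROR",
        command   = "ft/host_error.rb",
        arguments = "$ID -m -p 5",
        remote    = "no" ]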