Migrate Live fail: error: unsupported configuration: Unable to find security driver for label apparmor

Hi,

Does anybody know how to address this error? I'd appreciate any thoughts.

Mon Mar 30 16:14:49 2015 [Z0][LCM][I]: New VM state is RUNNING
Mon Mar 30 16:21:37 2015 [Z0][LCM][I]: New VM state is MIGRATE
Mon Mar 30 16:21:37 2015 [Z0][VMM][I]: Successfully execute transfer manager driver operation: tm_premigrate.
Mon Mar 30 16:21:39 2015 [Z0][VMM][I]: ExitCode: 0
Mon Mar 30 16:21:39 2015 [Z0][VMM][I]: Successfully execute network driver operation: pre.
Mon Mar 30 16:21:40 2015 [Z0][VMM][I]: Command execution fail: /var/tmp/one/vmm/kvm/migrate 'one-81' 'node-002.dc1.xxx.com' 'node-104.dc1.xxx.com' 81 node-104.dc1.xxx.com
Mon Mar 30 16:21:40 2015 [Z0][VMM][E]: migrate: Command "virsh --connect qemu:///system migrate --live one-81 qemu+ssh://node-002.dc1.xxx.com/system" failed: error: unsupported configuration: Unable to find security driver for label apparmor
Mon Mar 30 16:21:40 2015 [Z0][VMM][E]: Could not migrate one-81 to node-002.dc1.xxx.com
Mon Mar 30 16:21:40 2015 [Z0][VMM][I]: ExitCode: 1
Mon Mar 30 16:21:40 2015 [Z0][VMM][I]: Failed to execute virtualization driver operation: migrate.
Mon Mar 30 16:21:40 2015 [Z0][VMM][E]: Error live migrating VM: Could not migrate one-81 to node-002.dc1.xxx.com
Mon Mar 30 16:21:40 2015 [Z0][LCM][I]: Fail to live migrate VM. Assuming that the VM is still RUNNING (will poll VM).

node-002 is already running several VMs and has capacity to take this new one. I'm not sure why it failed. Any insight would be appreciated. Thank you.
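The error comes from libvirtd on the destination host: the VM was labelled with the AppArmor security driver on the source, but the destination libvirtd cannot load that driver. A few diagnostic commands you could run on node-002 to compare the two hosts (a sketch, assuming AppArmor-based hosts such as Ubuntu; file paths and output will vary by distribution):

```shell
# Are AppArmor profiles for libvirt loaded on this host?
sudo aa-status | grep -i libvirt

# Which security model does libvirtd itself report?
virsh --connect qemu:///system capabilities | grep -A2 secmodel

# Is there an explicit security_driver override in qemu.conf?
grep '^security_driver' /etc/libvirt/qemu.conf
```

If the reported secmodel differs between the source and destination, aligning `security_driver` in `/etc/libvirt/qemu.conf` on both hosts (e.g. `security_driver = "apparmor"`, or `"none"` to disable labelling) and restarting libvirtd may resolve the mismatch.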

Ok… after I restarted libvirtd on node-002, I can Migrate Live the new VM to node-002 successfully. However, there is a side effect: all the existing VMs on node-002 are now in the POWEROFF state even though they are still running. Does anyone have an idea how to make onevm see these VMs as running again? I tried hitting the Play icon, but the log said the VM is already running and set the state back to POWEROFF.
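To confirm it is only OpenNebula's view that is stale (and not the guests themselves), you can compare what libvirt and OpenNebula each report. A sketch, using VM ID 81 from the log above as an example:

```shell
# On node-002: are the guests really still running under libvirt?
virsh --connect qemu:///system list --all

# On the front-end: what state does OpenNebula believe they are in?
onevm list
onevm show 81   # inspect the state/LCM_STATE fields for one of them
```

If `virsh list` shows them running while `onevm list` says POWEROFF, the hypervisor is fine and only the monitoring state in OpenNebula needs to be reconciled.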

BTW, this is on OpenNebula 4.10.0.

Hi,

Are the VMs shown in the `virsh list` output? If so, I'd recommend upgrading to 4.12. We improved the automatic recovery process, and in your use case the VMs should be moved back to RUNNING automatically.

Cheers