I’m trying to figure out how to start VMs after a controller host failure.
I have a number of OpenNebula nodes in an HA cluster (controller plus KVM host). When a passive node fails, VMs restart fine, but when the controller node fails (an unexpected reboot, for example), the VMs that were running on that host don’t start or get rescheduled. I was trying to set a VM_HOOK like
VM_HOOK = [
    name      = "hook_vm_on_unknown",
    on        = "CUSTOM",
    state     = "ACTIVE",
    lcm_state = "UNKNOWN",
    command   = "ft/hook_vm_on_unknown.sh",
    arguments = "$ID $PREV_STATE $PREV_LCM_STATE" ]
but I don’t know which states I should set for the hook to fire.
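In case it helps, here is a minimal sketch of what I have in mind for the hook script itself. This is hypothetical: the 3/3 numeric codes for ACTIVE/RUNNING and the use of `onevm boot` to restart an UNKNOWN VM are my assumptions, not something I've confirmed for this version.

```shell
#!/bin/sh
# Hypothetical sketch of ft/hook_vm_on_unknown.sh.
# The arguments come from the hook's ARGUMENTS line:
#   $ID $PREV_STATE $PREV_LCM_STATE
VM_ID="$1"
PREV_STATE="$2"      # numeric VM state before the change
PREV_LCM_STATE="$3"  # numeric LCM state before the change

# Only act on VMs that were ACTIVE/RUNNING before the failure
# (3/3 are the numeric codes I assume for ACTIVE and RUNNING),
# so VMs that were deliberately powered off are left alone.
if [ "$PREV_STATE" = "3" ] && [ "$PREV_LCM_STATE" = "3" ]; then
    onevm boot "$VM_ID"
fi
```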
In logs I can see
Sun Jul 17 10:14:22 2016 [Z0][LCM][I]: VM running but monitor state is POWEROFF
Sun Jul 17 10:14:22 2016 [Z0][VM][I]: New LCM state is SHUTDOWN_POWEROFF
Sun Jul 17 10:14:22 2016 [Z0][VM][I]: New state is POWEROFF
Sun Jul 17 10:14:22 2016 [Z0][VM][I]: New LCM state is LCM_INIT
Is there any way to start only the VMs that were running before the failure? Also, will the VMs start if the controller is brought up on another node?
Thanks and best regards,