After a host reboots (be it gracefully or after some failure), virtual machines that were running on that host remain shut down. Is there a way to make them automatically start after the host boots up (without opennebula, I’d just do “virsh autostart vm-name”)? Otherwise the VMs remain off until someone manually starts them…
That’s more my question in fact. I know where the docs are, but I still don’t understand how you tell OpenNebula to autostart specific VMs, and not ones that are in POWEROFF for a good reason.
I tried writing a custom hook which is triggered when state = ACTIVE and LCM state = UNKNOWN to run onevm resume (those states show up in Sunstone after a host reboot). It’s not working. According to http://dev.opennebula.org/issues/1639#note-7 the issue should be solved, but on the other hand issue 1290 is still open.
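For reference, the kind of hook definition I mean in oned.conf looks roughly like this (the hook name and the resume_unknown.sh script are placeholders of mine, and the exact CUSTOM hook syntax should be checked against your version’s documentation):

VM_HOOK = [
    name      = "resume_unknown",
    on        = "CUSTOM",
    state     = "ACTIVE",
    lcm_state = "UNKNOWN",
    command   = "resume_unknown.sh",
    arguments = "$ID" ]

where resume_unknown.sh would simply call onevm resume on the VM ID it receives.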
I also tried setting LIVE_RESCHEDS = 1 in sched.conf, with no improvement.
One solution could be modifying the host hook name=error in oned.conf, which calls ft/host_error.rb. The stock config (in 4.6.2 / Ubuntu package on 14.04) recreates a VM from its image/template. I don’t quite understand the point of that in a cluster with shared storage: migrating the VM to some other host would be sufficient and not as destructive.
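For context, that hook is defined in oned.conf roughly like the following (verify against your own file); the arguments select the recovery action:

# recovery action flags for ft/host_error.rb:
#   -r delete the VMs and recreate them, -m migrate them to another
#   host (needs shared storage), -d just delete them, -p <n> give the
#   host n monitoring cycles to come back before acting
HOST_HOOK = [
    name      = "error",
    on        = "ERROR",
    command   = "ft/host_error.rb",
    arguments = "$ID -r",
    remote    = "no" ]

so if your version’s host_error.rb supports the -m mode, swapping the destructive recreate for a migration should be a one-argument change.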
Is this issue solved in the latest version? I’d really love to migrate from VMware to OpenNebula/KVM.
The host fails and the VMs need to be restarted on another host. This is achieved with the HA hook, which hooks on the Host ERROR state, not on the VM states. After a host goes to ERROR, the VMs on that host will be in UNKNOWN and can be “restarted”. Note that the original host is down and there is no hope of contacting the hypervisor on that host, so no live migration; a cold migration can work using a shared system datastore.
The host fails, no recovery action needs to be taken, but once the host reboots the VMs that were running on it need to be restarted. This can be achieved by configuring the hypervisor (e.g. the on_failure attribute in Xen and so on). It can also be triggered from OpenNebula, again hooking on the host states: for example, when the host comes back to ON, get the VMs on the host (e.g. with onehost show), check if they are in UNKNOWN, and then resume them on the same host, as in the sketch below.
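A minimal sketch of that check-and-resume logic, however it ends up being triggered (hook, cron or a boot script). Assumptions of mine: the onevm CLI is in the PATH of a user with oneadmin credentials, the default onevm list column layout (STAT in column 5, HOST in column 8), and the “unkn” abbreviation for UNKNOWN; verify those against your installation:

#!/bin/bash
# Resume every VM on the given host that is stuck in UNKNOWN.
# $1 is the host name as OpenNebula knows it.
HOST_NAME="$1"

# onevm list prints one VM per line; pick the IDs of the VMs whose
# STAT is "unkn" and that sit on the host we care about.
onevm list | awk -v h="$HOST_NAME" '$5 == "unkn" && $8 == h {print $1}' |
while read -r VMID; do
    onevm resume "$VMID"   # resume boots a VM that is in UNKNOWN
done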
but there is no file created in the shared NFS directory /var/lib/one/log. In the documentation for host hooks, only the states CREATE, ERROR and DISABLE are mentioned.
My bad, you are right, I mixed up the VM and Host triggers.
So, if the host fails and no recovery action needs to be taken, but once the host reboots the VMs running on it need to be restarted, you should resort to the hypervisor’s capabilities (for example, adding virsh autostart to the deploy script should be straightforward).
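A sketch of what that could look like at the end of the KVM deploy script (/var/lib/one/remotes/vmm/kvm/deploy in a stock install); the variable names follow the stock script’s style but should be checked against your version. One caveat: the stock script starts domains with virsh create, which makes them transient, and libvirt refuses to set autostart on a transient domain, so the domain would have to be defined persistently (virsh define plus virsh start) for this to take effect:

# ... after the existing "virsh create" call has succeeded ...

# Extract the domain name (e.g. one-42) from virsh's
# "Domain one-42 created from ..." message, as the script
# already does for its own output.
NAME=$(echo "$DATA" | sed -e 's/^Domain //' -e 's/ created from .*$//')

# Mark the domain for libvirt autostart (persistent domains only).
virsh --connect "$LIBVIRT_URI" autostart "$NAME"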
Since I was unable to find a solution to this, and even with OpenNebula 5.5.90 I see that there is no option to autostart virtual machines after an OpenNebula node/server restart/reboot, power failure or whatever, I came up with this. Probably not the best way to achieve it, but as long as it works it’s just fine.
So I have OpenNebula 5.5.90 installed on a CentOS 7.x server. I have added this piece of code to the
# Wait for pacemaker/corosync to mount the shared DRBD partition
sleep 60
# Assumed values, adjust to your setup: the mount point of the shared
# partition and the IDs of the VMs that must always be on
IS_MOUNTED=$(mount | grep /var/lib/one)
ALWAYS_ON_KVMS="0 1 2"
if [ -z "$IS_MOUNTED" ]
then
    echo "Storage partition is not mounted on the server(s)"
else
    echo "Storage partition is mounted on the server(s)"
    for i in $ALWAYS_ON_KVMS; do onevm resume "$i"; done
fi
You might not need that 60-second delay; in my case it is necessary since there are 2 OpenNebula servers which share the same partition, which is in fact a DRBD resource mounted by pacemaker/corosync.
The only issue with this solution is that, if you create more VMs, you have to update the file with their IDs.
Hope it helps!
Note: pay attention to quotes etc. since the forum seems to be reformatting the text, and some special chars like ` and " are not displayed correctly.