Move SHUTDOWN VMs from hosts in error

Hello,

Using ONE 4.10 with shared datastores.

I have two VMs in SHUTDOWN on a host named nebula1.

This host is down and I want the VMs to boot on another host.

I tried ./remotes/hooks/ft/host_error.rb 0 --recreate but it does not seem to work for VMs in SHUTDOWN.

Any hints?

Regards.

You can make those VMs progress with the onevm recover actions, flagging the
shutdown operation either as failed or successful. This will transition the VM
to its final state (DONE, FAILED, etc.).
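A sketch of that suggestion for VM 724 from this thread (exact flag names may vary between ONE versions; check onevm recover --help before running):

```shell
# Sketch, OpenNebula 4.x CLI (verify flags with: onevm recover --help).
# Mark the pending shutdown as successful -> VM moves to its final state
onevm recover 724 --success
# ...or mark it as failed, rolling the operation back
onevm recover 724 --failure
```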

Cheers

“Ruben S. Montero” forum@opennebula.org writes:

You can make those VMs progress with the onevm recover actions, flagging the
shutdown operation either as failed or successful. This will transition the VM
to its final state (DONE, FAILED, etc.).

I tried, but I got a “Wrong state to perform action” error :-/

Tomorrow I’ll attach logs for this.

Regards.

Daniel Dehennin
Retrieve my GPG key: gpg --recv-keys 0xCC1E9E5B7A6FE2DF
Fingerprint: 3E69 014E 5C23 50E8 9ED6 2AAD CC1E 9E5B 7A6F E2DF

I’d also be interested in the VM states; onevm show -x would help.

“Ruben S. Montero” forum@opennebula.org writes:

I’d also be interested in the VM states; onevm show -x would help.

Here are the logs for VM 724

Thu Mar 19 08:36:30 2015 [Z0][ReM][D]: Req:6672 UID:0 VirtualMachineRecover invoked , 724, true
Thu Mar 19 08:36:30 2015 [Z0][ReM][E]: Req:6672 UID:0 VirtualMachineRecover result FAILURE [VirtualMachineRecover] Wrong state to perform action

Here is the state of the VM 724

<VM>
    <ID>724</ID>
    [...]
    <LAST_POLL>1426670854</LAST_POLL>
    <STATE>8</STATE>
    <LCM_STATE>0</LCM_STATE>
    <RESCHED>0</RESCHED>
    <STIME>1426589313</STIME>
    <ETIME>0</ETIME>
    <DEPLOY_ID>one-724</DEPLOY_ID>
    <MEMORY>0</MEMORY>
    <CPU>0</CPU>
    <NET_TX>789521</NET_TX>
    <NET_RX>14216909</NET_RX>
    [...]
</VM>

The other VM

<VM>
    <ID>52</ID>
    [...]
    <LAST_POLL>0</LAST_POLL>
    <STATE>8</STATE>
    <LCM_STATE>0</LCM_STATE>
    <RESCHED>0</RESCHED>
    <STIME>1421762562</STIME>
    <ETIME>0</ETIME>
    <DEPLOY_ID>one-52</DEPLOY_ID>
    <MEMORY>0</MEMORY>
    <CPU>0</CPU>
    <NET_TX>257270817</NET_TX>
    <NET_RX>10132972350</NET_RX>
    [...]
</VM>
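For reference, the STATE 8 / LCM_STATE 0 combination in both dumps maps to POWEROFF in the OpenNebula 4.x numeric state table, which is why recover reports a wrong state. A small sketch decoding the value from a onevm show -x dump (the state table is taken from the ONE 4.x documentation; the inline XML is a trimmed copy of the dump above):

```shell
# Decode the numeric <STATE> from a `onevm show -x` dump.
# State table per the OpenNebula 4.x documentation: 8 = POWEROFF.
xml='<VM><ID>724</ID><STATE>8</STATE><LCM_STATE>0</LCM_STATE></VM>'
state=$(printf '%s' "$xml" | sed -n 's:.*<STATE>\([0-9]*\)</STATE>.*:\1:p')
case "$state" in
  0) name=INIT ;;       1) name=PENDING ;;    2) name=HOLD ;;
  3) name=ACTIVE ;;     4) name=STOPPED ;;    5) name=SUSPENDED ;;
  6) name=DONE ;;       7) name=FAILED ;;     8) name=POWEROFF ;;
  9) name=UNDEPLOYED ;; *) name=UNKNOWN ;;
esac
echo "STATE $state = $name"   # STATE 8 = POWEROFF
```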

The host is now up, so my problem will be solved soon, but if you want
more information I will not touch VM 724.

Regards.

Daniel Dehennin

Hello,

Any idea for that issue?

Should I open a ticket?

Regards.

Hi Daniel,

I missed your last post :S Anyway, the VMs are in POWEROFF; they can only be migrated once they are back in RUNNING.

We have an open issue to address this:

http://dev.opennebula.org/issues/3654
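In the meantime, a possible sequence once the VM's current host is reachable again might look like the following. This is a sketch, not verified on 4.10, and <host> is a placeholder for whatever target host you pick:

```shell
# Sketch: bring the POWEROFF VM back to RUNNING on its current host,
# then move it elsewhere (check exact flags with: onevm --help).
onevm resume 724            # boots the VM again on its current host
onevm migrate 724 <host>    # once RUNNING, migrate it to another host
```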

Cheers