I just have this problem: one of my VMs is stuck in a FAILED state after upgrading from 4.10 to 5.0.2. Executing onevm recover --delete does nothing.
It seems to be a problem related to your NFS mount point. Try unmounting and mounting it again; you can also remove /one/datastores/0/446/ manually and try again.
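As a rough sketch, something like the following; I'm assuming the system datastore is mounted at /one/datastores/0 and that 446 is the stuck VM's ID (both taken from the path above):

    # remount the NFS-backed system datastore
    umount /one/datastores/0
    mount /one/datastores/0
    # remove the leftover VM directory, then retry the recover
    rm -rf /one/datastores/0/446
    onevm recover 446 --delete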
I cleaned all those paths manually and tried to recover the VM again. Nothing happened. When I run recover --delete it reports success, but the VM is still there. When I run recover --recreate, for example, it freezes and I have to restart the OpenNebula process. The image attached to the VM is marked USED_PERS, so I am unable to clone it. I can copy that image manually at the filesystem level, create a new image from it, and run the machine with a new template. However, that one failed VM would still be there. Is there any way to remove that VM directly from the database? We're running SQLite here.
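For the record, the copy-and-reregister workaround looked roughly like this; the source file and image name below are placeholders, and the datastore IDs/names will differ on your setup:

    # copy the persistent image out of the image datastore at the FS level
    cp /var/lib/one/datastores/1/<image-file> /var/tmp/rescued-disk.img
    # register the copy as a new persistent image, then use it in a new template
    oneimage create -d default --name rescued-disk --path /var/tmp/rescued-disk.img --persistent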
Finally I solved it by manually editing one.db. I just edited the vm_pool table and changed the state column from 7 (the old FAILED state) to 6 (DONE). Then I ran onedb fsck and started OpenNebula again. The VM is gone and everything is fine now!
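In case it helps anyone, a minimal sketch of that fix, assuming the default SQLite path /var/lib/one/one.db, VM ID 446, and a systemd service named opennebula; state 7 was FAILED in 4.x (removed in 5.0), and state 6 is DONE, which is why fsck can then clean the VM up. Back up one.db before touching it:

    systemctl stop opennebula                       # never edit one.db while oned is running
    cp /var/lib/one/one.db /var/lib/one/one.db.bak  # keep a backup copy
    sqlite3 /var/lib/one/one.db \
        "UPDATE vm_pool SET state = 6 WHERE oid = 446;"
    onedb fsck --sqlite /var/lib/one/one.db         # reconcile pool counters
    systemctl start opennebula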