I am using version 5.6.0 and I ran into something surprising during a test.
I have a very small lab with 2 hosts and VMs on both of them. While playing with the host status, I found that a VM can “RUN” even if its host is “OFFLINE”.
This is quite surprising.
More details below.
Thank you for your help,
Jean-Philippe
Versions of the related components and OS (frontend, hypervisors, VMs):
Front-end and hypervisors running OpenNebula 5.6.0
OS: CentOS 7
VMs: Ubuntu 18.04 image from the OpenNebula Marketplace
Steps to reproduce:
Here is my test (a rough CLI equivalent is sketched after the list):
1. Turn off all VMs running on host 1.
2. Change the status of host 1 to “disable”.
3. Try to start a VM previously running on host 1: it works, the VM ends up with status “RUNNING” on host 1.
4. Power off the VM.
5. Change the status of host 1 to “offline”.
6. Same as 3., start the same VM: it works too, the VM is “RUNNING” on host 1.
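For reference, a minimal CLI sketch of the same sequence, assuming hypothetical IDs (host 1 has host ID 1, the test VM has ID 42); names and IDs will differ in your environment:

    onevm poweroff 42     # turn off a VM running on host 1
    onehost disable 1     # put host 1 in the “disable” state
    onevm resume 42       # the VM still starts and reaches RUNNING on host 1
    onevm poweroff 42
    onehost offline 1     # put host 1 in the “offline” state
    onevm resume 42       # the VM again reaches RUNNING on host 1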
Current results:
The VM is RUNNING while its host is DISABLED or OFFLINE.
Expected results:
A VM must not RUN on an OFFLINE host (it can run on a DISABLED host under certain circumstances).
Hi, I personally don’t use offline, because the last time I used it, in early 5.x releases, it shut down the whole node (which may be fine, but not for me, so I don’t use it). I use disable more often, also with running VMs; it prevents the scheduler from using that node for the deployment of new VMs.
So it is absolutely OK to have this behaviour with the disabled state, and when you put a node into the offline state it should shut the node down.
Thank you for your answer.
When I disable a host and the VMs on that node are powered off, I expect that when I turn them back on they are relocated automatically to another available node with similar characteristics if possible, or that the operation fails because no resources are available.
Here the VM starts on the same node while it is disabled. For me, disable means: don’t start anything new on the node, but leave the VMs that are already running.
And regarding an offline node: when the node becomes offline, any VM still on it should be relocated before being started again, or the operation must fail.
For your case, I think that if you set the status to “offline” while VMs are running, it should raise an alert saying that VMs are still running on it and do nothing (and not turn off all the VMs, as seemed to be the case for you).
So how do you put a host into “maintenance” without shutting down all its VMs and without doing “manual” operations to relocate the VMs?
Hi, I understand you, but there is no automatic way to do this. In the VM list you can search for all VMs running on a particular node and call the reschedule action, which migrates the VMs to other available nodes (live migration is also supported).
My way is: disable the host, reschedule its VMs, do the maintenance, enable the host.
We have an API, so this can be done through API calls and automated, but not from Sunstone.
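A rough CLI sketch of that workflow, assuming a hypothetical host named host1 and the default column layout of onevm list (adjust the awk field and the names to your setup):

    onehost disable host1      # keep the scheduler from placing new VMs here
    # flag every VM currently on host1 for rescheduling
    onevm list -l ID,NAME,STAT,HOST | awk '$4 == "host1" {print $1}' | while read vmid; do
        onevm resched "$vmid"
    done
    # ... do the maintenance ...
    onehost enable host1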
… or use onehost flush $hostid from the front-end’s command line to do the job.
You’ll need to reconfigure the OpenNebula scheduler by setting LIVE_RESCHEDS = 1 in /etc/one/sched.conf so the scheduler will live-migrate the VMs when flushing a host.
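For example (the LIVE_RESCHEDS setting comes from sched.conf as mentioned above; restarting the opennebula service so the scheduler reloads its configuration is an assumption about a default 5.6 install):

    # /etc/one/sched.conf
    LIVE_RESCHEDS = 1

    # restart so the scheduler picks up the change, then flush the host
    systemctl restart opennebula
    onehost flush <hostid>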
I tried the “flush” option too, but it did nothing. Maybe because I didn’t check sched.conf.
However, I didn’t get any log or error saying that I was trying something that is not activated.
I agree this information is inside the VM details. However, it would be nice to have a message like “Something went wrong, please check the VM details” when you run a CLI command. At the moment there is no output at all, which in that case suggests everything is OK.
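In the meantime, a few places to look when a flush or reschedule silently does nothing (standard log locations on the front-end; the exact messages vary by version):

    onevm show <vmid>                  # VM state and the last error message, if any
    tail -f /var/log/one/sched.log     # scheduler decisions, e.g. why a VM was not migrated
    tail -f /var/log/one/<vmid>.log    # per-VM log on the front-end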