VM "running" on an Offline node

Dear all,

I am using version 5.6.0 and I ran into something surprising during a test.

I have a very small lab with 2 hosts, with VMs on both. While playing with the host status I found that a VM can “RUN” even if its host is “OFFLINE”.
This is quite surprising.

More details below

Thank you for your help

Jean-Philippe


Versions of the related components and OS (frontend, hypervisors, VMs):
Front-End, Hypervisors running OpenNebula 5.6.0
OS: CentOS 7
VMs: Ubuntu 18.04 image from the OpenNebula Marketplace

Steps to reproduce:

Here is my test:

  1. Turn off all VMs running on host 1
  2. Change the status of host 1 to “disable”
  3. Try to start a VM previously running on host 1: it works, the VM ends up with status “RUNNING” on host 1
  4. Power off the VM
  5. Change the status of host 1 to “offline”
  6. Same as 3, start the same VM: it works too, the VM is “RUNNING” on host 1
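
For reference, a rough CLI equivalent of the steps above (VM and host IDs are placeholders, and I am assuming “resume” is the action behind starting a powered-off VM in Sunstone):

  # power off a VM on host 1 and take the host out of scheduling
  onevm poweroff <vmid>
  onehost disable <hostid>
  onevm resume <vmid>     # comes back RUNNING on the disabled host

  # same thing with the host marked offline
  onevm poweroff <vmid>
  onehost offline <hostid>
  onevm resume <vmid>     # still comes back RUNNING on the offline host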

Current results:

The VM is RUNNING even though its host is DISABLED or OFFLINE

Expected results:

A VM must not RUN on an OFFLINE node (it can run on a DISABLED node under certain circumstances)


Hi, I personally don’t use offline, because the last time I used it, in the early 5.x releases, it shut down the whole node (which is a good thing, but not for me, so I don’t use it). I use disable more often, also with running VMs; it prevents the scheduler from using that node for the deployment of new VMs.

So it is absolutely OK to have this behaviour with the disabled state, and when you put a node into the offline state it should shut down the node.

Hi Kristian,

Thank you for your answer.
When I disable a host and its VMs are powered off, I expect that when I turn them on again they are relocated automatically to another available node with the same characteristics if possible, or that the start fails because no resources are available.
Here the VM starts on the same node while it is disabled. For me, disable means: don’t start anything new, but let the VMs already on the node keep running.

And about an offline node: when the node becomes offline, any VMs still on it should be relocated before starting, or the start must fail.

For your case, I think that if you set the status to “offline” while VMs are running, it should raise an alert saying VMs are still running on it and do nothing (and not turn off all VMs, as seemed to be the case for you).

So how do you put a host into “maintenance” without shutting down all VMs and without doing “manual” operations to relocate the VMs?

Hi, I understand you, but there is no automatic way to do this. In the VM list you can search for all VMs running on a particular node and call the reschedule action, which migrates the VMs to other available nodes (live migration is also supported).

My way is: disable host, reschedule VMs, do maintenance, enable host.

We have an API, so this can be done with API calls and automated, but not from Sunstone.
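
For example, a minimal sketch of that cycle from the command line (host name and VM IDs are placeholders; I grep the VM list instead of relying on a specific filter flag):

  # keep the scheduler from placing new VMs on the host
  onehost disable <hostid>

  # find the VMs currently running on that host and ask the scheduler to move them
  onevm list | grep <hostname>
  onevm resched <vmid>

  # do the maintenance, then put the host back into service
  onehost enable <hostid>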

Yes, sure, I will use the API for this kind of operation. I will try this.

But if the host turns off for any reason, what happens to the VMs running on it? Are they relocated, or should we wait for the host to become available again?

What we are talking about is a new feature, so you can file a feature request on GitHub.

Yes, I understand; I will open a new feature request.

… or use onehost flush $hostid from the front-end’s command line to do the job :wink:

You’ll need to reconfigure the OpenNebula scheduler, setting LIVE_RESCHEDS = 1 in /etc/one/sched.conf, so that the scheduler will live-migrate the VMs when flushing a host.
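
To make that concrete, roughly (host ID is a placeholder; I restart the whole opennebula service here, a scheduler-only restart may also be enough):

In /etc/one/sched.conf:

  LIVE_RESCHEDS = 1

Then on the front-end:

  systemctl restart opennebula    # so the scheduler re-reads sched.conf
  onehost flush <hostid>          # migrate everything off the host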

My 2c…

Best,
Anton Todorov


So there could be a feature request to just add a “flush” button to Sunstone :slight_smile:

I tried the “flush” option too, but it did nothing. Maybe that is because I didn’t check sched.conf.
However, I didn’t get any log or error message saying that I was trying something that isn’t activated.

I will also take a look at this.

Thank you @atodorov_storpool and @feldsam for your help

When the scheduler cannot reschedule a VM, an error appears in the VM details. Check it.
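
From the CLI it should be visible in the VM’s user template, if I remember right as a SCHED_MESSAGE attribute (attribute name from memory, so treat it as an assumption):

  # show the VM and look for the scheduler's placement message, if any
  onevm show <vmid> | grep SCHED_MESSAGE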

I agree this is inside the VM details. However, it would be nice to have a message like “Something went wrong, please check the VM details” when you run a CLI command. For the moment there is nothing, which in that case means everything is OK.

But this is not related to this thread ;).