SCHED_DS_REQUIREMENTS does not work

Hi,

This doc explains how to set the correct datastore for a template or VM:

https://docs.opennebula.org/5.2/operation/references/template.html

Nevertheless, it does not seem to work properly. I have set:

SCHED_DS_REQUIREMENTS = "NAME=vidc-vm"
and
SCHED_DS_REQUIREMENTS = "NAME=\"vidc-vm\""
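For what it's worth, the escaping the linked doc uses for quoted values is a backslash-escaped inner quote; a minimal sketch, with the datastore name copied exactly as it appears in the onedatastore list output below:

```
SCHED_DS_REQUIREMENTS = "NAME=\"vdic-vm\""
```

Note the spelling: onedatastore list reports the datastore as vdic-vm, so any other spelling in the expression would match no datastore at all.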

[oneadmin@vdicone01 ~]$ onedatastore list
  ID NAME                SIZE AVAIL CLUSTERS     IMAGES TYPE DS      TM      STAT
   0 system                 - -     0                 0 sys  -       ssh     on
   1 default             6.7G 26%   0                 0 img  fs      ssh     on
   2 files               6.7G 26%   0                 0 fil  fs      ssh     on
 107 vdic-vm                - -     0,100             0 sys  -       ssh     on
 108 vdic-core           6.7G 26%   0,100             1 img  fs      shared  on

But the logs show:

> Sun Dec  4 18:58:44 2016 [Z0][SCHED][D]: Getting VM and Host information. Total time: 0.02s
> Sun Dec  4 18:58:44 2016 [Z0][SCHED][D]: Host 1 discarded for VM 28. It does not fulfill SCHED_REQUIREMENTS.
> Sun Dec  4 18:58:44 2016 [Z0][SCHED][D]: Match-making results for VM 28:
>         Cannot schedule VM, there is no suitable host.

If I comment out SCHED_DS_REQUIREMENTS, it seems to work fine.

Any help will be welcome,

Hi Oscar,

I think this is because the vdic-vm datastore is a "system" type datastore.
That's where ONE places symlinks to, or copies of, the images that are in use by VMs. If you change vdic-vm to vdic-core (which is an image-type datastore), it should work.
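To double-check what type a datastore is, onedatastore show prints a TYPE field (a sketch; the ID 107 is taken from Oscar's listing):

```
$ onedatastore show 107
```

For vdic-vm the TYPE field should indicate a system datastore, and for vdic-core an image datastore.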

Hi Roland

Thanks for the clarification, but I want to be able to migrate the VM between my hosts, so they need to run on a shared system datastore, don't they?

Thanks a lot.

Just checking: the original system datastore and the host are members of the same cluster,
but is the new datastore available to that cluster as well?

Morning oscar!
As Roland suggests, could you run onehost list and oneimage list so we can confirm why the scheduler has discarded Host 1?

Cheers!

Of course I can:

[oneadmin@vdicone01 ~]$ oneimage list
  ID USER       GROUP      NAME            DATASTORE     SIZE TYPE PER STAT RVMS
  10 oneadmin   oneadmin   vdicimage       vdic-core       8G OS   Yes used    1
[oneadmin@vdicone01 ~]$ onehost list
  ID NAME            CLUSTER   RVM      ALLOCATED_CPU      ALLOCATED_MEM STAT
   1 vdicone01       VDICube     1     50 / 400 (12%)   512M / 5.7G (8%) on
[oneadmin@vdicone01 ~]$

** You can see STAT=used because I have commented out SCHED_DS_REQUIREMENTS.

Thanks a lot!

Hi, has anybody experienced the same issue?

thanks a lot.

Hi!
Just for clarification, in case any user visits this post: if you want to live migrate a VM, you need the shared transfer method and a distributed FS such as NFS, Ceph, Lustre or Gluster, as stated here. If you don't need live migration, you can use the SSH transfer method, where a VM is migrated by powering it off on one node and then resuming it automatically on a different node.
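If you go the shared route, here is a minimal sketch of a system datastore definition using the shared TM (the name is a placeholder; it assumes the same distributed-FS mount is visible on every host):

```
NAME   = shared-system
TYPE   = SYSTEM_DS
TM_MAD = shared
```

Save it to a file and register it with onedatastore create <file>.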

Cheers!

Hi,

This is my current configuration:

[root@vdicone01 ~]# onedatastore list
  ID NAME                SIZE AVAIL CLUSTERS     IMAGES TYPE DS      TM      STAT
   0 system                 - -     0                 0 sys  -       ssh     on
   1 default             6.7G 23%   0                 0 img  fs      ssh     on
   2 files               6.7G 23%   0                 0 fil  fs      ssh     on
 117 vdic-core            40G 95%   0,100             1 img  fs      shared  on
 118 vdic-core-vm        100G 100%  0,100             0 sys  -       shared  on

117 and 118 are gluster datastores.

I get exactly the same error regarding SCHED_DS_REQUIREMENTS.

Thanks a lot.

Hi oscar!
I'm reviewing your first post. If you set SCHED_DS_REQUIREMENTS = "NAME=vidc-vm", the VM is not deployed because "It does not fulfill SCHED_REQUIREMENTS".

The funny thing is that it complains about SCHED_REQUIREMENTS, not SCHED_DS_REQUIREMENTS, which is a different attribute!

As explained here, Host 1 is discarded because of SCHED_REQUIREMENTS: "Those hosts that do not meet the VM requirements (see the SCHED_REQUIREMENTS attribute) or do not have enough resources (available CPU and memory) to run the VM are filtered out."

If it were a problem with datastores, the message, according to the source code, would be something like "It does not fulfill SCHED_DS_REQUIREMENTS.", with DS.

Sorry, I have to ask: are you sure that, after doing some tests, the template is using SCHED_DS_REQUIREMENTS="NAME=vidc-vm" and not SCHED_REQUIREMENTS="NAME=vidc-vm"? If you use SCHED_REQUIREMENTS, that host will be discarded, because the scheduler would be trying to find a host with NAME=vidc-vm, which forces it to say no!
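One quick way to rule that out is to print what the template (and the VM) actually stores (a sketch; substitute your template ID, and 28 is the VM ID from your log):

```
$ onetemplate show <template_id> | grep SCHED
$ onevm show 28 | grep SCHED
```

That will show whether the attribute ended up as SCHED_REQUIREMENTS or SCHED_DS_REQUIREMENTS.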

Cheers!

Hi,

this is my vm template:

[oneadmin@vdicone01 ~]$ cat vdic-vdicdb01.templ
NAME   = vdicdb01
CPU    = 0.5
MEMORY = 512
VCPU = 2

DISK = [ IMAGE   = "vdicimage" ]

NIC    = [ NETWORK = "blue", IP = 192.168.2.3]
NIC    = [ NETWORK = "interconnect", IP = 192.168.100.3]

NIC_DEFAULT = [ MODEL = "virtio" ]

#SCHED_DS_REQUIREMENTS = "NAME=\"vdic-core\""

GRAPHICS = [
  KEYMAP  = es,
  type    = spice,
  listen  = 0.0.0.0]
[oneadmin@vdicone01 ~]$

I have tested with:
"NAME=\"vdic-core\""
"NAME=vdic-core"
"NAME=\"vdic-core-vm\""
"NAME=vdic-core-vm"

Hi!
Sorry to ask; that was my bet, as the names are so similar :frowning:

And when trying to instantiate that template, do you still get the same "It does not fulfill SCHED_REQUIREMENTS"? I can't understand how SCHED_DS_REQUIREMENTS can produce a SCHED_REQUIREMENTS error message.

Cheers!

Hi,

Forget it; in my last test it looks to have worked perfectly.

Sorry for the inconvenience.

Thanks a lot for your help.

Awesome!
Did you change anything, for the record?

I have done tons of tests, and now I cannot remember which changes I applied.

thanks a lot for your interest!