VM stuck in PENDING state

I have installed the latest OpenNebula, 4.14.2, on Debian Jessie in order to play around with the LXCoNe driver.

I have followed the instructions found in the LXCoNe Installation & Configuration Guide to build my playground.

In a couple of hours I had a two-host environment (frontend and node) up and running: one image in raw format stored in DS 1, one template with NETWORK=YES and a couple of custom VAR=VAL variables in its CONTEXT, and a NIC connected to a custom vnet bridged on br0.
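For reference, the template looked roughly like the sketch below; the image ID, vnet name and custom context variables are placeholders, and the CPU/MEMORY values simply match what the scheduler log further down reports for the VM.

# Rough sketch of the VM template (placeholder names and values)
CPU    = "0.2"
MEMORY = "256"

# the raw image stored in DS 1
DISK = [ IMAGE_ID = "1" ]

# custom vnet attached to br0
NIC = [ NETWORK = "br0-net" ]

CONTEXT = [
  NETWORK = "YES",
  VAR1    = "value1",
  VAR2    = "value2"
]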

Once I instantiate the template, the VM remains in the PENDING state forever, or at least for as long as I have the patience to wait.

$ onetemplate instantiate 1 --name djn
38
$ onevm list
    ID USER     GROUP    NAME            STAT UCPU    UMEM HOST             TIME
    37 oneadmin oneadmin djx             runn    0      0K yoda         3d 16h00
    38 oneadmin oneadmin djn             pend    0      0K yoda         0d 00h38
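(When a VM is stuck like this, the scheduler may record a hint in the VM's user template as SCHED_MESSAGE, so that is worth a quick look before digging into the logs:)

$ onevm show 38 | grep -i SCHED_MESSAGE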

I have configured the scheduler with LOG[debug_level=5] to inspect the situation further.
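That means setting the LOG attribute in /etc/one/sched.conf to something along these lines and restarting the OpenNebula services:

# /etc/one/sched.conf (excerpt)
LOG = [
  system      = "file",
  debug_level = 5
]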

$ cat /var/log/one/sched.log
Fri Dec  4 09:40:57 2015 [Z0][SCHED][D]: Getting scheduled actions information. Total time: 0.00s
Fri Dec  4 09:40:58 2015 [Z0][VM][D]: Pending/rescheduling VM and capacity requirements:
  ACTION       VM  CPU      Memory PCI   System DS  Image DS
------------------------------------------------------------
  DEPLOY       38   20      262144   0       13313  DS 1: 0
Fri Dec  4 09:40:58 2015 [Z0][HOST][D]: Discovered Hosts (enabled):
ID          : 0
CLUSTER_ID  : -1
MEM_USAGE   : 0
CPU_USAGE   : 0
MAX_MEM     : 2058684
MAX_CPU     : 100
FREE_DISK   : 0
RUNNING_VMS : 0
PUBLIC      : 0

 DSID         FREE_MB
------------------------------

    PCI ADDRESS    CLASS   VENDOR   DEVICE     VMID
-------------------------------------------------------

ID          : 2
CLUSTER_ID  : -1
MEM_USAGE   : 262144
CPU_USAGE   : 20
MAX_MEM     : 2058684
MAX_CPU     : 100
FREE_DISK   : 0
RUNNING_VMS : 1
PUBLIC      : 0

 DSID         FREE_MB
------------------------------

    PCI ADDRESS    CLASS   VENDOR   DEVICE     VMID
-------------------------------------------------------


Fri Dec  4 09:40:58 2015 [Z0][SCHED][D]: Getting VM and Host information. Total time: 0.01s
Fri Dec  4 09:40:58 2015 [Z0][SCHED][D]: Match Making statistics:
        Number of VMs:          1
        Total time:             0s
        Total Match time:       0.00s
        Total Ranking time:     0.00s
Fri Dec  4 09:40:58 2015 [Z0][SCHED][D]: Scheduling Results:
Virtual Machine: 38

        PRI     ID - HOSTS
        ------------------------
        0       0
        -1      2

        PRI     ID - DATASTORES
        ------------------------
        0       0



Fri Dec  4 09:40:58 2015 [Z0][SCHED][D]: Dispatching VMs to hosts:
        VMID    Host    System DS
        -------------------------

Fri Dec  4 09:40:58 2015 [Z0][SCHED][D]: Dispatching VMs to hosts. Total time: 0.00s

The VM deploys just fine with onevm deploy.

$ onevm deploy 38 2

After a couple of seconds:

$ onevm list

        ID USER     GROUP    NAME            STAT UCPU    UMEM HOST             TIME
        37 oneadmin oneadmin djx             runn    0      0K yoda         3d 16h00
        38 oneadmin oneadmin djn             runn    0      0K yoda         0d 00h38

$ lxc-attach -n one-38 -- ls -al /tmp/one_env
-rw-r--r-- 1 root root 3878 Dec  4 09:51 /tmp/one_env

Why does this happen? Is it related to LXC only? I will find that out soon; I haven't had the chance to test the KVM driver with 4.14.2 yet.

Thank you.

Following my LXCoNe quest, I have come to the conclusion that the problem is somehow related to the LXC drivers.

I have installed a KVM node, added it to OpenNebula, created a template and instantiated it. The scheduler deployed the VM right away.

I thought it would work if I added HYPERVISOR to the template. I thought wrong: the scheduler ignores that variable; maybe it is just for display purposes.
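(To double-check which hypervisor the monitoring drivers actually report for each node, the host template can be inspected with something like the commands below; the KVM host ID is just a placeholder for whatever ID that node got.)

$ onehost show 2 | grep -i HYPERVISOR
$ onehost show <kvm_host_id> | grep -i HYPERVISOR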

Adding SCHED_REQUIREMENTS = "HYPERVISOR=\"lxc\"" to the LXC templates makes the scheduler deploy the VM correctly.
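(For the record, one way to append the attribute to an existing template is something along these lines, assuming template ID 1 as in my case:)

$ echo 'SCHED_REQUIREMENTS = "HYPERVISOR=\"lxc\""' > lxc_req.tpl
$ onetemplate update 1 lxc_req.tpl --append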

Is this the normal behaviour? Thank you.