VM stuck in LCM_INIT state

Hi,
I'm having some problems with my OpenNebula system. It works fine as long as I deploy no more than 3 VMs, but when I boot up a 4th VM it gets stuck in the LCM_INIT (Pending) state. I suspect a problem with the datastores, but I haven't been able to fix it yet. The sched.log says:

Wed Jun 3 14:32:58 2015 [VM][I]: Dispatching VM 157 to host 4 and datastore 0
Mon Jun 15 14:14:17 2015 [VM][D]: Pending/rescheduling VM and capacity requirements:
  VM    CPU   Memory   System DS   Image DS

  158   15    524288   11264       DS 100: 0

Mon Jun 15 14:14:17 2015 [HOST][D]: Discovered Hosts (enabled):
4
Mon Jun 15 14:14:17 2015 [SCHED][D]: VM 158: Datastore 0 filtered out. Not enough capacity.
Mon Jun 15 14:14:17 2015 [SCHED][I]: Scheduling Results:
Virtual Machine: 158

    PRI     ID - HOSTS
    ------------------------
    -1      4
    PRI     ID - DATASTORES
    ------------------------
    0       101

I am using OpenNebula 4.4.0. Can someone explain to me what is wrong here? The mentioned datastore is pretty full, but only 3 VMs are running on it, each with the Ubuntu Server 12.04 image from the marketplace, which takes 11 GB of space.

Hi,

What could be the case is that the space is not actually in use (in the real world), but all available space is already assigned.
Let's say your datastore can hold 100 GB and you have 3 images, each 40 GB in size. Because they are not filled up yet, they only take about 3 GB each on disk, and will grow to at most 40 GB if needed. But if you deploy 2 VMs with these images, OpenNebula sees that as a reservation of 80 GB, so deploying a third VM would mean 120 GB is needed in total, and it won't allow you to start it.
Can you check whether the reserved maximum size of a disk (rather than the real-world size) is the problem?
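The scheduler's reasoning above can be sketched as a quick shell check. The numbers are the hypothetical ones from the example (100 GB datastore, 40 GB virtual size per image), not anything read from a real system:

```shell
#!/bin/sh
# Hypothetical numbers from the example above: a 100 GB datastore and
# images with a *virtual* (reserved) size of 40 GB each. The scheduler
# counts reservations, not real-world usage.
DS_TOTAL=100
IMG_VIRTUAL=40

for n in 1 2 3; do
  reserved=$((n * IMG_VIRTUAL))
  if [ "$reserved" -le "$DS_TOTAL" ]; then
    echo "VM $n: reservation ${reserved} GB <= ${DS_TOTAL} GB -> allowed"
  else
    echo "VM $n: reservation ${reserved} GB > ${DS_TOTAL} GB -> filtered out"
  fi
done
```

The third VM is refused even though only about 9 GB is really in use, because 3 × 40 GB of reservations exceed the 100 GB total.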


Thanks for your response. What you said is pretty much what I also suspected from this error. I read that non-persistent images take additional disk space, some kind of logical volume that is reserved for the machine, which is basically what you described. But how can I check the reserved size of a disk? I don't know where to look it up!?

Non-persistence is something else entirely: if you want to boot the perfect image 10 times from 1 image, you use non-persistent images. That means the disk can be written to, and the VM can be rebooted and powered off, but if you destroy the VM, your original boot image will not be changed (non-persistent). If instead you want to use a VM and be able to change the original image and keep the changes no matter what, you use persistent images.

When you create a qcow2 image, you define that it's, say, 20 GB in size, but with no OS installed it will take up less than 1 MB on your disk. After installing the OS it will grow to around 1.5 GB, and it keeps growing as you add files, up to a maximum of 20 GB.
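The same "large virtual size, tiny real footprint" behaviour can be demonstrated with a plain sparse file, no qemu needed (a rough sketch; `stat -c` and `du -k` as used here are GNU/Linux tools):

```shell
#!/bin/sh
# Sketch of thin provisioning with a sparse file: same idea as a qcow2
# image, which advertises a large virtual size but only allocates blocks
# as data is actually written.
tmpdir=$(mktemp -d)
truncate -s 20G "$tmpdir/disk.img"            # virtual size: 20 GB

virtual=$(stat -c %s "$tmpdir/disk.img")      # apparent size in bytes
real_kb=$(du -k "$tmpdir/disk.img" | cut -f1) # blocks actually allocated, in KB

echo "virtual: ${virtual} bytes, real: ${real_kb} KB"
rm -rf "$tmpdir"
```

The virtual size reports the full 20 GB, while `du` shows almost nothing allocated, which is exactly the gap between what OpenNebula reserves and what the disk really holds.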

Compare the total storage shown in Sunstone when you click on the overview for "Images" under "Virtual Resources" with the real-world usage of your "Datastores" under "Capacity". In my case, all images together show 320 GB, while my datastores show a total of 178 GB in use. I'm not 100% sure the difference in size is caused purely by growing images, so anyone, please correct me if I'm talking out of my *** :slight_smile:
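On the command line, the same comparison can be done by totalling the SIZE column of `oneimage list`. The sample output below is made up for illustration, and the column position of SIZE may differ between versions, so adjust the awk field number to match your output:

```shell
#!/bin/sh
# Sum the virtual image sizes (SIZE column, in MB) from oneimage-list-style
# output. The sample here is fabricated: three 11264 MB Ubuntu images,
# matching the numbers discussed in this thread.
sample='  ID USER     GROUP    NAME         DATASTORE     SIZE TYPE
   1 oneadmin oneadmin ubuntu-12.04 default      11264 OS
   2 oneadmin oneadmin ubuntu-12.04 default      11264 OS
   3 oneadmin oneadmin ubuntu-12.04 default      11264 OS'

# Skip the header row, sum column 6 (SIZE). On a live system you would
# pipe the real command instead:  oneimage list | awk 'NR > 1 { ... }'
total_mb=$(printf '%s\n' "$sample" | awk 'NR > 1 { sum += $6 } END { print sum }')
echo "total virtual size: ${total_mb} MB"
```

Comparing that total against the USED value of `onedatastore show` gives the virtual-vs-real gap described above.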


That’s totally accurate :slight_smile:

Note that we use the virtual image sizes to compute quotas. When you
create an image of 40 GB you are actually making a storage reservation of
40 GB.

The next release will probably also include information about the real size…

Thank you once again for your thoughts. My system datastore has an overall capacity of 100 GB. When I looked it up, 3 VMs were running and 96 GB of disk space were used. When I deleted the VMs, 63 GB still remained in use. That really confused me. A VM takes 11 GB of space, which I looked up following your instructions, so the space needed to deploy a VM is 11 GB as mentioned (96 GB - 3 x 11 GB = 63 GB). The confusing part is the remaining used disk space… Is this normal, or can I clean up the datastore in some way?
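The arithmetic in that paragraph, spelled out as a quick check (numbers taken straight from the post above):

```shell
#!/bin/sh
# Numbers from the post: 96 GB used with 3 VMs running, 11 GB per VM.
USED_WITH_VMS=96
VMS=3
PER_VM=11

# What should remain after deleting all three VMs, if each VM accounts
# for exactly its 11 GB image.
leftover=$((USED_WITH_VMS - VMS * PER_VM))
echo "expected usage after deleting the VMs: ${leftover} GB"
# prints "expected usage after deleting the VMs: 63 GB"
```

So the 63 GB is consistent with the per-VM maths; the open question is what those 63 GB actually are.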

Is there something else stealing your space? I assume you're using a local disk as the datastore?
If you fill up the partition with "normal" files (not virtual disks), that also eats into the available space you see in your datastore. Maybe that's what happened?
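One way to hunt for such stray files is to rank the biggest entries under the datastore path. Sketched here on a throwaway directory with fabricated files; on a real system you would point DS_PATH at /var/lib/one/datastores/0 instead:

```shell
#!/bin/sh
# Simulate a datastore directory containing a small VM disk and a large
# unrelated file (both fabricated for this demo).
DS_PATH=$(mktemp -d)
dd if=/dev/zero of="$DS_PATH/stray-backup.tar" bs=1M count=5 status=none
dd if=/dev/zero of="$DS_PATH/disk.0" bs=1M count=1 status=none

# Rank entries by allocated size, largest first; the top line is the
# most likely space eater.
biggest=$(du -a "$DS_PATH"/* | sort -rn | head -n 1 | cut -f2)
echo "largest entry: $biggest"
rm -rf "$DS_PATH"
```

If the top entries are not VM disks, something other than OpenNebula is consuming the datastore's partition.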

Yes, I am using a local disk as a datastore, but it's not full yet. If I run df -h I get:

Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       217G  122G   85G  60% /
udev            7.8G  4.0K  7.8G   1% /dev
tmpfs           3.2G  844K  3.2G   1% /run
none            5.0M  4.0K  5.0M   1% /run/lock
none            7.8G  152K  7.8G   1% /run/shm

As you can see, 85G are still available. When I check my datastore via onedatastore show, I get the following information:

oneadmin@FrontEnd:~$ onedatastore show 0
DATASTORE 0 INFORMATION
ID : 0
NAME : system
USER : oneadmin
GROUP : oneadmin
CLUSTER : -
TYPE : SYSTEM
DS_MAD : -
TM_MAD : shared
BASE PATH : /var/lib/one//datastores/0
DISK_TYPE : FILE

DATASTORE CAPACITY
TOTAL: : 103G
FREE: : 39.3G
USED: : 55.2G
LIMIT: : -

PERMISSIONS
OWNER : um-
GROUP : u--
OTHER : ---

DATASTORE TEMPLATE
SHARED="YES"
TM_MAD="shared"
TYPE="SYSTEM_DS"

The mentioned path (/var/lib/one/datastores/0) is completely empty; there are no hidden files either. Is there something wrong with my setup?
Highly appreciating your support.

EDIT: Checked the wrong machine =( I looked at my frontend first. When I checked the host machine afterwards, there were files in the datastore path. I guess there is a problem with the shared option, because normally the folder should be identical on both machines. Anyway, thanks for your support!
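The mismatch described in that edit can be illustrated with a small simulation. Two local temp directories stand in for the frontend's and the hypervisor's datastore paths; with a working shared mount (what TM_MAD=shared assumes) both would be the same exported filesystem and would show the same files:

```shell
#!/bin/sh
# Simulate the broken "shared" setup from this thread: two independent
# directories pretending to be the same datastore path on two machines.
frontend=$(mktemp -d)   # stands in for /var/lib/one/datastores/0 on the frontend
host=$(mktemp -d)       # stands in for the same path on the hypervisor

touch "$host/disk.0"    # the hypervisor writes a VM disk

# With a real shared mount the frontend would see disk.0 too; with two
# independent local directories (this poster's situation) it does not.
if [ -e "$frontend/disk.0" ]; then shared=yes; else shared=no; fi
echo "shared: $shared"
rm -rf "$frontend" "$host"
```

Seeing files on the host but an empty directory on the frontend, as in the edit above, is exactly this "shared: no" case: the path is configured as shared but no shared filesystem is actually mounted there.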