CPU quotas and stopped/undeployed VMs [Solved]

Hi all,

I set up a CPU quota for each group in my installation.
I realised that when a VM is stopped or undeployed (that is, when the resources of the worker node are actually freed), the number of CPUs specified in the template is not deducted from the quota counter. In fact, a user/group can use up its entire quota with stopped/undeployed VMs alone. Is it possible to change that? Where is this feature implemented? Is it in the ONE core or in a Ruby script?


Hi Matteo,

Currently it is not possible to change that behavior; the logic that tracks
quotas lives in oned. The rationale behind this is to keep quota tracking
simple: when a VM enters the system, all of its resources are accounted
against the quota at once. CPU/memory could be deducted in the
STOPPED/UNDEPLOYED (or even HOLD) states, but not IPs, disk images and so on…
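For reference, this is the kind of quota being discussed: a per-group quota template with VM limits, applied with the onegroup CLI. A sketch (the group ID and limits are made-up examples; check `onegroup quota --help` on your version):

```shell
# Sketch: set a combined VM quota for group 100.
# The quota is charged as soon as a VM enters the system, in any state.
cat > group_quota.txt <<'EOF'
VM = [
  VMS    = "10",    # max number of VMs
  CPU    = "20",    # total CPU, as specified in the VM templates
  MEMORY = "16384"  # total memory in MB
]
EOF
onegroup quota 100 group_quota.txt
```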

We are planning to integrate generic quotas, and we will probably follow the
same path. Letting you configure which states increase/decrease each quota
would be more flexible, but probably not very useful…

Thanks for your feedback!



Hi Ruben,

fair enough, thanks for the explanation. I can understand the rationale behind it.


Hello, is there anything new on this topic?

One of the intended use-cases for my OpenNebula cloud is to have preinstalled VMs for various operating systems (in my previous virtualization system, I had for example the last three versions of CentOS, about 10 versions of Fedora, FreeBSD, etc.). Those VMs will be powered down most of the time, and will get booted only when I want to test something on a particular version of a given distribution. I would like the powered down VMs to be accounted as consuming no memory, no CPU, no volatile disk space, etc.

I can surely use them as templates, and instantiate them as needed, but templates do not have persistent IP addresses and MAC addresses. I want to have those VMs in /etc/hosts (or DNS), and to be able to boot the VM and run “ssh fedora18.virt.example.com”, without adjusting the IP address/name/ssh_known_hosts every time the VM is instantiated.


In this case (you want to reserve addresses for a set of well-known VMs):

1.- Create a VNet reservation; you can ask for specific IPs
2.- The reservation works like a private namespace: no one else can get IPs
from there
3.- Update your well-known VMs to point to the reservation and set the IP
you want for each of them
4.- Let contextualization configure everything else when the VM is booted
(DNS name, NIC configuration, and so on)

Note that this way you’ll have reproducible environments, always booted
from the same state.
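Assuming the standard onevnet CLI, the steps above might look roughly like this (VNet ID, names and addresses are made up; flag names may differ between versions, see `onevnet reserve --help`):

```shell
# 1-2. Reserve 10 addresses from VNet 0 into a new network "well-known",
#      starting at a specific IP. Only the owner can lease from it.
onevnet reserve 0 --name well-known --size 10 --address 192.168.0.100

# 3. Point each well-known VM at the reservation with a fixed IP,
#    e.g. in its template:
#      NIC = [ NETWORK = "well-known", IP = "192.168.0.101" ]
```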


OK, this works, thanks!

A minor issue is that the starting address of the leases range is given as IPv4 even for dual-stack (6+4) VNETs, but as far as I can tell the corresponding IPv6 address is calculated exactly as I want it to be, so no problem on my side.
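For anyone else checking the derivation: as far as I can tell the IPv6 address is simply the VNet prefix plus the modified EUI-64 interface ID derived from the NIC's MAC. A quick bash sketch (MAC and prefix are made-up examples):

```shell
# Modified EUI-64: flip the universal/local bit of the first MAC octet,
# then insert ff:fe between the two MAC halves.
mac="02:00:c0:a8:00:01"
prefix="2001:db8::"
IFS=: read -r a b c d e f <<< "$mac"
a=$(printf '%02x' $(( 0x$a ^ 0x02 )))   # flip the universal/local bit
result="${prefix}${a}${b}:${c}ff:fe${d}:${e}${f}"
echo "$result"
```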