Hello,
Is it possible to enforce datastore capacity checks for disk resizes or, alternatively, thick provision qcow2 or raw disks with Local Datastores?
The problem I’m trying to solve is datastore over-commitment. These are my steps:
- Create a 10GB qcow2 disk in an image datastore
- Provision a VM
- Attach said 10GB qcow2 disk to the VM
- Resize the 10GB qcow2 disk to 500GB, far exceeding the available disk space in my Local datastore
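In CLI terms, the reproduction looks roughly like this (a sketch: the image/VM names, the disk ID, and the datastore are placeholders, and I'm using `--driver qcow2` on the assumption that it maps to the image's DRIVER attribute):

```
# 1. Create a 10GB qcow2 datablock image (sizes in MB)
oneimage create --name test-disk --type datablock --driver qcow2 \
    --size 10240 --datastore default

# 2./3. Provision a VM and attach the image
onetemplate instantiate mytemplate --name testvm
onevm disk-attach testvm --image test-disk

# 4. Resize the attached disk (disk ID 1 here) to 500GB -- this succeeds
#    even though the local datastore has far less free space
onevm disk-resize testvm 1 512000
```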
I would have expected OpenNebula to reject the resize, since DATASTORE_CAPACITY_CHECK=yes is set, but the capacity check appears to apply only at initial provisioning.
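For reference, this is the relevant attribute in my datastore template (as shown under TEMPLATE in `onedatastore show`; the other attributes are omitted):

```
DATASTORE_CAPACITY_CHECK = "YES"
```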
I've tested this with raw images as well: the guest OS reports the virtual disk size as 500GB, while df on the host still shows only the actual allocated usage.
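What I believe is happening underneath, sketched with a plain sparse file standing in for a raw image (the filename is just for illustration): the resize only grows the apparent/virtual size, while blocks are allocated on the datastore lazily as the guest writes, which is why df lags behind.

```shell
# "Resize" a disk to 1 GiB of virtual size without allocating any blocks.
truncate -s 1G sparse-disk.raw

virtual=$(stat -c %s sparse-disk.raw)           # apparent (virtual) size in bytes
allocated=$(du -B1 sparse-disk.raw | cut -f1)   # bytes actually allocated on disk

# virtual is 1073741824; allocated stays near zero until the guest writes.
echo "virtual=${virtual} allocated=${allocated}"
rm -f sparse-disk.raw
```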
This is a problem because any number of users might create small VMs and later resize their disks; once the underlying datastore runs out of space, the affected VMs end up suspended.
So, my question is: is it possible to disable disk over-commitment on Local Datastores? Either raw or qcow2 would be fine.
My setup is OpenNebula CE 6.4.0.1, KVM hypervisor with (predictable) Local Datastores.
Thanks!