Qcow2 sparse files unexpectedly expanding to full size

I noticed that when I create a new image from an existing qcow2 file, the image (which is originally a sparse file and only takes up a fraction of its maximum size) is expanded to fill the total maximum size of the filesystem inside the qcow2 image (60GB in this case). It takes about 11 minutes to add the image, which surprised me since the frontend node has 8GB of RAM and 4 cores. Deploying a VM using this image also takes a long time. The file I am using to create the new image:

root@on-fe:/var/tmp# du -h 64GBase-1P-new.qcow2
2.1G 64GBase-1P-new.qcow2

Then after the file is added to the image datastore:

root@on-fe:/var/lib/one/datastores/102# du -h *
61G e2459bab2b727ec9385c6b349c56b817
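For reference, the registered copy can be inspected directly (the hash filename is the one shown by du above; qemu-img info reports the virtual size, the actual disk size, and whether the file is still qcow2 or was converted to raw):

root@on-fe:/var/lib/one/datastores/102# qemu-img info e2459bab2b727ec9385c6b349c56b817
root@on-fe:/var/lib/one/datastores/102# du -h --apparent-size e2459bab2b727ec9385c6b349c56b817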

When I run vanilla libvirt on a non-OpenNebula hypervisor, define a VM with hand-written .xml files, and attach a sparse qcow2 file as the disk image, this doesn't happen: the image file only grows as data is added to the filesystem, as expected. Why is this happening? Here are the details of the image datastore:

root@on-fe:/var/tmp# onedatastore show 102
DATASTORE 102 INFORMATION
ID : 102
NAME : Test-qcow2mode
USER : oneadmin
GROUP : oneadmin
CLUSTERS : 0
TYPE : IMAGE
DS_MAD : fs
TM_MAD : qcow2
BASE PATH : /var/lib/one//datastores/102
DISK_TYPE : FILE
STATE : READY

DATASTORE CAPACITY
TOTAL: : 194.9G
FREE: : 103.4G
USED: : 81.5G
LIMIT: : -

PERMISSIONS
OWNER : um-
GROUP : u--
OTHER : ---

DATASTORE TEMPLATE
ALLOW_ORPHANS="NO"
CLONE_TARGET="SYSTEM"
CLONE_TARGET_SSH="SYSTEM"
DISK_TYPE="FILE"
DISK_TYPE_SSH="FILE"
DRIVER="qcow2"
DS_MAD="fs"
LN_TARGET="NONE"
LN_TARGET_SSH="SYSTEM"
RESTRICTED_DIRS="/"
SAFE_DIRS="/var/tmp"
TM_MAD="qcow2"
TM_MAD_SYSTEM="ssh"
TYPE="IMAGE_DS"

IMAGES
7

This datastore (as well as the system datastore) is sitting on an NFS share so that it’ll be usable for live migration of VMs. The system datastore I’m using is this:

root@on-fe:/var/tmp# onedatastore show 100
DATASTORE 100 INFORMATION
ID : 100
NAME : NEW_system
USER : oneadmin
GROUP : oneadmin
CLUSTERS : 0
TYPE : SYSTEM
DS_MAD : -
TM_MAD : shared
BASE PATH : /var/lib/one//datastores/100
DISK_TYPE : FILE
STATE : READY

DATASTORE CAPACITY
TOTAL: : 194.9G
FREE: : 129.1G
USED: : 55.8G
LIMIT: : -

PERMISSIONS
OWNER : um-
GROUP : u--
OTHER : ---

DATASTORE TEMPLATE
ALLOW_ORPHANS="NO"
DISK_TYPE="FILE"
DS_MIGRATE="YES"
RESTRICTED_DIRS="/"
SAFE_DIRS="/var/tmp"
SHARED="YES"
TM_MAD="shared"
TYPE="SYSTEM_DS"

IMAGES

The goal is to thin provision my images: I want to be able to create qcow2 images with large maximum filesystem sizes and only pay for the storage I actually use inside those images. Is this possible in OpenNebula?
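To illustrate what I mean by thin provisioning, with plain qemu-img (the filename is just an example): a freshly created 60G qcow2 image only occupies a few hundred KB on disk, and qemu-img info reports the virtual size and the actual disk size separately.

qemu-img create -f qcow2 thin-test.qcow2 60G
qemu-img info thin-test.qcow2
du -h thin-test.qcow2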

Tom

Hi @tomammon,

You can compress qcow2 formatted images with: qemu-img convert -O qcow2 -c <FILENAME> <OUTPUT_FILENAME>
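For example, using the file from your post (the output filename is just a placeholder); running qemu-img info afterwards shows the resulting virtual and disk sizes:

qemu-img convert -O qcow2 -c /var/tmp/64GBase-1P-new.qcow2 /var/tmp/64GBase-1P-compressed.qcow2
qemu-img info /var/tmp/64GBase-1P-compressed.qcow2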

Cheers.

Also, the upcoming OpenNebula 6.0 will include this improvement.

Hello @tomammon, there is something wrong with your configuration, because this is not normal. I was using a qcow2 shared DS in the past, and it worked as thin volumes.

From what I understand, you are uploading your thin base image via the frontend? Did you select image type == qcow2? It looks like you selected none or raw.
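For example, the image can be registered from the CLI with the qcow2 driver set explicitly (a minimal sketch: the image name is just an example, the path and datastore ID are taken from your post, and DRIVER is the image template attribute that selects qcow2):

cat > base-image.tpl <<'EOF'
NAME   = "64GBase-1P"
TYPE   = "OS"
PATH   = "/var/tmp/64GBase-1P-new.qcow2"
DRIVER = "qcow2"
EOF
oneimage create -d 102 base-image.tpl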