Issued error "Not enough space in datastore" although there actually is enough space

Hi dear forum members,

even though I have enough space left in the image datastore, OpenNebula keeps telling me that there is not enough space left. Please see the attached picture showing that I have enough space:

I go to ‘Storage > Images > +’ and enter the path of the qcow2 file on the OpenNebula server (i.e. the frontend) in order to create an image. (I’ve also already tried an upload, and the command-line tool oneimage, to create the image.)

I am using CentOS on the frontend, which itself runs in a VMware VM on my Windows desktop.

Thank you

Hi,
you could check /var/log/one/oned.log to see if there’s more info on that error. Have you tried to create, from Storage -> Images, a generic storage datablock, selecting “Empty disk image” and for example 10 MB, just to test that the image datastore works fine?
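In case it helps, here is a quick way to filter the relevant lines out of oned.log. The log excerpt below is hypothetical, just to show what the filter matches; run the same grep against the real /var/log/one/oned.log:

```shell
# Hypothetical oned.log excerpt, only to demonstrate what the filter matches.
cat > /tmp/oned_sample.log <<'EOF'
Mon Jan 23 15:12:50 2017 [Z0][ReM][E]: Req:4128 UID:0 ImageAllocate result
FAILURE [ImageAllocate] Not enough space in datastore
EOF

# Same filter works on the real log: /var/log/one/oned.log
matches=$(grep -cE 'ImageAllocate|Not enough space' /tmp/oned_sample.log)
echo "matching lines: $matches"
```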

Cheers!


Hey there,

unfortunately /var/log/one/oned.log gives me no further information. I’ve followed your tip with the empty disk image, and additionally tried to import an existing plain file into the file datastore. That worked. Both were about 2 GB, which is larger than my intended OS image. This is really weird. Could OpenNebula or KVM have a problem with the fact that I am trying to run a VM inside another VM? I mean, I am running neither the frontend nor the host on a dedicated physical machine…

thx

Hi!
I’d say there’s no problem running the frontend in a VM; that’s quite common indeed.

So when you try to upload a qcow2 file to the image datastore, OpenNebula complains about not enough space. What size is the qcow2 file you’re uploading? And what size is the filesystem inside the qcow2 file? I mean, since it’s a qcow2, the file size will be smaller than the real filesystem inside, e.g. several gigs.

I’m only guessing, sorry; it could be that the qcow2 is big enough that it cannot be stored in a temp file when uploading, or that some other check fails. We’ll manage to find out what’s going on.

Cheers!

Hi,

we have faced a similar problem: our image datastore shows 14 GB of free space and we could not upload a 3.3 GB image. The disk had those 14 GB of space:

Some logs:
Image info. The image weighs 3.1 GB (although the defined disk is 30 GB):

qemu-img info myImage.qcow2
image: myImage.qcow2
file format: qcow2
virtual size: 30G (32212254720 bytes)
disk size: 3.1G
cluster_size: 65536
Format specific information:
compat: 0.10
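For reference, both figures can be pulled out of that output with standard tools; the snippet below just parses the text pasted above, offline:

```shell
# Sample: the qemu-img info output pasted above.
info='image: myImage.qcow2
file format: qcow2
virtual size: 30G (32212254720 bytes)
disk size: 3.1G'

# Bytes from the "virtual size" line (what a size check based on
# qemu-img info would see)...
virtual_bytes=$(printf '%s\n' "$info" | sed -n 's/.*(\([0-9]*\) bytes).*/\1/p')
# ...versus the reported on-disk size.
disk_size=$(printf '%s\n' "$info" | awk -F': ' '/^disk size/ {print $2}')

echo "virtual: $virtual_bytes bytes, on disk: $disk_size"
```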

Datastore info

onedatastore show 1
DATASTORE 1 INFORMATION
ID : 1
NAME : default
USER : oneadmin
GROUP : oneadmin
CLUSTERS : 0
TYPE : IMAGE
DS_MAD : fs
TM_MAD : ssh
BASE PATH : /var/lib/one//datastores/1
DISK_TYPE : FILE
STATE : READY

DATASTORE CAPACITY
TOTAL : 43.6G
FREE : 14.7G
USED : 28.9G
LIMIT : -

PERMISSIONS
OWNER : um-
GROUP : u--
OTHER : ---

DATASTORE TEMPLATE
CLONE_TARGET="SYSTEM"
DISK_TYPE="FILE"
DS_MAD="fs"
LN_TARGET="SYSTEM"
RESTRICTED_DIRS="/"
SAFE_DIRS="/sunstone_uploads"
TM_MAD="ssh"
TYPE="IMAGE_DS"

Oned.log

Mon Jan 23 15:12:50 2017 [Z0][ReM][D]: Req:4128 UID:0 ImageAllocate invoked, "NAME=\"myImage…\"", 1
Mon Jan 23 15:12:50 2017 [Z0][ReM][E]: Req:4128 UID:0 ImageAllocate result
FAILURE [ImageAllocate] Not enough space in datastore

Which size is ONE checking to calculate the required space? The actual image size, and not the virtual disk size, right?

We suspect the problem is that OpenNebula checks the virtual size instead of the actual on-disk size of the image.

Could someone please confirm which size of the image is used when the space availability check runs?

On the other hand, is there any way (setting some attribute on the datastore or something) to disable the size checking of images when uploading them?
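On the global-switch question: oned.conf exposes a datastore capacity check toggle; please check the docs for your OpenNebula version before relying on it, since disabling it affects all datastores:

```
# /etc/one/oned.conf (fragment)
# "yes" (default) enforces the capacity check on image allocation;
# "no" skips it for all datastores.
DATASTORE_CAPACITY_CHECK = "yes"
```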

Thanks!

Answering myself :)

This is the ‘problematic’ line in our case:

When checking the required space in the datastore (of type FS), the datastore MAD for the fs type executes this to compute the size:

SIZE=$($QEMU_IMG info "$1" | sed -n 's/.*(\([0-9]*\) bytes).*/\1/p')

So the size is read from the virtual disk size instead of the actual disk size (I pasted the output of qemu-img info earlier in this thread).

Is there any reason a qcow2 image should be checked against its virtual disk size instead of its actual size when uploading?

We fixed the issue by changing that line to SIZE=$(file_size "$1").
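For anyone reproducing this: file_size here presumably returns the file’s on-disk byte count instead of asking qemu-img for the virtual size. A minimal sketch of that behavior (the file_size below is my own stand-in, not OpenNebula’s real helper):

```shell
# Stand-in for the driver's file_size helper: return the file's byte count
# (GNU stat first, BSD stat as a fallback).
file_size() {
    stat -c %s "$1" 2>/dev/null || stat -f %z "$1"
}

tmp=$(mktemp)
head -c 1048576 /dev/zero > "$tmp"   # 1 MiB file standing in for a qcow2

size=$(file_size "$tmp")
echo "on-disk size: $size bytes"     # the size the patched check would use
rm -f "$tmp"
```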

Shouldn’t the qcow2 case be treated the same as the default one?

All, I’ve seen this issue before, and it turned out to actually be a VMware issue. Please run the following command against your datastore on an ESXi host.

vmkfstools -P -v 10 /vmfs/volumes/vm_datastore_1

Below is an example of one of my datastores when I was running into the same issue:

VMFS-5.60 file system spanning 1 partitions.
File system label (if any): vm_datastore_1
Mode: public ATS-only
Capacity 10994847842304 (10485504 file blocks * 1048576), 3929556385792 (3747517 blocks) avail, max file size     69201586814976
Volume Creation Time: Sat May 28 07:06:50 2016
Files (max/free): 130000/125655
Ptr Blocks (max/free): 64512/36
Sub Blocks (max/free): 32000/30933
Secondary Ptr Blocks (max/free): 256/256
File Blocks (overcommit/used/overcommit %): 0/6737987/0
Ptr Blocks  (overcommit/used/overcommit %): 0/64476/0
Sub Blocks  (overcommit/used/overcommit %): 0/1067/0
Volume Metadata size: 857374720
UUID: 5749438a-e0401ea0-8599-ecf4bbc519f8
Logical device: 5749437c-41a3be96-de2b-ecf4bbc519f8
Partitions spanned (on "lvm"):
    naa.624a9370a2aedf261ad6c6180007bcf4:1
Is Native Snapshot Capable: YES
OBJLIB-LIB: ObjLib cleanup done.

Pay close attention to the line

Ptr Blocks (max/free): 64512/36

This line gives you the available pointer blocks on the VMFS datastore. They correlate to the actual maximum amount, in GB, that the VMFS datastore can address. Oversubscription consumes these Ptr blocks, so a system that has actually consumed 20 TB of storage but is overcommitted to 120 TB will hit this issue.

Please verify that this is not your problem. It took us a week to get confirmation from VMware that this was definitely an issue on their end, and there is nothing they expose from the datastore that would point you to the cause of this issue.