RAW vs QCOW2 images; VMs fail

Hey there,

I'm having a problem with my OpenNebula images, so I'm opening a new topic; I hope you don't mind. Here is the situation.
I originally wanted to use a qcow2 image for a personal image, but I ran into problems with KVM: it didn't recognize the max size as the actual size of the image. So I switched to RAW and continued until, as far as I can tell, the image was ready for OpenNebula. When testing it, though, I hit a space problem: the RAW image occupied too much space and was exhausting the datastore on the frontend, and for now I cannot add more space to it. So I decided to switch back to qcow2 to save some space. I then created the image and the template with oneimage and onetemplate; the image shows rdy and the template was created without problems, but it was during onetemplate instantiate that I ran into trouble. Here is a summary of the log:

Thu Apr 16 15:35:19 2015 [Z0][ReM][D]: Req:5904 UID:0 VirtualMachinePoolInfo invoked , -2, -1, -1, -1
Thu Apr 16 15:35:19 2015 [Z0][ReM][D]: Req:5904 UID:0 VirtualMachinePoolInfo result SUCCESS, "<VM_POOL>4</…"
Thu Apr 16 15:35:20 2015 [Z0][ReM][D]: Req:6256 UID:0 VirtualMachinePoolInfo invoked , -2, -1, -1, -1
Thu Apr 16 15:35:20 2015 [Z0][ReM][D]: Req:6256 UID:0 VirtualMachinePoolInfo result SUCCESS, "<VM_POOL>4</…"
Thu Apr 16 15:35:26 2015 [Z0][TM][D]: Message received: TRANSFER SUCCESS 8 -

Thu Apr 16 15:35:26 2015 [Z0][VMM][D]: Message received: LOG I 8 ExitCode: 0

Thu Apr 16 15:35:26 2015 [Z0][VMM][D]: Message received: LOG I 8 Successfully execute network driver operation: pre.

Thu Apr 16 15:35:28 2015 [Z0][VMM][D]: Message received: LOG I 8 Command execution fail: cat << EOT | /var/tmp/one/vmm/kvm/deploy '/var/lib/one//datastores/100/8/deployment.0' '172.18.10.8' 8 172.18.10.8

Thu Apr 16 15:35:28 2015 [Z0][VMM][D]: Message received: LOG I 8 error: Failed to create domain from /var/lib/one//datastores/100/8/deployment.0

Thu Apr 16 15:35:28 2015 [Z0][VMM][D]: Message received: LOG I 8 error: internal error: process exited while connecting to monitor: qemu-system-x86_64: -drive file=/var/lib/one//datastores/100/8/disk.0,if=none,id=drive-ide0-0-0,format=qcow2,cache=none: could not open disk image /var/lib/one//datastores/100/8/disk.0: Could not open '/var/lib/one//datastores/100/8/disk.0': Is a directory

So the VM shows FAIL. Earlier I had discovered that I needed to change the driver used by the image from the default raw to qcow2; I changed it, hoping that was the only problem, but the error persisted. Looking into it, the problem seems to be in the qemu-system process: it says disk.0 is a directory. I checked on the host, and it is indeed a directory, with the qcow2 image in question inside it. So I'm thinking it might be a problem with the parameters and how they are passed. One important detail: the datastore I'm currently using is an ssh one (I was originally planning to stay with raw and preferred the I/O it offers), but I saw the documentation has a specific datastore type for qcow2 and I'm not sure how to deploy it.

In short, I need to know whether there's a fix for this and what steps I should follow, if any. Thanks in advance for all the support. Regards!

Hi.

Sometimes I have the same problem, but only when converting from OVA to qcow2 using qemu-img.

If you run "file image.qcow2", does the output say the file is qcow2? Most of the time the problem is that the conversion went wrong and the file isn't actually qcow2.
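If `file` output is ambiguous, the qcow2 magic bytes can also be checked directly. A minimal sketch (the path and fabricated header are just for illustration; with a real image you would point `$img` at your .qcow2 file):

```shell
# Every qcow2 file starts with the 3-byte magic "QFI" followed by 0xfb.
# We fabricate only the header here to demonstrate the check itself.
img=/tmp/demo_header.bin
printf 'QFI\373' > "$img"

if [ "$(head -c 3 "$img")" = "QFI" ]; then
  echo "magic looks like qcow2"
else
  echo "not a qcow2 header"
fi
```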

Hey Alejandro

I ran file, and this is what I actually get:

hadoop_namenode.qcow2: QEMU QCOW Image (unknown version)

Which made me think, so I created a new empty image from scratch just to get a reference for what I should see, and here is the result:

:~$ file test.qcow2
test.qcow2: QEMU QCOW Image (unknown version)

So… I'm guessing the qcow2 image is correct and something else is off. Thanks for the reply!

Hmmm, how do you create the qcow2?

Because mine looks like this:

file centos7.qcow2
centos7.qcow2: Qemu Image, Format: Qcow , Version: 2

Sometimes when I create an image the result is actually a qcow3, and I need to set the compat option.

for example.

$ qemu-img create -f qcow2 sample1.qcow2 1G
Formatting 'sample1.qcow2', fmt=qcow2 size=1073741824 encryption=off cluster_size=65536 lazy_refcounts=off
$ file sample1.qcow2
sample1.qcow2: Qemu Image, Format: Qcow (v3), 1073741824 bytes

but using compat option

$ qemu-img create -f qcow2 -o compat=0.10 sample2.qcow2 1G
Formatting 'sample2.qcow2', fmt=qcow2 size=1073741824 compat='0.10' encryption=off cluster_size=65536 lazy_refcounts=off
$ file sample2.qcow2
sample2.qcow2: Qemu Image, Format: Qcow , Version: 2

Maybe the compat option can help you get a good volume.

Thank you for the fast reply, Alejandro.

This is how I created mine for testing:

qemu-img create -f qcow2 test.qcow2 5G

And that's what I showed you with file. But I tried your suggestion, and here's the result:

:~$ qemu-img create -f qcow2 -o compat=0.10 test.qcow2 5G
Formatting 'test.qcow2', fmt=qcow2 size=5368709120 compat='0.10' encryption=off cluster_size=65536 lazy_refcounts=off
:~$ file test.qcow2
test.qcow2: QEMU QCOW Image (v2), 5368709120 bytes

So yeah, I'm guessing that solves the mystery of the qcow2 version. I will make the necessary changes and update if I run into problems again.
Thank you so much!

1 Like

Hello There!

So, sadly, that wasn't it, as it still shows this error when I instantiate the template:

Fri Apr 17 15:42:21 2015 [Z0][InM][D]: Host 172.18.10.8 (1) successfully monitored.
Fri Apr 17 15:42:23 2015 [Z0][VMM][D]: Message received: LOG I 10 Command execution fail: cat << EOT | /var/tmp/one/vmm/kvm/deploy '/var/lib/one//datastores/100/10/deployment.0' '172.18.10.8' 10 172.18.10.8

Fri Apr 17 15:42:23 2015 [Z0][VMM][D]: Message received: LOG I 10 error: Failed to create domain from /var/lib/one//datastores/100/10/deployment.0

Fri Apr 17 15:42:23 2015 [Z0][VMM][D]: Message received: LOG I 10 error: internal error: process exited while connecting to monitor: qemu-system-x86_64: -drive file=/var/lib/one//datastores/100/10/disk.0,if=none,id=drive-ide0-0-0,format=qcow2,cache=none: could not open disk image /var/lib/one//datastores/100/10/disk.0: Could not open '/var/lib/one//datastores/100/10/disk.0': Is a directory

I tried instantiating one that I have in RAW format (thinking the qcow2 might be the one causing the trouble), and SUCCESS, it actually runs… I don't know what I'm missing. I installed the .deb provided by OpenNebula for these images; maybe there is something else I need to install or prepare for these images to work with OpenNebula?
Thank you in advance, and sorry for causing so much trouble :frowning:

Hello there!

Soooo, over the weekend I have been checking back and forth with my images. I even created new raw images (smaller ones, with and without context), and I downloaded the Ubuntu image from the marketplace (which, it turns out, is in qcow2), and I still get exactly the same error. One thing I haven't tried, however, is creating the image from a plain image file instead of a tar.gz (which, I now suspect, is not a bright thing to use, since I have seen that OpenNebula untars it and puts the result as-is in the datastore).
Another "mysterious" behaviour I have seen is that every image I create and delete still seems to occupy space in the datastore. I go into the datastore folder and delete the image manually, but it still consumes a bit of space each time, and my space is disappearing fast. The reason I keep creating and deleting images is that every time I run oneimage update or onetemplate update to modify them, I get a broken pipe message. Any ideas?
Again, thank you very much for your help. I hope you can give me some advice or a fix for this.

Morning!

I think I finally discovered the mystery behind these errors!!! I checked the datastore, since I have a tiny Linux image working perfectly fine, and discovered that it had been created as a plain file, while the qcow2 and raw images that I had zipped were inside folders. As a final test, I unzipped my test image, registered it, checked the datastore first, and guess what? It was created as a file! So I created a template to instantiate that image and tested it with virt-viewer to check that it started perfectly fine on the host… and it did!!
So I think this resolves the problem. However, I think this should be reported as a bug or something, since the documentation says:

Note that gzipped files are supported and OpenNebula will automatically decompress them. Bzip2 compressed files is also supported, but it’s strongly discouraged since OpenNebula will not calculate it’s size properly.

Which seems not to be entirely true: OpenNebula does decompress them, but it puts the contents in a folder instead of producing the actual image file it can manage. The only problem left for me now is that OpenNebula expands the image to its full virtual size (I tried the Ubuntu one from the marketplace, which is 1.5 G on disk, but when it got created by oneimage it expanded to 10 G), which will consume all the space on my frontend, but that is my problem now…
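To spot this quickly next time, a simple check on the datastore path tells you whether the registered image ended up as a plain file or a directory. The helper and /tmp paths below are purely illustrative; they only simulate a datastore layout:

```shell
# Hypothetical helper: classify a datastore entry.
classify() {
  if [ -d "$1" ]; then echo "directory (source was probably an archive)"
  elif [ -f "$1" ]; then echo "regular file (fine for qemu)"
  else echo "missing"
  fi
}

# Simulate the failing layout: a disk.0 that is a directory with the
# real image hidden inside it.
mkdir -p /tmp/ds_demo/8
: > /tmp/ds_demo/8/disk.0

classify /tmp/ds_demo/8          # a directory, like the failing disk.0
classify /tmp/ds_demo/8/disk.0   # a plain file
```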
Thank you guys for listening and for your notable contributions towards a successful answer to this problem.

The problem is that the image is inside a folder in a tar or zip archive.

tar and zip are supported because VMware images usually contain more than one file. Those images are correctly processed by the driver, and the resulting image is in fact a directory.

Other images (raw and qcow2) can be compressed using gzip but should not be inside an archive like tar or zip. Note that even though gzip and zip are related (the names are similar and the compression algorithm is the same), they are not the same thing: gzip compresses just one file, while zip is an archive format that can contain multiple files and directories.

As a rule of thumb you should never use zip or tar with raw/qcow2 images.
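The difference is easy to see on the command line; a small demonstration with a throwaway file (names are arbitrary):

```shell
cd /tmp
printf 'fake image bytes' > image.img

# gzip compresses exactly one file; decompressing yields that file back.
gzip -c image.img > image.img.gz
gunzip -c image.img.gz > restored.img

# tar wraps files in an archive; unpacking recreates the *entries*,
# which is why an archived image shows up as a directory in the datastore.
tar -czf image.tar.gz image.img

cmp image.img restored.img && echo "gzip round-trip OK"
```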

Concerning the disk space, you can use qcow2 images that will grow as needed and consume much less space. For example to convert a raw image to a compressed qcow2 file you can use this command:

$ qemu-img convert -f raw -O qcow2 -c image.raw image.qcow2

These compressed images can be used directly as golden images. Also to make even better use of the space you can use the qcow2 drivers.

Hello Javi

Thank you very much for the clarification! I did use qcow2 images in the end. However, I should note that the frontend actually expands the image to its full size (though in the end it only uses the actual size of the qcow2 image) and raises a warning saying that it doesn't have enough space and won't create the image in the datastore. I missed the chance to capture what I actually saw, but I will try to illustrate it:

QCOW2 image size 3,4G
QCOW2 image actual size 15G
datastore 101 with only 10G left
$: oneimage create my_image.oneimg -d 101
Warning: not enough space in the datastore

That is basically what I saw and what I had. I have already managed to get more space in the datastore by mounting another drive, and now oneimage list shows:

21 oneadmin oneadmin Hadoop_NameNode test_fs 15G OS No used 1
22 oneadmin oneadmin Hadoop_Resource test_fs 15G OS No rdy 0
23 oneadmin oneadmin Hadoop_Worker test_fs 15G OS No rdy 0

And, thankfully, the datastore shows the following

DATASTORE CAPACITY
TOTAL: : 54.1G
FREE: : 41.1G
USED: : 10.3G

Meaning that in the very end the datastore is only charged the actual image size and not the virtual size (3 × 3.4 = 10.2 G; I also have a tiny Linux from the marketplace for testing purposes).
All in all, thank you to everybody who helped me through this topic. It would probably be a good idea to add Javi's clarification to the documentation, as it can help other newbies like myself.

Hello everyone,

I would like some advice about the best initial configuration (disk driver, disk format, device prefix) for using all functionalities: hot snapshot, deferred snapshot, and hotplug of volatile and non-volatile disks.
I am currently using the default RAW driver, which doesn't allow snapshots:

error: unsupported configuration: internal snapshot for disk hdb unsupported for storage type raw

Also, I can't snapshot a volatile disk, since there is no "Snapshot" button for volatile disks. Being able to create a disk clone from a volatile disk would be a good feature; what do you think?

Regards,

Hello,

Just to bump this post.
Does nobody have any idea about this?
I'm still waiting for a reply.

Regards,

If you create your images like this:
sudo qemu-img create -f qcow2 /tmp/20G-noprealloc.qcow2 20G
You get a small image that will grow in size until it reaches the 20 GB maximum.

But if you create it using this:
sudo qemu-img create -f qcow2 -o preallocation=metadata /tmp/20G-prealloc.qcow2 20G
the metadata for the whole disk is allocated up front, so the file immediately reports its full 20 GB size.
This should give a faster disk, but like RAW, it makes deploying any VM slower.
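The size behaviour is visible with ordinary tools; a rough illustration of fully-written vs. sparse files using dd (no qemu needed, file names are arbitrary, and 8 MB stands in for the 20 GB above):

```shell
# A fully written file consumes real blocks immediately, while a sparse
# file only reports a large apparent size and grows as data is written.
dd if=/dev/zero of=/tmp/full.img bs=1M count=8 2>/dev/null          # 8 MB actually written
dd if=/dev/zero of=/tmp/thin.img bs=1 count=0 seek=8M 2>/dev/null   # sparse, nothing written

ls -l /tmp/full.img /tmp/thin.img   # both report 8 MB apparent size
du -k /tmp/full.img /tmp/thin.img   # only full.img uses ~8192 KB on disk
```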

It depends on your personal preference and storage backend, but I would use qcow2 as the default image format:
it supports snapshotting, etc. Make sure to use "vd" as the device prefix, so the image uses the virtio drivers.
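For reference, the relevant attributes in an OpenNebula image template would look roughly like this (attribute names per the OpenNebula documentation; the name and path values are made-up examples):

```
NAME       = "ubuntu-golden"
PATH       = "/var/tmp/ubuntu.qcow2"
DRIVER     = "qcow2"    # qemu format driver used when attaching the disk
DEV_PREFIX = "vd"       # exposes the disk as virtio (vda, vdb, ...)
```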

BTW - If you need help, start a new topic. If you reply here, your question will be buried in someone else’s topic with a different problem.
Just a tip :blush: