Problems starting VM from template

Hi,

I’m experimenting with a brand-new OpenNebula 4.12 all-in-one install on Ubuntu 15.04 and can’t get any VMs to start. I’ve created the storage and templates, but when I deploy VMs they immediately go to FAILED. The logs show the following:

Tue Jun 9 23:11:16 2015 [Z0][DiM][I]: New VM state is ACTIVE.
Tue Jun 9 23:11:16 2015 [Z0][LCM][I]: New VM state is PROLOG.
Tue Jun 9 23:11:17 2015 [Z0][LCM][I]: New VM state is BOOT
Tue Jun 9 23:11:17 2015 [Z0][VMM][I]: Generating deployment file: /var/lib/one/vms/12/deployment.0
Tue Jun 9 23:11:17 2015 [Z0][VMM][I]: ExitCode: 0
Tue Jun 9 23:11:17 2015 [Z0][VMM][I]: Successfully execute network driver operation: pre.
Tue Jun 9 23:11:17 2015 [Z0][VMM][I]: Command execution fail: cat << EOT | /var/tmp/one/vmm/kvm/deploy '/altairzfsmain/VMs/system/100/12/deployment.0' 'altair' 12 altair
Tue Jun 9 23:11:17 2015 [Z0][VMM][I]: error: Failed to create domain from /altairzfsmain/VMs/system/100/12/deployment.0
Tue Jun 9 23:11:17 2015 [Z0][VMM][I]: error: internal error: process exited while connecting to monitor: 2015-06-09T22:11:17.658127Z qemu-system-x86_64: -drive file=/altairzfsmain/VMs/system/100/12/disk.0,if=none,id=drive-ide0-0-1,format=qcow2,cache=none: file system may not support O_DIRECT
Tue Jun 9 23:11:17 2015 [Z0][VMM][I]: 2015-06-09T22:11:17.658337Z qemu-system-x86_64: -drive file=/altairzfsmain/VMs/system/100/12/disk.0,if=none,id=drive-ide0-0-1,format=qcow2,cache=none: could not open disk image /altairzfsmain/VMs/system/100/12/disk.0: Could not open '/altairzfsmain/VMs/system/100/12/disk.0': Invalid argument
Tue Jun 9 23:11:17 2015 [Z0][VMM][I]:
Tue Jun 9 23:11:17 2015 [Z0][VMM][E]: Could not create domain from /altairzfsmain/VMs/system/100/12/deployment.0
Tue Jun 9 23:11:17 2015 [Z0][VMM][I]: ExitCode: 255
Tue Jun 9 23:11:17 2015 [Z0][VMM][I]: Failed to execute virtualization driver operation: deploy.
Tue Jun 9 23:11:17 2015 [Z0][VMM][E]: Error deploying virtual machine: Could not create domain from /altairzfsmain/VMs/system/100/12/deployment.0
Tue Jun 9 23:11:17 2015 [Z0][DiM][I]: New VM state is FAILED

Listing of the images:
oneadmin@altair:/altairzfsmain/VMs/system/101$ ls -l
total 93847
-rw-r--r-- 1 oneadmin oneadmin 108419072 Jun 3 08:33 23593b00380a15965609dec11d73eac5
-rw-r--r-- 1 oneadmin oneadmin 1048576001 Jun 9 15:47 26f3472d6ce367902b1f3148d40733de
-rw-r--r-- 1 oneadmin oneadmin 0 Jun 9 23:02 3ab68c6be4ee3be76643b0dddfc3de30
-rw-r--r-- 1 oneadmin oneadmin 197120 Jun 3 08:34 572caa85c4261e0a9d5506d0e03e556e
-rw-r--r-- 1 oneadmin oneadmin 197120 Jun 9 23:08 7711af5b3c9eb59beaa260cfb63c515f
-rw-r--r-- 1 oneadmin oneadmin 0 Jun 9 23:02 c765a0c4356d4f07b79c46c2dcc474a6

7711af… is the blank disk associated with the above VM that tried to boot:
oneadmin@altair:/altairzfsmain/VMs/system/101$ file 7711af5b3c9eb59beaa260cfb63c515f
7711af5b3c9eb59beaa260cfb63c515f: QEMU QCOW Image (v3), 2202009600 bytes

Anyone have any ideas what I may be missing? :smile:

Thanks,

John

Hi, this is described here:
http://docs.opennebula.org/4.12/design_and_installation/quick_starts/qs_ubuntu_kvm.html

The oneadmin user should be able to control libvirt; it seems it can't right now.

Can you check whether this is set up correctly?

2.5. Configure Qemu
The oneadmin user must be able to manage libvirt as root:

cat << EOT > /etc/libvirt/qemu.conf
user = "oneadmin"
group = "oneadmin"
dynamic_ownership = 0
EOT

Restart libvirt to capture these changes:

service libvirt-bin restart

Hi,

I’ve already applied the changes to the qemu.conf file:

> # more /etc/libvirt/qemu.conf
> user = "oneadmin"
> group = "oneadmin"
> dynamic_ownership = 0

The first error in the log seemed to point at that, but since you did configure it, that can't be it.
So it must be related to the second part of the log, which is the (attempted) use of the disk image.

Errors like:

  • file system may not support O_DIRECT
  • Could not open '/altairzfsmain/VMs/system/100/12/disk.0': Invalid argument

are where I would look next.
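One quick way to test that suspicion (a sketch; the probe file name is arbitrary) is to attempt a direct-I/O write with dd inside the datastore directory. The qemu command line in the log passes cache=none, which opens the disk file with O_DIRECT, so if the filesystem rejects O_DIRECT you get exactly this "Invalid argument" failure:

```shell
# Probe whether the current filesystem accepts O_DIRECT writes.
# Run from inside the datastore directory (e.g. /altairzfsmain/VMs/system).
if dd if=/dev/zero of=odirect_probe.bin bs=4096 count=1 oflag=direct 2>/dev/null; then
  ODIRECT_STATUS="supported"
else
  ODIRECT_STATUS="not supported"   # expected on ZFS-on-Linux at that time
fi
echo "O_DIRECT: $ODIRECT_STATUS"
rm -f odirect_probe.bin
```

On a filesystem that refuses O_DIRECT, dd fails immediately with "Invalid argument", matching the qemu error above.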

Thanks for the help Roland - your advice led me to the correct answer!

I’m using ZFS as the file system for the VM images. It turns out that with KVM on ZFS you need to set the cache mode for each disk to "writeback": the default cache=none opens the disk image with O_DIRECT, which ZFS on Linux doesn't support, and that's what produces the "could not open disk image, invalid argument" error I was seeing.

Answer courtesy of http://forum.proxmox.com/threads/17784-solved-VM-won-t-boot-if-located-in-a-ZFS-directory and http://pve.proxmox.com/wiki/ZFS#kvm_tuning
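For anyone who finds this later, here is roughly what that looks like in an OpenNebula VM template (a sketch; the image name "blank_disk" is hypothetical, but CACHE is the DISK attribute that sets qemu's cache mode):

```shell
# Hypothetical template fragment: CACHE="writeback" in the DISK section
# avoids the O_DIRECT open that cache=none performs.
cat << EOT > disk_writeback.tpl
DISK = [
  IMAGE = "blank_disk",
  CACHE = "writeback"
]
EOT
```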

Weird, I’m using ZFS as well (on OpenIndiana with NFS) and have never hit this exact problem before.

I wasn't aware of it, so thanks for sharing the solution to a problem I will probably run into soon :wink: