I am very new to OpenNebula. I have set up a private cloud with OpenNebula and the Xen hypervisor. The problem is that I am unable to instantiate a VM. Below is the error log. Please help.
Tue May 31 14:52:02 2016 [Z0][DiM][I]: New VM state is ACTIVE.
Tue May 31 14:52:03 2016 [Z0][LCM][I]: New VM state is PROLOG.
Tue May 31 14:52:30 2016 [Z0][LCM][I]: New VM state is BOOT
Tue May 31 14:52:30 2016 [Z0][VMM][I]: Generating deployment file: /var/lib/one/vms/10/deployment.0
Tue May 31 14:52:31 2016 [Z0][VMM][I]: ExitCode: 0
Tue May 31 14:52:31 2016 [Z0][VMM][I]: Successfully execute network driver operation: pre.
Tue May 31 14:52:34 2016 [Z0][VMM][I]: Command execution fail: cat << EOT | /var/tmp/one/vmm/xen4/deploy '/var/lib/one//datastores/0/10/deployment.0' '192.168.100.4' 10 192.168.100.4
Tue May 31 14:52:34 2016 [Z0][VMM][I]: libxl: error: libxl_dm.c:717:libxl__build_device_model_args_new: qemu-xen doesn't support read-only disk drivers
Tue May 31 14:52:34 2016 [Z0][VMM][I]: libxl: error: libxl_dm.c:1393:device_model_spawn_outcome: (null): spawn failed (rc=-3)
Tue May 31 14:52:34 2016 [Z0][VMM][I]: libxl: error: libxl_create.c:1189:domcreate_devmodel_started: device model did not start: -3
Tue May 31 14:52:34 2016 [Z0][VMM][I]: libxl: error: libxl_dm.c:1489:kill_device_model: unable to find device model pid in /local/domain/6/image/device-model-pid
Tue May 31 14:52:34 2016 [Z0][VMM][I]: libxl: error: libxl.c:1421:libxl__destroy_domid: libxl__destroy_device_model failed for 6
Tue May 31 14:52:34 2016 [Z0][VMM][E]: Unable
Tue May 31 14:52:34 2016 [Z0][VMM][I]: ExitCode: 3
Tue May 31 14:52:34 2016 [Z0][VMM][I]: Failed to execute virtualization driver operation: deploy.
Tue May 31 14:52:34 2016 [Z0][VMM][E]: Error deploying virtual machine: Unable
Tue May 31 14:52:34 2016 [Z0][DiM][I]: New VM state is FAILED
Hi,
well, I was just about to file the same forum post.
We are obviously experiencing the same issue.
Wed Jun 1 17:16:35 2016 [Z0][VMM][D]: Message received: LOG I 47 Command execution fail: cat << EOT | /var/tmp/one/vmm/xen4/deploy '/var/lib/one//datastores/0/47/deployment.0' 'stortest2.linbit' 47 stortest2.linbit
Wed Jun 1 17:16:35 2016 [Z0][VMM][D]: Message received: LOG I 47 libxl: error: libxl_dm.c:717:libxl__build_device_model_args_new: qemu-xen doesn't support read-only disk drivers
Wed Jun 1 17:16:35 2016 [Z0][VMM][D]: Message received: LOG I 47 libxl: error: libxl_dm.c:1393:device_model_spawn_outcome: (null): spawn failed (rc=-3)
Wed Jun 1 17:16:35 2016 [Z0][VMM][D]: Message received: LOG I 47 libxl: error: libxl_create.c:1189:domcreate_devmodel_started: device model did not start: -3
Wed Jun 1 17:16:35 2016 [Z0][VMM][D]: Message received: LOG I 47 libxl: error: libxl_dm.c:1489:kill_device_model: unable to find device model pid in /local/domain/15/image/device-model-pid
Wed Jun 1 17:16:35 2016 [Z0][VMM][D]: Message received: LOG I 47 libxl: error: libxl.c:1421:libxl__destroy_domid: libxl__destroy_device_model failed for 15
Wed Jun 1 17:16:35 2016 [Z0][VMM][D]: Message received: LOG E 47 Unable
Wed Jun 1 17:16:35 2016 [Z0][VMM][D]: Message received: LOG I 47 ExitCode: 3
Wed Jun 1 17:16:35 2016 [Z0][VMM][D]: Message received: LOG I 47 Failed to execute virtualization driver operation: deploy.
I already did some experiments with the disk driver. I tried to explicitly set the driver for the boot disk to
tap2:tapdisk:aio:
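In the VM template that attempt looked roughly like this (a sketch; the IMAGE_ID is illustrative):

DISK = [
  IMAGE_ID = "0",
  DRIVER   = "tap2:tapdisk:aio" ]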
This is the full configuration of my template. Am I missing something?
oneadmin@stortest1:~$ onetemplate show 0
TEMPLATE 0 INFORMATION
ID : 0
NAME : CentOS-6.5-nfs-XEN
USER : oneadmin
GROUP : oneadmin
REGISTER TIME : 01/21 15:12:28
Hi, I'm not a Xen user, so I can't tell if this will help, but it's worth a try.
In /etc/one/oned.conf there is a default setting for the device prefix for disks. I think the default is "hd".
If I understand correctly, prefix hd = IDE, prefix sd = SCSI, and prefix vd = virtio (for KVM).
You can also set the prefix in the attributes of the image itself, or you can change the value in oned.conf so it becomes the default for all your images.
So prefix: hd / sd / vd
and target: hda / sdb / vda
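For example, to force SCSI-style device names on a particular disk, you could set something like this in the DISK section of the VM template (a sketch; the IMAGE_ID is illustrative):

DISK = [
  IMAGE_ID   = "0",    # the image to attach
  DEV_PREFIX = "sd",   # sd = SCSI-style device names
  TARGET     = "sda" ] # explicit target; otherwise derived from the prefix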
Hope this helps!
EDIT: Here is the relevant part of /etc/one/oned.conf:
#*******************************************************************************
# DataStore Configuration
#*******************************************************************************
# DATASTORE_LOCATION: Path for Datastores. It IS the same for all the hosts
# and front-end. It defaults to /var/lib/one/datastores (in self-contained mode
# defaults to $ONE_LOCATION/var/datastores). Each datastore has its own
# directory (called BASE_PATH) in the form: $DATASTORE_LOCATION/<datastore_id>
# You can symlink this directory to any other path if needed. BASE_PATH is
# generated from this attribute each time oned is started.
#
# DATASTORE_CAPACITY_CHECK: Checks that there is enough capacity before
# creating a new image. Defaults to Yes
#
# DEFAULT_IMAGE_TYPE: This can take values
# OS Image file holding an operating system
# CDROM Image file holding a CDROM
# DATABLOCK Image file holding a datablock, created as an empty block
#
# DEFAULT_DEVICE_PREFIX: This can be set to
# hd IDE prefix
# sd SCSI
# vd KVM virtual disk
#
# DEFAULT_CDROM_DEVICE_PREFIX: Same as above but for CDROM devices.
#*******************************************************************************
#DATASTORE_LOCATION = /var/lib/one/datastores
DATASTORE_CAPACITY_CHECK = "yes"
DEFAULT_IMAGE_TYPE = "OS"
DEFAULT_DEVICE_PREFIX = "hd"
DEFAULT_CDROM_DEVICE_PREFIX = "hd"
But I already did a lot of testing with different device prefixes and different drivers yesterday.
In my oned.conf there is one more possible option in the comments:
# DEFAULT_DEVICE_PREFIX: This can be set to
#       hd              IDE prefix
#       sd              SCSI
#       xvd             XEN Virtual Disk
#       vd              KVM virtual disk
I actually did not set the default in oned.conf but set it in the image itself; the outcome should be the same. I tried sd, I tried xvda, always the same error. I also tried the default driver for Xen: raw.
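For reference, setting it on the image itself can be done like this (a minimal sketch; oneimage update opens the image template in $EDITOR, where you add or change the attributes):

oneadmin@stortest1:~$ oneimage update 0
DEV_PREFIX = "xvd"
DRIVER = "raw"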
oneadmin@stortest1:~$ oneimage show 0
IMAGE 0 INFORMATION
ID : 0
NAME : CentOS-6.5-nfs_x86_64
USER : oneadmin
GROUP : oneadmin
DATASTORE : image-nfs
TYPE : OS
REGISTER TIME : 01/21 15:11:40
PERSISTENT : No
SOURCE : /var/lib/one//datastores/1/10777a4812f3dd3ab1d0983454dcad70
PATH : http://appliances.c12g.com/CentOS-6.5/centos6.5.qcow2.gz
SIZE : 267M
STATE : rdy
RUNNING_VMS : 0
Hi Arshad,
I found a solution only recently; I definitely wanted to post it here but did not have the time yet.
The problem is that the Xen sd driver definitely cannot handle read-only devices; if you don't use a read-only device, it should just work! When you have the "context" checkboxes enabled in your template, a read-only disk is automatically attached to your VM, and this is what causes the qemu-xen read-only error.
Try removing every "context" checkbox in the OpenNebula GUI; then the read-only ISO device should be left out.
Instantiate your template and then check what exactly your Xen VM definition looks like. It should be in your system datastore; usually the path is /var/lib/one//datastores/0/134/deployment.0 (134 is the VM ID).
If you also want to use "contextualization", OpenNebula of course has to attach the read-only ISO file somehow.
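The disk section of the deployment file looks roughly like this (an illustrative sketch, paths and IDs made up; note the trailing 'r' that marks the context ISO read-only, which is exactly what qemu-xen refuses):

disk = [
  'tap:aio:/var/lib/one//datastores/0/134/disk.0,xvda,w',
  'file:/var/lib/one//datastores/0/134/disk.1,hdb:cdrom,r' ]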
I got it to work when using the following settings:
In my template, in the OS section, I set "pygrub" as the bootloader:
BOOTLOADER=pygrub
For the DISK I set:
DEV_PREFIX="xvd"
DRIVER="raw"
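Putting it together, the relevant part of the template would look something like this (a sketch; the image name is taken from my setup):

OS = [ BOOTLOADER = "pygrub" ]
DISK = [
  IMAGE      = "CentOS-6.5-nfs_x86_64",
  DEV_PREFIX = "xvd",
  DRIVER     = "raw" ]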
Try these two possible approaches to get it working and post back how it goes.
Hope that helps!
all the best
Jojo
Make sure to monitor oned.log while provisioning your template.
If it throws an error, it will show you the path to the Xen configuration of the VM ('/var/lib/one//datastores/0/<VM-ID>/deployment.0').
I would suggest you post the contents of this file here; it will probably help to figure out the problem.
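Something like this (a sketch; oned.log usually lives in /var/log/one on the front-end, and <VM-ID> is whatever ID your VM got):

oneadmin@stortest1:~$ tail -f /var/log/one/oned.log
oneadmin@stortest1:~$ cat /var/lib/one//datastores/0/<VM-ID>/deployment.0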
Hi Arshad,
I’ve seen this error but can’t reproduce it at the moment.
As far as I understand, pygrub is a bootloader that sits "outside" of your Xen VM (on your Xen hypervisor, not like normal GRUB, which sits in the MBR or at the start of a partition of your virtual disk).
pygrub is able to load the kernel of the VM that lies "inside" the VM. More detailed explanations here:
I suppose that in the past the typical way of booting Xen VMs was to have the bootloader AND the kernels outside of the VM, lying around on your hypervisor, which is tedious to handle (upgrading kernels, etc.).
I am just reading this thread, and I must add that when you use pygrub you launch your machine in paravirtualized (PV) mode, whereas when you do not specify it you run in fully virtualized HVM mode. This is hinted at here: http://docs.opennebula.org/4.14/administration/virtualization/xeng.html#usage
Choosing PV or HVM has implications for the performance of the VM and also for how it interacts with the hypervisor below it, so choose wisely…
And yes, this seems to be a problem introduced by the fix for http://xenbits.xen.org/xsa/advisory-142.html. It makes it difficult to run HVM mode with context disks on OpenNebula (which is pretty much the default). A workaround is to manually edit the deployment file to mark the context disk as 'w'[ritable], then create the VM manually and finally recover it from the FAILURE state.
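As a sketch of that workaround (paths, the <VM-ID> placeholder, and the exact recover option are illustrative; check onevm recover --help for your version):

# On the Xen host, after the deploy has failed:
vi /var/lib/one//datastores/0/<VM-ID>/deployment.0
#   in the disk = [ ... ] line, change the context ISO entry from ',r' to ',w'
xl create /var/lib/one//datastores/0/<VM-ID>/deployment.0
# Then, on the front-end, tell OpenNebula the VM is actually running:
onevm recover --success <VM-ID>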