Boot failed: not a bootable disk / no bootable device after installing an Ubuntu image through ON

Hello all,

I am testing how to create an Ubuntu image from scratch through ON. For that, I created a template with 2 disks: one as CD-ROM, which is the Ubuntu ISO, and disk2 as a DATABLOCK where the Ubuntu OS is installed. Disk2 is also marked as persistent. The installation apparently went fine and, after it finished, I changed the type of the image where the Ubuntu OS was installed from DATABLOCK to OS and used another template to launch the image created. Up to here everything looked normal (I did not take any snapshot since disk2 is marked as persistent). The issue comes once I instantiate the image created, because I get the error below. I also tried the installation setting up disk2 as OS from the start, but I get the same error:

booting from hard disk   
boot failed: not a bootable disk
no bootable device
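
For reference, the type change and the persistent flag described above are roughly equivalent to these CLI calls (51 is the ID of the installed disk image shown further down):

oneimage chtype 51 OS     # change the image type from DATABLOCK to OS
oneimage persistent 51    # keep the disk image persistent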

I remember I already had this issue in the past but, unfortunately, I don't remember the solution. I have been playing with a lot of modifications of the VM/image templates and trying several installations, but I can't get a working image, so I am starting to get a bit mad/desperate because I don't know what I am doing wrong. Please, any advice/help is welcome :smile:

These are the templates that I am using:

[oneadmin@opennebula4 ~]$ oneimage show 51
[ ...]                                                                
DEV_PREFIX="vd"
DRIVER="qcow2"
TARGET="vda"
TYPE="OS"

[oneadmin@opennebula4 ~]$ oneimage show 52
[...]   
TYPE           : CDROM 

[oneadmin@opennebula4 ~]$ onetemplate show 28
[ ... ]                                                            
CONTEXT=[NETWORK="YES"]
CPU="1"
DISK=[IMAGE="Installation_test_esteban_UbuntuISO", IMAGE_UNAME="oneadmin", READONLY="yes" ]
DISK=[IMAGE="Installation_test_esteban_UbuntuOS", IMAGE_UNAME="oneadmin" ]
FEATURES=[ACPI="yes" ]
GRAPHICS=[LISTEN="0.0.0.0", TYPE="VNC" ]
HYPERVISOR="kvm"
LOGO="images/logos/ubuntu.png"
MEMORY="2048"
NIC=[NETWORK="oort_int_network", NETWORK_UNAME="oneadmin" ]
OS=[ARCH="x86_64",BOOT="cdrom,hd" ]
SUNSTONE_CAPACITY_SELECT="YES"
SUNSTONE_NETWORK_SELECT="YES"

[oneadmin@opennebula4 ~]$ onetemplate show 29
[ ... ]                                                          
CONTEXT=[NETWORK="YES"]
CPU="1"
DISK=[BUS="virtio",CACHE="none",DRIVER="qcow2",IMAGE="Installation_test_esteban_UbuntuOS",IMAGE_UNAME="oneadmin",TYPE="OS" ]
DISK=[BUS="virtio",CACHE="none",SIZE="2148",TARGET="hdb",TYPE="swap" ]
FEATURES=[ACPI="yes" ]
GRAPHICS=[LISTEN="0.0.0.0",TYPE="VNC" ]
LOGO="images/logos/ubuntu.png"
MEMORY="2048"
NIC=[NETWORK="oort_int_network",NETWORK_UNAME="oneadmin" ]
OS=[ARCH="x86_64",ROOT="hda1" ]
RAW=[DATA="<cpu mode='host-passthrough'/>",TYPE="kvm" ]

Hi @esfreire

Mmm, I think your procedure is OK (sorry, I did not find anything about it in our wiki… you know the chaos we have inside, hehe).

Looking at the docs, I found this at [1]:

Install within OpenNebula. You can also use OpenNebula to prepare the images for your cloud. The process will be as follows:
  • Add the installation medium to an OpenNebula datastore. Usually it will be an OS installation CD-ROM/DVD.
  • Create a DATABLOCK image of the desired capacity to install the OS. Once created, change its type to OS and make it persistent.
  • Create a new template using the previous two images. Make sure to set the OS/BOOT parameter to cdrom and enable the VNC console.
  • Instantiate the template and install the OS and any additional software.
  • Once you are done, shut down the VM.
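
For the DATABLOCK step above, a rough CLI sketch (the name, size and datastore here are only examples, adjust them to your setup):

# create an empty persistent datablock to install the OS into
cat > ubuntu_os_disk.tmpl <<'EOF'
NAME       = "Installation_test_esteban_UbuntuOS"
TYPE       = "DATABLOCK"
PERSISTENT = "YES"
SIZE       = "10240"
FSTYPE     = "raw"
DEV_PREFIX = "vd"
DRIVER     = "qcow2"
EOF
oneimage create ubuntu_os_disk.tmpl --datastore default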

I am still looking into it, because it is so strange.

UPDATED: can you check whether changing VDA to SDA or HDA works? I remember that sometimes the bus driver failed and we could still boot using the HD.

UPDATED2: we have installed an old Ubuntu from scratch and it works using vda… the same config:
The template:

DISK=[
BUS="virtio",
CACHE="none",
DRIVER="qcow2",
IMAGE_ID="220",
TARGET="vda",
TYPE="OS" ]

and the image:

CACHE="none"
DESCRIPTION="Ubuntu 12.04 prealloc"
DEV_PREFIX="vd"
DRIVER="qcow2"
FSTYPE=""
PUBLIC="NO"
TARGET="vda"

1: Adding Content to Your Cloud — OpenNebula 4.12.1 documentation: http://docs.opennebula.org/4.12/user/virtual_machine_setup/add_content.html


Hi Esteban

Some tips here: it's better to move all the DISK attributes to the image template itself; it is clearer and you don't have to repeat the same values in each VM template. So you can modify Installation_test_esteban_UbuntuOS and include these values (the BUS attribute is deprecated, I think):

DEV_PREFIX="vd"
CACHE="none"
TARGET="vda"
DRIVER="qcow2"
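
You can append these to the existing image without rewriting the whole template, something like this (assuming image ID 51, as in your listing above):

cat > disk_attrs.txt <<'EOF'
DEV_PREFIX = "vd"
CACHE      = "none"
TARGET     = "vda"
DRIVER     = "qcow2"
EOF
oneimage update 51 disk_attrs.txt --append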

I think that libvirt tries to find the MBR in your swap disk instead of in vda; try removing the swap disk and instantiating the VM again. If that is the case, change the swap disk to:

DISK=[CACHE="none", SIZE="2148", TARGET="vdb", DEV_PREFIX="vd", TYPE="swap" ]

and OS to:

OS=[ARCH="x86_64", BOOT="hd" ]

I hope that this will help :smiley:

Cheers
Alvaro

Hi all,

First of all, I would like to give special thanks to @alfeijoo because without his help it would have been pretty difficult to discover this issue. Yesterday we connected through Skype and followed the same steps to create an Ubuntu VM from scratch through the OpenNebula Sunstone UI, to see the differences between both OpenNebula instances.

We observed the following two things:

  • The first one is that when creating a new image from Sunstone, following the steps from http://docs.opennebula.org/4.12/user/virtual_machine_setup/add_content.html, if you specify the qcow2 format, the image is always created in qcow3 format anyway. This seems to be a bug already reported in https://bugzilla.redhat.com/show_bug.cgi?id=1119929. On the Sunstone/ON servers we are using CentOS Linux release 7.0.1406 and all the packages are updated to the latest version. Has anyone else experienced this issue? I don't know whether someone could report it or whether I should report it myself.
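
If anyone wants to double-check this, the compat level of the created image can be inspected with qemu-img (the path below is just an example from one of our datastores):

qemu-img info /var/lib/one/datastores/103/150/disk.0
# "compat: 1.1" under "Format specific information" means qcow2 v3 ("qcow3");
# it can be downgraded to the old format with:
qemu-img amend -f qcow2 -o compat=0.10 /var/lib/one/datastores/103/150/disk.0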

  • The second thing, and the reason why my installation was failing, is the following:

According to http://docs.opennebula.org/4.12/user/virtual_machine_setup/add_content.html, in the template created to boot from the CD-ROM and install the OS into the new image, it is necessary to set the OS/BOOT parameter to cdrom. If I do that, it creates the following deployment file for KVM:

<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <name>one-150</name>
  <cputune>
    <shares>2048</shares>
  </cputune>
  <memory>2097152</memory>
  <os>
    <type arch='x86_64'>hvm</type>
    <boot dev='cdrom'/>
  </os>
  <devices>
    <emulator>/usr/bin/qemu-kvm</emulator>
    <disk type='file' device='disk'>
      <source file='/var/lib/one//datastores/103/150/disk.0'/>
      <target dev='vda'/>
      <driver name='qemu' type='raw' cache='none'/>
    </disk>
    <disk type='file' device='cdrom'>
      <source file='/var/lib/one//datastores/103/150/disk.1'/>
      <target dev='hda'/>
      <readonly/>
      <driver name='qemu' type='raw' cache='none'/>
    </disk>
    <disk type='file' device='cdrom'>
      <source file='/var/lib/one//datastores/103/150/disk.2'/>
      <target dev='hdb'/>
      <readonly/>
      <driver name='qemu' type='raw'/>
    </disk>
[ .... ]

This deployment file does not work for me because, after the installation finishes, the VM always tries to boot from the cdrom device, so it does not see the new installation after rebooting. Even though the image created to hold the installation is persistent, nothing is saved when I shut down the VM because the VM is not using the hard disk, and for the same reason I cannot take a snapshot if the image is not persistent. In both cases ON does not give any error: with the persistent image it simply does nothing, and the snapshot stays in progress forever.

On the other hand, if in that same template I select "HD" as the first boot device and "CDROM" as the second one in the OS booting section, it creates the following deployment file for KVM, which works perfectly for me:

<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <name>one-150</name>
  <cputune>
    <shares>2048</shares>
  </cputune>
  <memory>2097152</memory>
  <os>
    <type arch='x86_64'>hvm</type>
    <boot dev='hd'/>
    <boot dev='cdrom'/>
  </os>
  <devices>
    <emulator>/usr/bin/qemu-kvm</emulator>
    <disk type='file' device='disk'>
      <source file='/var/lib/one//datastores/103/150/disk.0'/>
      <target dev='vda'/>
      <driver name='qemu' type='raw' cache='none'/>
    </disk>
    <disk type='file' device='cdrom'>
      <source file='/var/lib/one//datastores/103/150/disk.1'/>
      <target dev='hda'/>
      <readonly/>
      <driver name='qemu' type='raw' cache='none'/>
    </disk>
    <disk type='file' device='cdrom'>
      <source file='/var/lib/one//datastores/103/150/disk.2'/>
      <target dev='hdb'/>
      <readonly/>
      <driver name='qemu' type='raw'/>
    </disk>
[ .... ]
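
For reference, if I am not mistaken, the OS section in the VM template that produces this boot order is simply:

OS=[ARCH="x86_64", BOOT="hd,cdrom" ]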

The DOM-0 (hypervisor host) is running Fedora 21 and we have installed the following packages on it:

libvirt*-1.2.9.2-1.fc21.x86_64
qemu*-2.1.3-3.fc21.x86_64
opennebula-node-kvm-4.12.1-1.x86_64

I understand this is not an ON issue; more probably it is an issue in virt-manager or QEMU. In any case, I would appreciate it if someone could report it, or just tell me where to report it. I also don't know if anyone else has observed this issue.

Thanks in advance,
Esteban

PS: @alvaro_simongarcia, thanks a lot for your answer :slight_smile: … by the way, the three of us again in the same post, like in the old times, niceee!! :stuck_out_tongue: …hehe


Hi @esfreire, can you please open a ticket at http://dev.opennebula.org/? If the image is persistent, ONE should save it regardless of the boot order…

Hi Carlos ( @cmartin)

Thanks for your fast answer.

I have created a new issue to track it: http://dev.opennebula.org/issues/3760. I hope I have done it correctly :smile:

Regards,
Esteban

PS: As I commented in my previous post, for some reason the VM only sees the CD-ROM if I configure the CD-ROM as the first boot device, so ON is not able to save anything because the boot device is not the expected one. I think it is something on the virt-manager side.