Debian 10 (Buster) node behaves oddly

Hi,

a fresh install of ONe 5.10.1 on a Debian 10 node behaves quite differently from what I’m used to with Ubuntu 16.04/18.04. It starts with “virsh list” coming up empty when run as the oneadmin user; under Ubuntu, this simply lists the VMs. Starting a VM on Debian 10 fails with “Boot failed: not a bootable disk”.


Versions of the related components and OS (frontend, hypervisors, VMs):
Frontend: opennebula 5.10.1-1 on a Debian 10u2 VM, created with virt-manager and running on host “vmhost04”.
Hypervisor: “vmhost04” running Debian 10u2, opennebula-node 5.10.1-1.

Steps to reproduce: take a fresh Debian 10u2 host, install opennebula-node as per the documentation, and try “su - oneadmin -c 'virsh list'”. Then try to boot a VM on this host via the ONe frontend; here it doesn’t work: the BIOS reports no bootable disk while the ONe frontend shows the VM as “runn”.
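
A quick way to see which libvirt instance each invocation talks to is “virsh uri”. The output below is an assumption based on stock Debian libvirt defaults, where an unprivileged virsh without an explicit URI can fall back to the per-user session daemon:

root@vmhost04 ~ # su - oneadmin -c "virsh uri"
qemu:///session
root@vmhost04 ~ # virsh uri
qemu:///system

If oneadmin ends up on qemu:///session, its plain “virsh list” queries a per-user libvirt that has no VMs in it, which would match the empty list below.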

Current results:

root@vmhost04 ~ # echo "As oneadmin user:" ; su - oneadmin -c "virsh list" ; echo ; echo "Meanwhile as root:" ; echo ;  virsh list ; uname -a
As oneadmin user:
 Id   Name   State
--------------------


Meanwhile as root:

 Id   Name    State
-----------------------
 1    one-a   running
 2    one-1   running

Linux vmhost04 4.19.0-6-amd64 #1 SMP Debian 4.19.67-2+deb10u2 (2019-11-11) x86_64 GNU/Linux

Expected results:

root@ubuntu1604:~# echo "As oneadmin user:" ; su - oneadmin -c "virsh list" ; echo ; echo "Meanwhile as root:" ; echo ;  virsh list ; uname -a
As oneadmin user:
 Id    Name                           State
----------------------------------------------------
 2     one-203                        running
 3     one-250                        running
 5     one-236                        running
 6     one-117                        running
 7     one-266                        running
 8     one-221                        running
 10    one-275                        running
 12    one-285                        running
 13    one-245                        running


Meanwhile as root:

 Id    Name                           State
----------------------------------------------------
 2     one-203                        running
 3     one-250                        running
 5     one-236                        running
 6     one-117                        running
 7     one-266                        running
 8     one-221                        running
 10    one-275                        running
 12    one-285                        running
 13    one-245                        running

Linux ubuntu1604 4.15.0-66-generic #75~16.04.1-Ubuntu SMP Tue Oct 1 14:01:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

I’m not sure why it is working on Ubuntu. I always use the following:

# su - oneadmin -c 'source /var/tmp/one/etc/vmm/kvm/kvmrc; virsh --connect $LIBVIRT_URI list'
 Id    Name                           State
----------------------------------------------------
 1     one-4                          running
 2     one-494                        running
 3     one-143                        running
 4     one-487                        running
 6     one-22                         running
 7     one-18                         running
 8     one-21                         running
 9     one-468                        running
 10    one-16                         running
 11    one-470                        running
 12    one-15                         running
 13    one-418                        running
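
The difference is the connection URI: kvmrc exports LIBVIRT_URI (normally qemu:///system), so the OpenNebula drivers always talk to the system libvirt daemon, whereas a bare “virsh” run as a non-root user may silently default to qemu:///session when it cannot reach the system socket. A minimal check on the node (a sketch; the grep is just illustrative, using the same kvmrc path as above):

# su - oneadmin -c 'grep LIBVIRT_URI /var/tmp/one/etc/vmm/kvm/kvmrc'
export LIBVIRT_URI=qemu:///system
# su - oneadmin -c 'virsh --connect qemu:///system list'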

Hope this helps.

Best Regards,
Anton Todorov

Sorted this one: I had converted a “fat” qcow2 image to a sparse qcow2 image (15 GB => 1.2 GB) before uploading it via the ONe frontend; although the result still identified as a qcow2 image, it wasn’t bootable, even outside of OpenNebula. With the image fixed, I can now boot VMs (and containers!).
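
For anyone hitting the same thing, a sketch of how to inspect and rebuild a sparse copy with qemu-img before uploading (the file names are placeholders):

# qemu-img info disk.qcow2                                 # confirm format, virtual size, and actual disk usage
# qemu-img convert -O qcow2 disk.qcow2 disk-sparse.qcow2   # rewrite the image; the output is sparse by default
# qemu-img check disk-sparse.qcow2                         # sanity-check the result before uploading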

Uh, um. So you mean oneadmin isn’t supposed to use plain “virsh”?

Well,

When debugging issues, I prefer to run commands the way OpenNebula runs them rather than the “canonical” way. This helps rule out errors caused by misconfiguration, for example; see the sketch below.
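
As an illustration (a sketch; the helper name is mine, and the kvmrc path is the same as above):

one_virsh() {
    # Run virsh the way the KVM driver scripts do: source kvmrc first,
    # then connect to the LIBVIRT_URI it exports.
    source /var/tmp/one/etc/vmm/kvm/kvmrc
    virsh --connect "$LIBVIRT_URI" "$@"
}

Then, as oneadmin: one_virsh list --all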

Cheers,
Anton