Simple deploy example fails

Hello everyone,
I am really new to OpenNebula. I followed the instructions in the official documentation to use minione on a single host without any major issues. My problem arose when I tried to move to a multi-node environment using VirtualBox and 3 nodes: frontend, host0 and host1.

I would appreciate any help!

The problem…
I could add a VM to host0 using the Alpine Linux image. Although the VM is in the RUNNING state and has its own IP, I can’t reach it when I ping it from the host0 console, so I can’t SSH into it either. The VNC connection from Sunstone fails with a tunneling error. The VM logs don’t show any errors.

Versions of related components and OS:

  • OS: Ubuntu 22.04 (Desktop Edition)
  • Network settings: netplan & networkd
  • Nodes: VirtualBox VMs with 2 cores, 4 GB RAM, a 100 GB hard drive, 1 NIC (bridged mode), and hardware virtualization enabled.
  • OpenNebula 6.6.0 (b7b662b5)
  • VM image on Sunstone: Alpine Linux 3.17

Steps to reproduce:

  • Create 3 VMs in VirtualBox with the above configuration.
  • Install a fresh Ubuntu 22.04 DE on the VMs.
  • Set the network to a static IP address on my local network, with the gateway and nameserver configured to reach the internet.
  • Enable the resolved and networkd services and disable the NetworkManager service.
  • Update and upgrade packages with apt
  • Add the 3 node IPs to /etc/hosts
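The static-address step above, with netplan and networkd, looks roughly like the file below. The interface name and all addresses are placeholder examples, not the actual values from my setup:

```yaml
# /etc/netplan/01-static.yaml -- example values, adjust to your LAN
network:
  version: 2
  renderer: networkd
  ethernets:
    enp0s3:                           # example interface name
      dhcp4: false
      addresses: [192.168.0.10/24]    # example static address
      routes:
        - to: default
          via: 192.168.0.1            # example gateway
      nameservers:
        addresses: [192.168.0.1]      # example nameserver
```

Applied with `netplan apply` on each node.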

On the frontend:

  • Download & run minione (with the --frontend parameter)
  • Install the NFS server package
  • Export /var/lib/one/database
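The export is a one-liner in /etc/exports. The client network below is an example; the path follows my steps, although I believe the shared-datastore guides usually export /var/lib/one/datastores instead, so that may be worth double-checking:

```
# /etc/exports on the frontend -- 192.168.0.0/24 is an example network
/var/lib/one/database 192.168.0.0/24(rw,sync,no_subtree_check,no_root_squash)
```

Followed by `exportfs -ra` to publish it.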

On the hosts:

  • Create a virtual NIC device and a bridge on it, setting a private IP address.
  • Install the opennebula-node and opennebula-node-kvm packages
  • Set /etc/libvirt/libvirtd.conf so the UNIX socket group is oneadmin with 0777 permissions
  • Set /etc/apparmor.d/abstractions/libvirt-qemu to allow /var/lib/one/datastores/** rwk,
  • Restart libvirtd
  • Configure the NFS client to mount the /var/lib/one/database directory from the frontend
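The libvirtd and NFS-client steps above boil down to these fragments (the `frontend` hostname comes from /etc/hosts; the libvirtd option names are the standard ones for socket group and permissions):

```
# /etc/libvirt/libvirtd.conf (relevant lines)
unix_sock_group    = "oneadmin"
unix_sock_rw_perms = "0777"

# /etc/fstab entry for the NFS mount on each host
frontend:/var/lib/one/database  /var/lib/one/database  nfs  defaults  0  0
```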

On frontend and nodes:

  • Set up passwordless SSH between all nodes
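Passwordless SSH here means the oneadmin key is authorized on every node; a sketch of what I did (the hostnames are the ones from /etc/hosts):

```shell
# run as oneadmin on each machine
ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519   # skip if a key already exists
for h in frontend host0 host1; do
  ssh-copy-id "oneadmin@$h"
done
```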

On the Sunstone interface:

  • Add hosts host0 and host1 to the infrastructure, setting both as KVM nodes.
  • Create a virtual network named “Private” in bridge mode using the minionebr bridge and the IP range 172.16.100.x
  • Add a VM to host0 using the Alpine Linux image and set it to use the “Private” virtual network.
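For completeness, the CLI equivalent of the Sunstone steps, run as oneadmin on the frontend. The template values mirror what I entered in Sunstone; the AR start address and SIZE are examples:

```
# private.net -- virtual network template
NAME   = "Private"
VN_MAD = "bridge"
BRIDGE = "minionebr"
AR     = [ TYPE = "IP4", IP = "172.16.100.2", SIZE = "100" ]
```

Then:

```shell
onevnet create private.net
onehost create host0 -i kvm -v kvm
onehost create host1 -i kvm -v kvm
```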

Current results:
I have checked:

  • kvm-ok
  • ssh connections to oneadmin user from the frontend to both hosts, from both hosts to the frontend, and from each host to the other.
  • Both hosts are in the ON state.
  • The VM reaches the RUNNING state and has its own private IP.
  • The VM does not log any errors.
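In case it helps anyone reproduce this, these are the kinds of commands I can run on host0 for more data. The VM name `one-0` and the bridge name `minionebr` are examples; the real names come from `virsh list` and `ip link`:

```shell
# on host0, as root -- "one-0" and "minionebr" are example names
ip -br link show                          # are the bridge and tap devices up?
bridge link show                          # is the VM's tap enslaved to the bridge?
virsh -c qemu:///system domiflist one-0   # which bridge carries the VM's NIC?
sysctl net.bridge.bridge-nf-call-iptables 2>/dev/null  # br_netfilter filtering bridged traffic?
tcpdump -ni minionebr icmp                # watch the bridge while pinging the VM
virsh -c qemu:///system console one-0     # console access without networking
```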

I have tried:

  • Switching from VirtualBox to VMware.
  • Changing the Ubuntu version.
  • Adding another NIC device to the VirtualBox VM.
  • Setting the vnet range on the local network (192.168.0.x).
  • Connecting to the VM from virsh.

Expected results:
I expect to be able to SSH into the VM from host0 and/or from the Sunstone interface, and then to migrate the VM from host0 to host1.