Help with OpenNebula on aarch64

Hello everyone

I’m in the process of evaluating OpenNebula and wanted to use one of our Raspberry Pi 5 boards (with 16 GB of memory). I installed OpenNebula through the minione script on Debian 12 and the installation went smoothly. I modified the necessary files (based on the information on this page) and restarted opennebula.service.
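For reference, the installation itself was just the minione script from the OpenNebula/minione GitHub repository, downloaded and run as root (roughly the following; the download step is omitted here, see the repo README):

# run the minione evaluation installer as root after downloading it from the repo
sudo bash minione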

I then logged into the Sunstone web UI and tried to create an Alpine Linux (aarch64) based VM. But sadly I always get this error message:

Wed Jul 9 22:36:46 2025 [Z0][VMM][D]: Message received: DEPLOY FAILURE 3
cp: cannot stat '/usr/share/OVMF/OVMF_VARS.fd': No such file or directory
error: Failed to create domain from /var/lib/one//datastores/0/3/deployment.0
error: internal error: process exited while connecting to monitor: 2025-07-09T21:36:45.432664Z qemu-kvm-one: KVM is not supported for this guest CPU type
2025-07-09T21:36:45.432710Z qemu-kvm-one: kvm_init_vcpu: kvm_arch_init_vcpu failed (0): Invalid argument
Could not create domain from /var/lib/one//datastores/0/3/deployment.0
ExitCode: 255
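In case it helps narrow things down, this is roughly what I would check on the host to see which UEFI firmware files are actually installed (Debian package names from memory):

# list the UEFI firmware shipped for x86_64 (OVMF) and aarch64 (AAVMF), if present
ls -l /usr/share/OVMF/ /usr/share/AAVMF/
# check which EDK2 firmware packages are installed on Debian
dpkg -l | grep -Ei 'ovmf|qemu-efi'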

See below for the libvirt domain configuration of the VM.

I tried to manually instantiate a VM with virt-install using the following command (with a Fedora aarch64 image), and that works flawlessly:

virt-install -v --name fedora-36-aarch64 --ram 4096 --disk path=f42,cache=none --nographics --os-variant fedora37 --import --arch aarch64 --vcpus 2 --network none

I tried finding solutions or further information online, but sadly came up empty. Does anybody have an idea of what I can try to make this work?
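If it helps, I can also post the output of a quick host virtualization check along these lines:

# libvirt's built-in sanity check for the QEMU/KVM driver
virt-host-validate qemu
# the KVM device node should exist if the kernel exposes virtualization extensions
ls -l /dev/kvm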

Cheers, Marc

Domain Configuration

<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
        <name>one-3</name>
        <title>testy</title>
        <uuid>71396d7b-2d1e-453b-8627-09f65ea11108</uuid>
        <cputune>
                <shares>100</shares>
        </cputune>
        <memory>524288</memory>
        <os>
                <type arch='aarch64' machine='virt'>hvm</type>
                <loader readonly="yes" type="pflash" secure="no">/usr/share/AAVMF/AAVMF_CODE.fd</loader>
                <nvram>/var/lib/one//datastores/0/3/testy_VARS.fd</nvram>
        </os>
        <devices>
                <emulator><![CDATA[/usr/bin/qemu-kvm-one]]></emulator>
                <disk type='file' device='disk'>
                        <source file='/var/lib/one//datastores/0/3/disk.0.snap/0'/>
                        <target dev='vda' bus='virtio'/>
                        <boot order='1'/>
                        <driver name='qemu' type='qcow2' cache='none' discard='unmap'/>
                </disk>
                <disk type='file' device='cdrom'>
                        <source file='/var/lib/one//datastores/0/3/disk.1'/>
                        <target dev='sda'/>
                        <readonly/>
                        <driver name='qemu' type='raw'/>
                </disk>
                <controller type='scsi' index='0' model='virtio-scsi'>
                        <driver queues='1'/>
                </controller>
                <interface type='bridge'>
                        <source bridge='minionebr'/>
                        <mac address='02:00:ac:10:64:02'/>
                        <target dev='one-3-0'/>
                        <model type='virtio'/>
                </interface>
                <graphics type='vnc' listen='0.0.0.0' port='5903'/>
        </devices>
        <devices>
                <controller index='0' type='pci' model='pcie-root'/>
                <controller type='pci' model='pcie-root-port'/>
                <controller type='pci' model='pcie-root-port'/>
                <controller type='pci' model='pcie-root-port'/>
                <controller type='pci' model='pcie-root-port'/>
                <controller type='pci' model='pcie-root-port'/>
                <controller type='pci' model='pcie-root-port'/>
                <controller type='pci' model='pcie-root-port'/>
                <controller type='pci' model='pcie-root-port'/>
                <controller type='pci' model='pcie-root-port'/>
                <controller type='pci' model='pcie-root-port'/>
                <controller type='pci' model='pcie-root-port'/>
                <controller type='pci' model='pcie-root-port'/>
                <controller type='pci' model='pcie-root-port'/>
                <controller type='pci' model='pcie-root-port'/>
                <controller type='pci' model='pcie-root-port'/>
                <controller type='pci' model='pcie-root-port'/>
                <controller type='pci' model='pcie-to-pci-bridge'/>
        </devices>
        <features>
                <acpi/>
        </features>
        <devices>
                <channel type='unix'>
                        <source mode='bind'/><target type='virtio' name='org.qemu.guest_agent.0'/>
                </channel>
        </devices>
        <devices><input type='keyboard' bus='virtio'/></devices>
        <metadata>
                <one:vm xmlns:one="http://opennebula.org/xmlns/libvirt/1.0">
                        <one:system_datastore><![CDATA[/var/lib/one//datastores/0/3]]></one:system_datastore>
                        <one:name><![CDATA[testy]]></one:name>
                        <one:uname><![CDATA[oneadmin]]></one:uname>
                        <one:uid>0</one:uid>
                        <one:gname><![CDATA[oneadmin]]></one:gname>
                        <one:gid>0</one:gid>
                        <one:opennebula_version>7.0.0</one:opennebula_version>
                        <one:stime>1752096998</one:stime>
                        <one:deployment_time>1752097004</one:deployment_time>
                </one:vm>
        </metadata>
</domain>



The problem appears to be a missing firmware file:

cp: cannot stat '/usr/share/OVMF/OVMF_VARS.fd': No such file or directory

In OpenNebula 7.0 there are two methods to specify the firmware for the VMs; see the OS section in the template definition.

The UEFI method should make libvirt select the firmware available on the hypervisor. If this is failing in your case, you can specify the full path, like:

OS = [ FIRMWARE = "..." ]

If you need to use a custom path, don’t forget to add it in the KVM configuration file and restart OpenNebula.
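For example, the whitelist of firmware paths in /etc/one/vmm_exec/vmm_exec_kvm.conf looks roughly like this (option name from memory, please double-check against the comments in the shipped config file):

# firmware files (full paths on the hosts) that VM templates are allowed to reference
OVMF_UEFIS = "/usr/share/OVMF/OVMF_CODE.fd /usr/share/AAVMF/AAVMF_CODE.fd"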

Cheers!

Hi @ruben

Thanks for your reply. I checked the configuration, and the file /etc/one/vmm_exec/vmm_exec_kvm.conf has the following config entry:

OS=[
  ARCH="aarch64",
  FIRMWARE="/usr/share/AAVMF/AAVMF_CODE.ms.fd",
  FIRMWARE_SECURE="no",
  MACHINE="virt-7.2"
]

and the OS template (from the marketplace) has the following entries:

I’m not 100% certain where this reference to ‘/usr/share/OVMF/OVMF_VARS.fd’ comes from. Is there some other configuration that I should focus on?
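If it helps, I can grep the frontend configuration and the driver scripts for that hard-coded path (assuming the default install locations):

# search the OpenNebula config and the remotes/driver scripts for the OVMF_VARS reference
grep -rn "OVMF_VARS" /etc/one /var/lib/one/remotes 2>/dev/null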

Sorry for the stupid questions. OpenNebula is very new to me, and my KVM/libvirt know-how could also benefit from a boost.

Cheers,
Marc

My bad @marc, I just stopped reading after the first warning (which actually should not be a problem). The actual error is this one:

qemu-kvm-one: KVM is not supported for this guest CPU type

So if the processor does not have virtualization extensions, probably the best option is to use the LXC hypervisor.

Cheers

Ah, if it is actually supported, try to set up the CPU model as passthrough.
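In the VM template that would be something along these lines:

CPU_MODEL = [
  MODEL = "host-passthrough"
]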

Thank you very much @ruben for your support. Changing the CPU model to "host-passthrough" and the hypervisor to "qemu" in the VM template worked like a charm.

For anybody else encountering this problem, here is the template configuration:

{
  "CONTEXT": {
    "NETWORK": "YES",
    "SSH_PUBLIC_KEY": "$USER[SSH_PUBLIC_KEY]"
  },
  "CPU": "1",
  "CPU_MODEL": {
    "MODEL": "host-passthrough"
  },
  "DISK": {
    "IMAGE_ID": "3"
  },
  "GRAPHICS": {
    "LISTEN": "0.0.0.0",
    "TYPE": "vnc"
  },
  "HYPERVISOR": "qemu",
  "LOGO": "images/logos/fedora.png",
  "LXD_SECURITY_PRIVILEGED": "true",
  "MEMORY": "768",
  "OS": {
    "ARCH": "aarch64",
    "FIRMWARE": "/usr/share/AAVMF/AAVMF_CODE.fd",
    "FIRMWARE_SECURE": "no"
  },
  "SCHED_REQUIREMENTS": "HYPERVISOR=qemu & ARCH=aarch64"
}

The relevant sections from /etc/one/vmm_exec/vmm_exec_kvm.conf:

OS=[
  ARCH="aarch64",
  FIRMWARE="/usr/share/AAVMF/AAVMF_CODE.ms.fd",
  FIRMWARE_SECURE="no",
  MACHINE="host-passthrough"
]

CPU = [
  CPU_MODEL="host-passthrough"
]
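After editing vmm_exec_kvm.conf I also restarted OpenNebula (as Ruben mentioned above) so that the change is picked up:

# restart the OpenNebula daemons after changing the KVM driver configuration
sudo systemctl restart opennebula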

Great stuff and awesome community, thanks again!
