Problem creating a new VM with network adapter

Hello,

I am currently building a test environment that mirrors an existing production cloud. The purpose is to evaluate problems in the upgrade process to a newer version, so I am deliberately using the same older operating system and OpenNebula versions that run on our real-world cloud. I have one controller and two nodes. The nodes provide a GlusterFS volume that is mounted on the controller at /var/lib/one. DEBUG_LEVEL is set to 3 in /etc/one/oned.conf.
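
For completeness, the relevant settings can be checked with standard commands (a sketch; only the paths already mentioned above are assumed):

# /etc/one/oned.conf (excerpt); DEBUG_LEVEL 3 enables debug output
DEBUG_LEVEL = 3

# confirm the GlusterFS volume is mounted on the controller
findmnt /var/lib/one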

I can create a VM without a network adapter just fine, but as soon as I try to create one with a network adapter, I receive the message:

DEPLOY: Could not create domain from /var/lib/one//datastores/0/19/deployment.0

The log file /var/log/one/oned.log outputs the following:

Wed Mar 26 10:29:35 2025 [Z0][ReM][D]: Req:7664 UID:0 IP:127.0.0.1 one.vm.deploy invoked , 21, 2, false, 0, ""
Wed Mar 26 10:29:35 2025 [Z0][DiM][D]: Deploying VM 21
Wed Mar 26 10:29:35 2025 [Z0][ReM][D]: Req:7664 UID:0 one.vm.deploy result SUCCESS, 21
Wed Mar 26 10:29:36 2025 [Z0][DBM][I]: Purging obsolete LogDB records: 0 records purged. Log state: 0,0 - 0,0
Wed Mar 26 10:29:36 2025 [Z0][DBM][I]: Purging obsolete federated LogDB records: 0 records purged. Federated log size: 0.
Wed Mar 26 10:29:36 2025 [Z0][ReM][D]: Req:3904 UID:0 IP:127.0.0.1 one.vm.info invoked , 21, false
Wed Mar 26 10:29:36 2025 [Z0][ReM][D]: Req:3904 UID:0 one.vm.info result SUCCESS, "<VM><ID>21</ID><UID>..."
Wed Mar 26 10:29:37 2025 [Z0][TrM][D]: Message received: TRANSFER SUCCESS 21 -

Wed Mar 26 10:29:37 2025 [Z0][VMM][I]: Successfully execute transfer manager driver operation: tm_context.
Wed Mar 26 10:29:38 2025 [Z0][VMM][I]: ExitCode: 0
Wed Mar 26 10:29:38 2025 [Z0][VMM][I]: Successfully execute network driver operation: pre.
Wed Mar 26 10:29:38 2025 [Z0][VMM][I]: ExitCode: 0
Wed Mar 26 10:29:38 2025 [Z0][VMM][I]: Successfully execute virtualization driver operation: /bin/mkdir -p.
Wed Mar 26 10:29:38 2025 [Z0][VMM][I]: ExitCode: 0
Wed Mar 26 10:29:38 2025 [Z0][VMM][I]: Successfully execute virtualization driver operation: /bin/cat - >/var/lib/one//datastores/0/21/vm.xml.
Wed Mar 26 10:29:38 2025 [Z0][VMM][I]: ExitCode: 0
Wed Mar 26 10:29:38 2025 [Z0][VMM][I]: Successfully execute virtualization driver operation: /bin/cat - >/var/lib/one//datastores/0/21/ds.xml.
Wed Mar 26 10:29:38 2025 [Z0][VMM][I]: Command execution fail: cat << EOT | /var/tmp/one/vmm/kvm/deploy '/var/lib/one//datastores/0/21/deployment.0' '10.0.0.24' 21 10.0.0.24
Wed Mar 26 10:29:38 2025 [Z0][VMM][I]: error: Disconnected from qemu:///system due to end of file
Wed Mar 26 10:29:38 2025 [Z0][VMM][I]: error: Failed to create domain from /var/lib/one//datastores/0/21/deployment.0
Wed Mar 26 10:29:38 2025 [Z0][VMM][I]: error: End of file while reading data: Input/output error
Wed Mar 26 10:29:38 2025 [Z0][VMM][E]: Could not create domain from /var/lib/one//datastores/0/21/deployment.0
Wed Mar 26 10:29:38 2025 [Z0][VMM][I]: ExitCode: 255
Wed Mar 26 10:29:38 2025 [Z0][VMM][I]: clean: Executed "sudo -n ovs-vsctl --if-exists del-port ovs1 one-21-0".
Wed Mar 26 10:29:38 2025 [Z0][VMM][I]: ExitCode: 0
Wed Mar 26 10:29:38 2025 [Z0][VMM][I]: Successfully execute network driver operation: clean.
Wed Mar 26 10:29:38 2025 [Z0][VMM][I]: Failed to execute virtualization driver operation: deploy.
Wed Mar 26 10:29:38 2025 [Z0][IPM][D]: Message received: DEPLOY FAILURE 21 Could not create domain from /var/lib/one//datastores/0/21/deployment.0

Wed Mar 26 10:29:51 2025 [Z0][ReM][D]: Req:816 UID:0 IP:127.0.0.1 one.zone.raftstatus invoked 
Wed Mar 26 10:29:51 2025 [Z0][ReM][D]: Req:816 UID:0 one.zone.raftstatus result SUCCESS, "<RAFT><SERVER_ID>-1<..."
Wed Mar 26 10:29:51 2025 [Z0][ReM][D]: Req:1856 UID:0 IP:127.0.0.1 one.vmpool.infoextended invoked , -2, -1, -1, -1
Wed Mar 26 10:29:51 2025 [Z0][ReM][D]: Req:1856 UID:0 one.vmpool.infoextended result SUCCESS, "<VM_POOL><VM><ID>21<..."
Wed Mar 26 10:29:51 2025 [Z0][ReM][D]: Req:1616 UID:0 IP:127.0.0.1 one.vmpool.infoextended invoked , -2, -1, -1, -1
Wed Mar 26 10:29:51 2025 [Z0][ReM][D]: Req:1616 UID:0 one.vmpool.infoextended result SUCCESS, "<VM_POOL><VM><ID>21<..."
Wed Mar 26 10:30:06 2025 [Z0][ReM][D]: Req:1600 UID:0 IP:127.0.0.1 one.zone.raftstatus invoked 
Wed Mar 26 10:30:06 2025 [Z0][ReM][D]: Req:1600 UID:0 one.zone.raftstatus result SUCCESS, "<RAFT><SERVER_ID>-1<..."
Wed Mar 26 10:30:06 2025 [Z0][ReM][D]: Req:5952 UID:0 IP:127.0.0.1 one.vmpool.infoextended invoked , -2, -1, -1, -1
Wed Mar 26 10:30:06 2025 [Z0][ReM][D]: Req:5952 UID:0 one.vmpool.infoextended result SUCCESS, "<VM_POOL><VM><ID>21<..."
Wed Mar 26 10:30:06 2025 [Z0][ReM][D]: Req:1792 UID:0 IP:127.0.0.1 one.vmpool.infoextended invoked , -2, -1, -1, -1
Wed Mar 26 10:30:06 2025 [Z0][ReM][D]: Req:1792 UID:0 one.vmpool.infoextended result SUCCESS, "<VM_POOL><VM><ID>21<..."
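
I assume the failure could also be reproduced outside of OpenNebula by feeding the same deployment file to libvirt directly on the node (a sketch based on the paths from the log above, run as oneadmin on the node):

virsh --connect qemu:///system create /var/lib/one/datastores/0/21/deployment.0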

In /var/log/one/sched.log I found:

Wed Mar 26 10:29:35 2025 [Z0][SCHED][D]: Dispatching VMs to hosts:
	VMID	Priority	Host	System DS
	--------------------------------------------------------------
	21	0		2	0

The datastores themselves look fine in the onedatastore list output. I had already fixed the known issue of a system datastore showing an unknown size percentage a while ago.

  ID NAME                                                              SIZE AVA CLUSTERS IMAGES TYPE DS      TM      STAT
   2 files                                                              20G 97% 0             0 fil  fs      ssh     on  
   1 default                                                            20G 97% 0             1 img  fs      qcow2   on  
   0 system                                                             20G 97% 0             0 sys  -       qcow2   on
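
For completeness, the full template of the system datastore can be dumped with:

onedatastore show 0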

The network adapter is supposed to be bridged via Open vSwitch to a physical adapter. The virtual network configuration looks like this:

BRIDGE = "ovs1"
DESCRIPTION = "Connect the VM to a local network with internet access through a NAT"
DNS = "XXX.XXX.XXX.XXX"
FILTER_IP_SPOOFING = "YES"
FILTER_MAC_SPOOFING = "YES"
GATEWAY = "10.10.10.1"
NETWORK_ADDRESS = "10.10.10.0"
NETWORK_MASK = "255.255.254.0"
OUTER_VLAN_ID = ""
PHYDEV = ""
SECURITY_GROUPS = "0"
VLAN_ID = "2"
VN_MAD = "ovswitch"
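
For what it's worth, the NIC definition that OpenNebula renders into deployment.0 can be inspected directly; with VN_MAD = "ovswitch" it should contain a <virtualport type='openvswitch'/> element (the context width of the grep is arbitrary):

grep -A 6 '<interface' /var/lib/one/datastores/0/21/deployment.0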

When I connect via SSH to one of the nodes, I can see that the ovs1 interface exists by running ip a in the shell:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: sfp0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:2e:07:26 brd ff:ff:ff:ff:ff:ff
    inet XX.XX.XX.XX/23 brd 10.0.1.255 scope global sfp0
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe2e:726/64 scope link 
       valid_lft forever preferred_lft forever
3: net1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master ovs-system state UP group default qlen 1000
    link/ether 08:00:27:45:e5:bb brd ff:ff:ff:ff:ff:ff
    inet6 fe80::a00:27ff:fe45:e5bb/64 scope link 
       valid_lft forever preferred_lft forever
4: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 92:17:23:b0:a0:22 brd ff:ff:ff:ff:ff:ff
5: ovs1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 08:00:27:45:e5:bb brd ff:ff:ff:ff:ff:ff
6: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:ea:5a:46 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
7: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:ea:5a:46 brd ff:ff:ff:ff:ff:ff

I get additional information with sudo ovs-vsctl show:

dbdae4bb-0000-4858-802e-15522ffd0000
    Bridge ovs1
        Port ovs1
            Interface ovs1
                type: internal
        Port net1
            Interface net1
    ovs_version: "2.13.8"

I am using the following software versions:

Operating system: Ubuntu 20.04.6 LTS
OpenNebula version: 6.0.0.3

Thank you in advance

Hello @NewVirtualMachine,

It looks like your OpenNebula version is very old; we no longer support it, and it might contain bugs and issues that have already been fixed in our latest version.

Let me know if you can’t upgrade the version. Also, I think it’s better to use a newer version of Ubuntu (ideally 22.04 or 24.04).

Cheers,

@FrancJP As I wrote at the very beginning of this thread, I am creating a test environment of a working production cloud to evaluate possible upgrade issues. The whole purpose of the test environment is to upgrade it, together with the underlying operating system, to a newer version. If I can’t get the old version running and run these tests, I can’t upgrade the production cloud. So the advice to move to a newer version doesn’t help me at all; upgrading is the entire point of this exercise.

Our production environment is really complex. Without a thorough evaluation the upgrade procedure will fail, so the test environment needs to run.

A similar error message is mentioned in another thread on this forum.

Have you double-checked that virtualization is enabled on your hypervisor nodes? If your hypervisor nodes are VMs themselves, you need to enable nested virtualization.
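
A quick way to verify this on each node (kvm-ok comes from the cpu-checker package on Ubuntu):

# CPU virtualization flags; a count of 0 means no (nested) virtualization
grep -Ec '(vmx|svm)' /proc/cpuinfo

# Ubuntu helper that checks whether KVM acceleration can be used
kvm-ok

# the kvm device must exist and be accessible on the hypervisor
ls -l /dev/kvm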

You can also check the libvirt error messages (journalctl -xeu libvirtd.service) and/or libvirt VM log file (its path varies depending on the distro).
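
On Ubuntu the per-VM log usually lives under /var/log/libvirt/qemu/ and is named after the libvirt domain, which OpenNebula calls one-<vmid>, so for your VM 21 something like:

journalctl -xeu libvirtd.service
tail -n 50 /var/log/libvirt/qemu/one-21.log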