[SOLVED] Not able to Start a VM [LXD][5.12]

Hi everyone, I'm new to OpenNebula and I find the DC solution very appealing.
So I'm learning how to set up OpenNebula on two new servers before scaling it out to the rest of the infrastructure. I've managed to set up the front end plus a node on the first server, and a second node on the second server. Both can communicate through passwordless SSH and are in the same cluster. They haven't been added to DNS yet, which is why I use the IP for node 2 and localhost for node 1. There are no virtual networks configured, but the hosts are on the same network. Here is the error when I try to instantiate the ubuntu_bionic - LXD template from the Linux Containers marketplace with the default setup (5 GB for the image drive).

Thu Jul 23 09:23:24 2020 [Z0][VM][I]: New state is ACTIVE
Thu Jul 23 09:23:24 2020 [Z0][VM][I]: New LCM state is PROLOG
Thu Jul 23 09:23:30 2020 [Z0][VM][I]: New LCM state is BOOT
Thu Jul 23 09:23:30 2020 [Z0][VMM][I]: Generating deployment file: /var/lib/one/vms/3/deployment.0
Thu Jul 23 09:23:31 2020 [Z0][VMM][I]: Successfully execute transfer manager driver operation: tm_context.
Thu Jul 23 09:23:31 2020 [Z0][VMM][I]: Successfully execute network driver operation: pre.
Thu Jul 23 09:23:31 2020 [Z0][VMM][I]: Command execution fail: cat << EOT | /var/tmp/one/vmm/lxd/deploy '/var/lib/one//datastores/0/3/deployment.0' 'localhost' 3 localhost
Thu Jul 23 09:23:31 2020 [Z0][VMM][E]: deploy: Error: not found
Thu Jul 23 09:23:31 2020 [Z0][VMM][I]: /var/tmp/one/vmm/lxd/client.rb:102:in `wait': {"type"=>"sync", "status"=>"Success", "status_code"=>200, "operation"=>"", "error_code"=>0, "error"=>"", "metadata"=>{"id"=>"77822c81-e188-4016-bb4b-bbc2667b810c", "class"=>"task", "description"=>"Creating container", "created_at"=>"2020-07-23T09:23:31.779215466Z", "updated_at"=>"2020-07-23T09:23:31.779215466Z", "status"=>"Failure", "status_code"=>400, "resources"=>{"containers"=>["/1.0/containers/one-3"], "instances"=>["/1.0/instances/one-3"]}, "metadata"=>nil, "may_cancel"=>false, "err"=>"Invalid devices: Device validation failed \"context\": Missing source \"/var/lib/one/datastores/0/3/mapper/disk.1\" for disk \"context\"", "location"=>"none"}} (LXDError)
Thu Jul 23 09:23:31 2020 [Z0][VMM][I]: from /var/tmp/one/vmm/lxd/container.rb:517:in `wait?'
Thu Jul 23 09:23:31 2020 [Z0][VMM][I]: from /var/tmp/one/vmm/lxd/container.rb:135:in `create'
Thu Jul 23 09:23:31 2020 [Z0][VMM][I]: from /var/tmp/one/vmm/lxd/deploy:52:in
Thu Jul 23 09:23:31 2020 [Z0][VMM][I]: ExitCode: 1
Thu Jul 23 09:23:31 2020 [Z0][VMM][I]: Successfully execute network driver operation: clean.
Thu Jul 23 09:23:31 2020 [Z0][VMM][I]: Failed to execute virtualization driver operation: deploy.
Thu Jul 23 09:23:31 2020 [Z0][VMM][E]: Error deploying virtual machine
Thu Jul 23 09:23:31 2020 [Z0][VM][I]: New LCM state is BOOT_FAILURE

Here is my setup process (originally with a bit of French in it, since I'm French).

http://docs.opennebula.io

Front end install
fresh install of ubuntu 20.04
$ sudo apt update

$ sudo apt upgrade

$ sudo su

wget -q -O- https://downloads.opennebula.io/repo/repo.key | apt-key add -

echo "deb https://downloads.opennebula.io/repo/5.12/Ubuntu/20.04 stable opennebula" > /etc/apt/sources.list.d/opennebula.list

apt update

apt-get install opennebula opennebula-sunstone opennebula-gate opennebula-flow

su - oneadmin

oneadmin$ mkdir ~/.one

oneadmin$ echo "oneadmin:oneadmin" > ~/.one/one_auth

oneadmin$ exit

exit

$ sudo systemctl start opennebula opennebula-sunstone

The front end is ready at http://localhost:9869
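One note on the one_auth step: copy-pasting commands from a forum can silently turn ASCII quotes into typographic ones, and those characters then end up inside the credentials file. Writing the file with printf sidesteps the issue; a small sketch (using a /tmp path so it can be run safely, the real file is ~oneadmin/.one/one_auth):

```shell
# write the credentials with printf so no stray typographic quotes sneak in
# (demo path /tmp/one_auth; the real file is ~oneadmin/.one/one_auth)
printf 'oneadmin:oneadmin\n' > /tmp/one_auth
cat /tmp/one_auth
```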

Node install
$ sudo su
# wget -q -O- https://downloads.opennebula.io/repo/repo.key | apt-key add
# echo "deb https://downloads.opennebula.io/repo/5.12/Ubuntu/20.04 stable opennebula" > /etc/apt/sources.list.d/opennebula.list
# sudo apt-get install opennebula-node opennebula-node-lxd rbd-nbd
# exit

Passwordless SSH setup:

node side
$ sudo su
# passwd oneadmin
New password:
Retype new password:
passwd: password updated successfully

front end side
oneadmin$ ssh-keyscan 192.168.0.250 192.168.0.251 >> /var/lib/one/.ssh/known_hosts

oneadmin$ ssh-copy-id -i /var/lib/one/.ssh/id_rsa.pub 192.168.0.251
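If in doubt, you can check that ssh-keyscan actually populated known_hosts: ssh-keygen -F looks a host up in a known_hosts file. A minimal sketch using a throwaway key pair, so it does not touch the real /var/lib/one/.ssh/known_hosts:

```shell
# generate a throwaway key just for the demo
ssh-keygen -t ed25519 -N '' -f /tmp/demo_key -q
# build a known_hosts entry like ssh-keyscan would produce
printf '192.168.0.251 %s\n' "$(cut -d' ' -f1-2 /tmp/demo_key.pub)" > /tmp/demo_known_hosts
# ssh-keygen -F prints the matching entry and exits 0 when the host is known
ssh-keygen -F 192.168.0.251 -f /tmp/demo_known_hosts
```

Against the real file, that would be: ssh-keygen -F 192.168.0.251 -f /var/lib/one/.ssh/known_hosts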

Thank you for your help
the text file of the setup process
opennebula setup (1.3 KB)

I'm pretty sure the problem comes from the context drive, but I can't manage to understand what it is for, or how I can delete it if it's not important. I've also tried to create the mapper folder and copy disk.1 into it, or create a symbolic link, but that doesn't work either. Here is the error when the file is where it is supposed to be.

Thu Jul 23 12:01:57 2020 [Z0][VM][I]: New LCM state is BOOT
Thu Jul 23 12:01:57 2020 [Z0][VMM][I]: Generating deployment file: /var/lib/one/vms/3/deployment.0
Thu Jul 23 12:01:57 2020 [Z0][VMM][I]: Successfully execute transfer manager driver operation: tm_context.
Thu Jul 23 12:01:57 2020 [Z0][VMM][I]: Successfully execute network driver operation: pre.
Thu Jul 23 12:01:59 2020 [Z0][VMM][I]: Command execution fail: cat << EOT | /var/tmp/one/vmm/lxd/deploy '/var/lib/one//datastores/0/3/deployment.0' 'localhost' 3 localhost
Thu Jul 23 12:01:59 2020 [Z0][VMM][I]: deploy: Overriding container
Thu Jul 23 12:01:59 2020 [Z0][VMM][I]: deploy: Processing disk 0
Thu Jul 23 12:01:59 2020 [Z0][VMM][I]: deploy: Using raw filesystem mapper for /var/lib/one/datastores/0/3/disk.0
Thu Jul 23 12:01:59 2020 [Z0][VMM][I]: deploy: Mapping disk at /var/snap/lxd/common/lxd/storage-pools/default/containers/one-3/rootfs using device /dev/loop6
Thu Jul 23 12:01:59 2020 [Z0][VMM][I]: deploy: Resizing filesystem ext4 on /dev/loop6
Thu Jul 23 12:01:59 2020 [Z0][VMM][I]: deploy: Mounting /dev/loop6 at /var/snap/lxd/common/lxd/storage-pools/default/containers/one-3/rootfs
Thu Jul 23 12:01:59 2020 [Z0][VMM][I]: deploy: Mapping disk at /var/lib/one/datastores/0/3/mapper/disk.1 using device /dev/loop8
Thu Jul 23 12:01:59 2020 [Z0][VMM][I]: deploy: Mounting /dev/loop8 at /var/lib/one/datastores/0/3/mapper/disk.1
Thu Jul 23 12:01:59 2020 [Z0][VMM][E]: deploy: mkdir_safe: mkdir: cannot create directory ‘/var/lib/one/datastores/0/3/mapper/disk.1’: File exists
Thu Jul 23 12:01:59 2020 [Z0][VMM][E]: deploy: mount_dev: mount: /var/lib/one/datastores/0/3/mapper/disk.1: mount point is not a directory.
Thu Jul 23 12:01:59 2020 [Z0][VMM][I]: deploy: Processing disk 0
Thu Jul 23 12:01:59 2020 [Z0][VMM][I]: deploy: Using raw filesystem mapper for /var/lib/one/datastores/0/3/disk.0
Thu Jul 23 12:01:59 2020 [Z0][VMM][I]: deploy: Unmapping disk at /var/snap/lxd/common/lxd/storage-pools/default/containers/one-3/rootfs
Thu Jul 23 12:01:59 2020 [Z0][VMM][I]: deploy: Umounting disk mapped at /dev/loop6
Thu Jul 23 12:01:59 2020 [Z0][VMM][I]: deploy: Unmapping disk at /var/lib/one/datastores/0/3/mapper/disk.1
Thu Jul 23 12:01:59 2020 [Z0][VMM][E]: deploy: Cannot detect block device from /var/lib/one/datastores/0/3/mapper/disk.1
Thu Jul 23 12:01:59 2020 [Z0][VMM][E]: deploy: failed to dismantle container storage
Thu Jul 23 12:01:59 2020 [Z0][VMM][I]: /var/tmp/one/vmm/lxd/deploy:64:in `': failed to setup container storage (RuntimeError)
Thu Jul 23 12:01:59 2020 [Z0][VMM][I]: ExitCode: 1
Thu Jul 23 12:01:59 2020 [Z0][VMM][I]: Successfully execute network driver operation: clean.
Thu Jul 23 12:01:59 2020 [Z0][VMM][I]: Failed to execute virtualization driver operation: deploy.
Thu Jul 23 12:01:59 2020 [Z0][VMM][E]: Error deploying virtual machine
Thu Jul 23 12:01:59 2020 [Z0][VM][I]: New LCM state is BOOT_FAILURE
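Regarding the "mount point is not a directory" error above: the log suggests the LXD driver creates <vm_dir>/mapper/disk.1 itself as a mount-point directory during deploy, so a hand-made file or symbolic link at that path makes it fail. A sketch of the cleanup (path taken from the log; adjust the datastore and VM IDs to yours):

```shell
# remove the hand-made mapper entry so the driver can recreate it at deploy time
# (path from the failing deploy log; adjust datastore/VM IDs)
rm -rf /var/lib/one/datastores/0/3/mapper
```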

I should point out that it was working fine with minione, but that's not made for real production use.

I know the docs currently say that Ubuntu >= 19.04 supports LXD, and I'm on 20.04, but I don't know if that's the issue.

I’ve found where to edit the template but it still doesn’t work.

And I've tried with Ubuntu 18.04; it's the same issue.


OK, so after a good night of sleep I've tried to create the template myself, like in the conference talk.

Here is the template from the marketplace, which didn't work for me. I'd be glad if someone can explain to me why :sweat_smile:

HYPERVISOR = "lxd"
SCHED_REQUIREMENTS = "HYPERVISOR=\"lxd\""
CPU = "1"
MEMORY = "768"
LXD_SECURITY_PRIVILEGED = "true"
GRAPHICS = [
LISTEN = "0.0.0.0",
TYPE = "vnc"
]
CONTEXT = [
NETWORK = "YES",
SSH_PUBLIC_KEY = "$USER[SSH_PUBLIC_KEY]",
SET_HOSTNAME = "$NAME"
]

And here is mine

CONTEXT = [
NETWORK = "YES",
PASSWORD = "iiRIxZniZbahN7X+JdLzXA==",
REPORT_READY = "YES",
SSH_PUBLIC_KEY = "it's my public front end ssh key",
START_SCRIPT_BASE64 = "c3VkbyBhcHQgdXBkYXRlCnN1ZG8gYXB0IHVwZ3JhZGU=",
TOKEN = "YES" ]
CPU = "0.3"
DISK = [
IMAGE = "ubuntu_bionic - LXD",
IMAGE_UNAME = "oneadmin" ]
GRAPHICS = [
LISTEN = "0.0.0.0",
TYPE = "VNC" ]
HYPERVISOR = "lxd"
INPUTS_ORDER = ""
LOGO = "images/logos/ubuntu.png"
LXD_PROFILE = ""
LXD_SECURITY_NESTING = "no"
LXD_SECURITY_PRIVILEGED = "no"
MEMORY = "2048"
MEMORY_UNIT_COST = "MB"
NIC = [
NETWORK = "private",
NETWORK_UNAME = "oneadmin",
SECURITY_GROUPS = "0" ]
OS = [
BOOT = "" ]
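The START_SCRIPT_BASE64 value above is just the start script, base64-encoded; it can be decoded to check what the container will run on first boot:

```shell
# decode the START_SCRIPT_BASE64 value from the template
echo 'c3VkbyBhcHQgdXBkYXRlCnN1ZG8gYXB0IHVwZ3JhZGU=' | base64 -d
# prints:
# sudo apt update
# sudo apt upgrade
```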

And with my own template I was asked to set up the OneGate parameter in /etc/one/oned.conf and to run onegate-server start in the CLI, so I don't know if that's what fixed the problem.

Hello @Mael_Chouteau

In your template you have set the TOKEN attribute in the CONTEXT section; that's why OpenNebula asks you to configure OneGate, because it understands that you want to use it. The template downloaded from the Marketplace doesn't have anything related to OneGate.
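If you don't want OneGate, dropping TOKEN (and REPORT_READY) from the CONTEXT section should stop OpenNebula from asking for it; a sketch based on your template (other attributes elided):

```
CONTEXT = [
NETWORK = "YES",
PASSWORD = "...",
SSH_PUBLIC_KEY = "$USER[SSH_PUBLIC_KEY]" ]
```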

What failure did you get from the one from the Marketplace?

Best,
Álex.

Hello @ahuertas

I don’t remember having issues with the marketplace.

CONTEXT = [
NETWORK = "YES",
PASSWORD = "l42GIPHcBv5v1h3zsgbaiA==",
SSH_PUBLIC_KEY = "$USER[SSH_PUBLIC_KEY]" ]
CPU = "1"
DISK = [
IMAGE_ID = "1" ]
GRAPHICS = [
LISTEN = "0.0.0.0",
PASSWD = "root",
TYPE = "VNC" ]
HYPERVISOR = "kvm"
INFO = "Please do not use this VM Template for vCenter VMs. Refer to the documentation https://bit.ly/37NcJ0Y"
INPUTS_ORDER = ""
LOGO = "images/logos/ubuntu.png"
LXD_SECURITY_PRIVILEGED = "true"
MEMORY = "2048"
MEMORY_UNIT_COST = "MB"
NIC = [
NETWORK = "réseau",
NETWORK_UNAME = "oneadmin",
SECURITY_GROUPS = "0" ]
OS = [
ARCH = "x86_64",
BOOT = "" ]
SCHED_REQUIREMENTS = "HYPERVISOR!=\"vcenter\""

Here is my current template, for KVM, but LXD works too.