Unable to deploy/instantiate LXD app / Linux VM

From the front-end:

oneadmin@nebula0:~$ onedatastore show -x 0

<DATASTORE>
  <ID>0</ID>
  <UID>0</UID>
  <GID>0</GID>
  <UNAME>oneadmin</UNAME>
  <GNAME>oneadmin</GNAME>
  <NAME>system</NAME>
  <PERMISSIONS>
    <OWNER_U>1</OWNER_U>
    <OWNER_M>1</OWNER_M>
    <OWNER_A>0</OWNER_A>
    <GROUP_U>1</GROUP_U>
    <GROUP_M>0</GROUP_M>
    <GROUP_A>0</GROUP_A>
    <OTHER_U>0</OTHER_U>
    <OTHER_M>0</OTHER_M>
    <OTHER_A>0</OTHER_A>
  </PERMISSIONS>
  <DS_MAD></DS_MAD>
  <TM_MAD></TM_MAD>
  <BASE_PATH></BASE_PATH>
  <TYPE>1</TYPE>
  <DISK_TYPE>0</DISK_TYPE>
  <STATE>0</STATE>
  <CLUSTERS>
    <ID>0</ID>
  </CLUSTERS>
  <TOTAL_MB>0</TOTAL_MB>
  <FREE_MB>0</FREE_MB>
  <USED_MB>0</USED_MB>
  <IMAGES/>
  <TEMPLATE>
    <ALLOW_ORPHANS></ALLOW_ORPHANS>
    <DISK_TYPE></DISK_TYPE>
    <DS_MIGRATE></DS_MIGRATE>
    <RESTRICTED_DIRS></RESTRICTED_DIRS>
    <SAFE_DIRS></SAFE_DIRS>
    <TM_MAD></TM_MAD>
  </TEMPLATE>
</DATASTORE>


oneadmin@nebula0:~$ onedatastore show -x 1

<DATASTORE>
  <ID>1</ID>
  <UID>0</UID>
  <GID>0</GID>
  <UNAME>oneadmin</UNAME>
  <GNAME>oneadmin</GNAME>
  <NAME>default</NAME>
  <PERMISSIONS>
    <OWNER_U>1</OWNER_U>
    <OWNER_M>1</OWNER_M>
    <OWNER_A>0</OWNER_A>
    <GROUP_U>1</GROUP_U>
    <GROUP_M>0</GROUP_M>
    <GROUP_A>0</GROUP_A>
    <OTHER_U>0</OTHER_U>
    <OTHER_M>0</OTHER_M>
    <OTHER_A>0</OTHER_A>
  </PERMISSIONS>
  <DS_MAD></DS_MAD>
  <TM_MAD></TM_MAD>
  <BASE_PATH></BASE_PATH>
  <TYPE>0</TYPE>
  <DISK_TYPE>0</DISK_TYPE>
  <STATE>0</STATE>
  <CLUSTERS>
    <ID>0</ID>
  </CLUSTERS>
  <TOTAL_MB>100278</TOTAL_MB>
  <FREE_MB>90738</FREE_MB>
  <USED_MB>4405</USED_MB>
  <IMAGES/>
  <TEMPLATE>
    <ALLOW_ORPHANS></ALLOW_ORPHANS>
    <CLONE_TARGET></CLONE_TARGET>
    <DISK_TYPE></DISK_TYPE>
    <DS_MAD></DS_MAD>
    <LN_TARGET></LN_TARGET>
    <RESTRICTED_DIRS></RESTRICTED_DIRS>
    <SAFE_DIRS></SAFE_DIRS>
    <TM_MAD></TM_MAD>
  </TEMPLATE>
</DATASTORE>



oneadmin@nebula0:~$ onedatastore show -x 2

<DATASTORE>
  <ID>2</ID>
  <UID>0</UID>
  <GID>0</GID>
  <UNAME>oneadmin</UNAME>
  <GNAME>oneadmin</GNAME>
  <NAME>files</NAME>
  <PERMISSIONS>
    <OWNER_U>1</OWNER_U>
    <OWNER_M>1</OWNER_M>
    <OWNER_A>0</OWNER_A>
    <GROUP_U>1</GROUP_U>
    <GROUP_M>0</GROUP_M>
    <GROUP_A>0</GROUP_A>
    <OTHER_U>0</OTHER_U>
    <OTHER_M>0</OTHER_M>
    <OTHER_A>0</OTHER_A>
  </PERMISSIONS>
  <DS_MAD></DS_MAD>
  <TM_MAD></TM_MAD>
  <BASE_PATH></BASE_PATH>
  <TYPE>2</TYPE>
  <DISK_TYPE>0</DISK_TYPE>
  <STATE>0</STATE>
  <CLUSTERS>
    <ID>0</ID>
  </CLUSTERS>
  <TOTAL_MB>100278</TOTAL_MB>
  <FREE_MB>90738</FREE_MB>
  <USED_MB>4405</USED_MB>
  <IMAGES/>
  <TEMPLATE>
    <ALLOW_ORPHANS></ALLOW_ORPHANS>
    <CLONE_TARGET></CLONE_TARGET>
    <DS_MAD></DS_MAD>
    <LN_TARGET></LN_TARGET>
    <RESTRICTED_DIRS></RESTRICTED_DIRS>
    <SAFE_DIRS></SAFE_DIRS>
    <TM_MAD></TM_MAD>
  </TEMPLATE>
</DATASTORE>



oneadmin@nebula0:~$
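
The CDATA values inside DS_MAD, TM_MAD, BASE_PATH and the TEMPLATE tags did not survive the paste. For comparison, a stock ssh-based system datastore template looks roughly like this; these are the defaults I'd expect from a fresh install, not necessarily what my cloud has:

ALLOW_ORPHANS = "NO"
DISK_TYPE = "FILE"
DS_MIGRATE = "YES"
RESTRICTED_DIRS = "/"
SAFE_DIRS = "/var/tmp"
SHARED = "NO"
TM_MAD = "ssh"
TYPE = "SYSTEM_DS"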

First worker node on Ubuntu 18.04:

oneadmin@nebula1:~$ lxd --version
3.0.3
oneadmin@nebula1:~$

Breakthrough! With an LXD worker node on 18.04 and a bog-standard install, I can instantiate a VM.
Debian Buster deployed fine this time:

Tue May 5 18:10:52 2020 [Z0][VM][I]: New state is ACTIVE
Tue May 5 18:10:52 2020 [Z0][VM][I]: New LCM state is PROLOG
Tue May 5 18:12:37 2020 [Z0][VM][I]: New LCM state is BOOT
Tue May 5 18:12:37 2020 [Z0][VMM][I]: Generating deployment file: /var/lib/one/vms/0/deployment.0
Tue May 5 18:12:40 2020 [Z0][VMM][I]: Successfully execute transfer manager driver operation: tm_context.
Tue May 5 18:12:40 2020 [Z0][VMM][I]: Successfully execute network driver operation: pre.
Tue May 5 18:12:47 2020 [Z0][VMM][I]: deploy: Processing disk 0
Tue May 5 18:12:47 2020 [Z0][VMM][I]: deploy: Using raw filesystem mapper for /var/lib/one/datastores/0/0/disk.0
Tue May 5 18:12:47 2020 [Z0][VMM][I]: deploy: Mapping disk at /var/lib/lxd/storage-pools/default/containers/one-0/rootfs using device /dev/loop1
Tue May 5 18:12:47 2020 [Z0][VMM][I]: deploy: Resizing filesystem ext4 on /dev/loop1
Tue May 5 18:12:47 2020 [Z0][VMM][I]: deploy: Mounting /dev/loop1 at /var/lib/lxd/storage-pools/default/containers/one-0/rootfs
Tue May 5 18:12:47 2020 [Z0][VMM][I]: deploy: Mapping disk at /var/lib/one/datastores/0/0/mapper/disk.1 using device /dev/loop2
Tue May 5 18:12:47 2020 [Z0][VMM][I]: deploy: Mounting /dev/loop2 at /var/lib/one/datastores/0/0/mapper/disk.1
Tue May 5 18:12:47 2020 [Z0][VMM][I]: deploy: --- Starting container ---
Tue May 5 18:12:47 2020 [Z0][VMM][I]: ExitCode: 0
Tue May 5 18:12:47 2020 [Z0][VMM][I]: Successfully execute virtualization driver operation: deploy.
Tue May 5 18:12:47 2020 [Z0][VMM][I]: Successfully execute network driver operation: post.
Tue May 5 18:12:47 2020 [Z0][VM][I]: New LCM state is RUNNING

Glad you made it work. Now you can try fancy NFS setups; it shouldn't be a problem with the right LXD version.
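
The usual pattern there is to export /var/lib/one/datastores from the frontend and mount it on every node, then switch the system datastore's TM_MAD from ssh to shared. A minimal sketch, assuming nebula0 is the NFS server and exports that path:

# /etc/fstab on each LXD node
nebula0:/var/lib/one/datastores  /var/lib/one/datastores  nfs  defaults,soft  0  0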

If only I could get basic VM networking working… it isn't working yet. I lost all contact with my LXD worker node in the middle of it. I have to say the documentation for bridge networking is NOT great.
And contextualization with SSH keys is still very confusing, but that might be because networking isn't working.
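
As far as I can tell, the SSH key part is supposed to be just a CONTEXT section in the VM template, something like the snippet below, which presumably only kicks in once networking works and the image has the one-context package installed:

CONTEXT = [
  NETWORK = "YES",
  SSH_PUBLIC_KEY = "$USER[SSH_PUBLIC_KEY]"
]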

Here is my LXD worker node:

oneadmin@nebula1:~$ ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
link/ether f0:76:1c:8f:43:3d brd ff:ff:ff:ff:ff:ff
3: wlp2s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether 4c:bb:58:8e:77:85 brd ff:ff:ff:ff:ff:ff
4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default qlen 1000
link/ether 52:54:00:78:65:fa brd ff:ff:ff:ff:ff:ff
5: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel master virbr0 state DOWN mode DEFAULT group default qlen 1000
link/ether 52:54:00:78:65:fa brd ff:ff:ff:ff:ff:ff

and

oneadmin@nebula1:~$ ip ad
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether f0:76:1c:8f:43:3d brd ff:ff:ff:ff:ff:ff
inet 192.168.254.181/24 brd 192.168.254.255 scope global enp1s0
valid_lft forever preferred_lft forever
inet6 fe80::f276:1cff:fe8f:433d/64 scope link
valid_lft forever preferred_lft forever
3: wlp2s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 4c:bb:58:8e:77:85 brd ff:ff:ff:ff:ff:ff
4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether 52:54:00:78:65:fa brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
valid_lft forever preferred_lft forever
5: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
link/ether 52:54:00:78:65:fa brd ff:ff:ff:ff:ff:ff

I want my LXD machines bridged onto the lab network, using the range 192.168.254.190-195.
Which of these names/NICs should go where in the Create Bridge window?
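
My best guess is a netplan bridge along these lines; br0 is a name I picked, the gateway and DNS addresses are assumptions, and it has to be done from the console because enp1s0 loses its address mid-change (which would explain losing contact with the node):

# /etc/netplan/01-br0.yaml, then: sudo netplan apply
network:
  version: 2
  renderer: networkd
  ethernets:
    enp1s0:
      dhcp4: no
  bridges:
    br0:
      interfaces: [enp1s0]
      addresses: [192.168.254.181/24]
      gateway4: 192.168.254.1          # assumed
      nameservers:
        addresses: [192.168.254.1]     # assumed

The virtual network would then point at br0 and carry the lab range, e.g. as a template:

NAME = "lab-net"
VN_MAD = "bridge"
BRIDGE = "br0"
AR = [ TYPE = "IP4", IP = "192.168.254.190", SIZE = "6" ]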

@sandude Can you guide me? I have the same issue. Is this an issue with the LXD version? I have set up my frontend and LXD node in an Ubuntu 20.04 VM, and the LXD version is 4.0.5.

Tue Apr 20 08:31:21 2021 [Z0][VM][I]: New state is ACTIVE
Tue Apr 20 08:31:24 2021 [Z0][VM][I]: New LCM state is PROLOG
Tue Apr 20 08:33:13 2021 [Z0][VM][I]: New LCM state is BOOT
Tue Apr 20 08:33:15 2021 [Z0][VMM][I]: Generating deployment file: /var/lib/one/vms/36/deployment.0
Tue Apr 20 08:33:23 2021 [Z0][VMM][I]: Successfully execute transfer manager driver operation: tm_context.
Tue Apr 20 08:33:23 2021 [Z0][VMM][I]: Successfully execute network driver operation: pre.
Tue Apr 20 08:33:27 2021 [Z0][VMM][I]: Command execution fail: cat << EOT | /var/tmp/one/vmm/lxd/deploy '/var/lib/one//datastores/0/36/deployment.0' '51.81.168.228' 36 51.81.168.228
Tue Apr 20 08:33:27 2021 [Z0][VMM][E]: deploy: Error: not found
Tue Apr 20 08:33:27 2021 [Z0][VMM][I]: /var/tmp/one/vmm/lxd/client.rb:102:in `wait': {"type"=>"sync", "status"=>"Success", "status_code"=>200, "operation"=>"", "error_code"=>0, "error"=>"", "metadata"=>{"id"=>"e19bc666-c06d-4a71-9755-96dbc51820ef", "class"=>"task", "description"=>"Creating instance", "created_at"=>"2021-04-20T07:33:26.651766729Z", "updated_at"=>"2021-04-20T07:33:26.651766729Z", "status"=>"Failure", "status_code"=>400, "resources"=>{"containers"=>["/1.0/containers/one-36"], "instances"=>["/1.0/instances/one-36"]}, "metadata"=>nil, "may_cancel"=>false, "err"=>"Failed creating instance record: Failed initialising instance: Failed to add device \"context\": Missing source path \"/var/lib/one/datastores/0/36/mapper/disk.1\" for disk \"context\"", "location"=>"none"}} (LXDError)
Tue Apr 20 08:33:27 2021 [Z0][VMM][I]: from /var/tmp/one/vmm/lxd/container.rb:517:in `wait?'
Tue Apr 20 08:33:27 2021 [Z0][VMM][I]: from /var/tmp/one/vmm/lxd/container.rb:135:in `create'
Tue Apr 20 08:33:27 2021 [Z0][VMM][I]: from /var/tmp/one/vmm/lxd/deploy:52:in
Tue Apr 20 08:33:27 2021 [Z0][VMM][I]: ExitCode: 1
Tue Apr 20 08:33:27 2021 [Z0][VMM][I]: Successfully execute network driver operation: clean.
Tue Apr 20 08:33:27 2021 [Z0][VMM][I]: Failed to execute virtualization driver operation: deploy.
Tue Apr 20 08:33:27 2021 [Z0][VMM][E]: Error deploying virtual machine
Tue Apr 20 08:33:27 2021 [Z0][VM][I]: New LCM state is BOOT_FAILURE
Tue Apr 20 09:32:02 2021 [Z0][VM][I]: New LCM state is CLEANUP_DELETE
Tue Apr 20 09:32:03 2021 [Z0][VM][I]: New state is DONE
Tue Apr 20 09:32:03 2021 [Z0][VM][I]: New LCM state is LCM_INIT

You need to use the LXD 3.0.x series; that is the release the OpenNebula LXD driver was written against.
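
On Ubuntu 18.04 the stock apt package is already the 3.0.x series; 20.04 only ships LXD as a snap (4.x), so an 18.04 node is the simpler route. Roughly, on an 18.04 node where the 4.x snap has crept in:

lxd --version            # should report 3.0.x
sudo snap remove lxd     # drop the 4.x snap if present
sudo apt-get install lxd lxd-client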

Thank you @dclavijo. It worked once I used Ubuntu 16.04 for the frontend and Ubuntu 18.04 for the LXD node.