Broken install with "minione --verbose --lxd"

Hi,
I tried to install minione on a single host with ./minione --verbose --lxd,
but the deploy failed:

deploy: To start your first instance
Error: not found

Also, why does everything run from /var/tmp/one?

minione-lxd.log (4.3 KB)
minione-lxd-first-LXD.log (2.3 KB)

Can you tell us the LXD version installed on your minione node?

lxd --version

root@minione:~# lxd --version
4.6

Version 4 is not currently supported; you need to install version 3.0.x. Did you have LXD installed prior to running the minione install?
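
For reference, on a snap-based Ubuntu install, switching to the supported track looks roughly like this. This is only a sketch: the channel name is an assumption, so check snap info lxd for what is actually published, and a clean install is safer than downgrading an existing LXD:

snap remove --purge lxd
snap install lxd --channel=3.0/stable
lxd --version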

No. I only installed Ubuntu 20.04 and updated it.
My history:

passwd
vi /etc/default/grub
update-grub
init 6
apt update
apt upgrade
apt install -y mc
mc
systemctl restart ssh
init 6
blkid | awk -F\" '/swap/ {print $2}'
printf "RESUME=UUID=$(blkid | awk -F\" '/swap/ {print $2}')\n" | sudo tee /etc/initramfs-tools/conf.d/resume
update-initramfs -u -k all
init 6
wget 'https://github.com/OpenNebula/minione/releases/latest/download/minione'
chmod 777 minione
./minione --verbose --lxd

Everything works for the first container with version 3.0, but the documentation needs to be corrected; users are not telepathic and cannot be expected to know about these nuances.

  1. The first server starts normally, but the second and third servers end in BOOT_FAILURE:

Wed Sep 30 08:43:07 2020 [Z0][VMM][E]: deploy: mount_dev: mount: /var/snap/lxd/common/lxd/storage-pools/default/containers/one-5/rootfs: wrong fs type, bad option, bad superblock on /dev/nbd4p1, missing codepage or helper program, or other error.
Wed Sep 30 08:43:07 2020 [Z0][VMM][I]: deploy: Processing disk 0
Wed Sep 30 08:43:07 2020 [Z0][VMM][I]: deploy: Using qcow2 mapper for /var/lib/one/datastores/0/5/disk.0
Wed Sep 30 08:43:07 2020 [Z0][VMM][I]: deploy: Unmapping disk at /var/snap/lxd/common/lxd/storage-pools/default/containers/one-5/rootfs
Wed Sep 30 08:43:07 2020 [Z0][VMM][E]: deploy: Cannot detect block device from /var/snap/lxd/common/lxd/storage-pools/default/containers/one-5/rootfs
Wed Sep 30 08:43:07 2020 [Z0][VMM][E]: deploy: failed to dismantle container storage

  2. So, again, why does everything execute from the directory /var/tmp/one?

  3. The performance counters (CPU/HDD/memory) for the running container are 0!

  4. I launch a VNC console for the container in a separate window, then launch another VNC console for the same container, and so on. Each time the console behaves as if it were for a separate container: there is no history and the screen is new; running commands continue to execute, but it is impossible to return to them.

There is a note about this in the LXD node installation documentation.

/var/snap/lxd/common/lxd/storage-pools/default/containers/one-5/rootfs: wrong fs type, bad option, bad superblock on /dev/nbd4p1, missing codepage or helper program, or other error.

There seems to be an issue with the image: the host is failing to mount the partition /dev/nbd4p1, i.e. disk.0 is mapped to an NBD device but its partition cannot be mounted.
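
If you want to debug it manually, you can map the image yourself and inspect the partition table. A rough sketch, using the datastore path from your log and an arbitrary free NBD device (/dev/nbd15 here is just an example):

modprobe nbd
qemu-img info /var/lib/one/datastores/0/5/disk.0
qemu-nbd --connect=/dev/nbd15 /var/lib/one/datastores/0/5/disk.0
fdisk -l /dev/nbd15
qemu-nbd --disconnect /dev/nbd15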

So, again, why does everything execute from the directory /var/tmp/one?

The hypervisor drivers are synced from /var/lib/one/remotes on the frontend to /var/tmp/one on the host.
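
If you want to check or refresh that copy, something like the following should work from the frontend (assuming the standard OpenNebula CLI, run as the oneadmin user):

ls /var/lib/one/remotes    # driver sources on the frontend
ls /var/tmp/one            # synced copy on the hypervisor host
onehost sync --force       # push the drivers to the hosts again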

I launch a VNC console for the container in a separate window, then launch another VNC console for the same container, and so on. Each time the console behaves as if it were for a separate container: there is no history and the screen is new; running commands continue to execute, but it is impossible to return to them.

Can you elaborate more on this?

I did not install this package myself; I assume Ubuntu 20.04 installed it as part of the minimal installation. I advise you to note this in the documentation.

  1. About the BOOT_FAILURE error: the first server starts normally! So the image is most likely correct, but no mountable partition is produced for the second one.

  2. About /var/tmp/one: when does the sync take place? I thought the /var/tmp directory is cleared periodically.

  3. I may not fully understand the difference between KVM and LXD. I have previously only used Docker as a container platform and have little experience with LXD containers. The essence is as follows:
    I launch the VNC console in the LXC container, enter some command, and click “Open in a separate browser window”. The console opens in a separate window, but the screen is blank, as after executing the clear command. The launched command (for example, stress-ng) continues to execute inside the container but is not displayed on the new VNC console screen. If I launch another VNC console from the main browser window, the situation repeats itself: each VNC console works as a separate TTY terminal into the LXD container. I do not understand how to reconnect to the previous screen (to the running stress-ng command).
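
Not something minione sets up for you, but the usual workaround for this is to run long-lived commands inside a terminal multiplexer in the container, so any later VNC session can reattach to them. A sketch, assuming screen is installed in the container:

# first VNC session, inside the container
screen -S work
stress-ng --cpu 2      # keeps running after detaching
# detach with Ctrl-a d, or simply close the window

# any later VNC session, inside the container
screen -r work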