Everything works for the first container with version 3.0, but the documentation needs to be corrected: users are not telepathic and cannot be expected to know about these nuances.
The first server starts normally, but the second and third servers fail with BOOT_FAILURE:
Wed Sep 30 08:43:07 2020 [Z0][VMM][E]: deploy: mount_dev: mount: /var/snap/lxd/common/lxd/storage-pools/default/containers/one-5/rootfs: wrong fs type, bad option, bad superblock on /dev/nbd4p1, missing codepage or helper program, or other error.
Wed Sep 30 08:43:07 2020 [Z0][VMM][I]: deploy: Processing disk 0
Wed Sep 30 08:43:07 2020 [Z0][VMM][I]: deploy: Using qcow2 mapper for /var/lib/one/datastores/0/5/disk.0
Wed Sep 30 08:43:07 2020 [Z0][VMM][I]: deploy: Unmapping disk at /var/snap/lxd/common/lxd/storage-pools/default/containers/one-5/rootfs
Wed Sep 30 08:43:07 2020 [Z0][VMM][E]: deploy: Cannot detect block device from /var/snap/lxd/common/lxd/storage-pools/default/containers/one-5/rootfs
Wed Sep 30 08:43:07 2020 [Z0][VMM][E]: deploy: failed to dismantle container storage
Also, why does everything execute from the directory /var/tmp/one?
The performance counters (CPU/HDD/memory) for the running container are all 0!
When I launch a VNC console for the container in a separate window, and then launch another VNC console for the same container, each console behaves as if it belongs to a separate container: no history, a fresh screen. Commands started earlier keep running, but there is no way to return to them.
There is a note about this in the LXD node installation documentation.
/var/snap/lxd/common/lxd/storage-pools/default/containers/one-5/rootfs: wrong fs type, bad option, bad superblock on /dev/nbd4p1, missing codepage or helper program, or other error.
There seems to be an issue with the image: the host is failing to mount the partition /dev/nbd4p1; disk.0 is mapped to /dev/nbd1 and cannot be mounted.
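To check whether the image really contains a mountable partition, the mapping the driver performs can be reproduced by hand. This is a diagnostic sketch (requires root and the nbd kernel module; the device and mount point are examples):

```shell
# load the nbd module with partition support and attach the qcow2 image
modprobe nbd max_part=8
qemu-nbd --connect=/dev/nbd4 /var/lib/one/datastores/0/5/disk.0

# inspect the partition table and the filesystem on the first partition
fdisk -l /dev/nbd4
blkid /dev/nbd4p1

# attempt the same mount that failed in the driver log
mount /dev/nbd4p1 /mnt

# clean up
umount /mnt
qemu-nbd --disconnect /dev/nbd4
```

If `blkid` reports no filesystem on /dev/nbd4p1, the problem is in the image itself rather than in the driver.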
Also, why does everything execute from the directory /var/tmp/one?
The hypervisor drivers are synced from /var/lib/one/remotes on the frontend to /var/tmp/one on the hosts.
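If the copy under /var/tmp/one is removed or becomes stale, it can be pushed to the hosts again from the frontend (assuming a standard OpenNebula installation, run as the oneadmin user):

```shell
# copy the drivers from /var/lib/one/remotes to /var/tmp/one on all hosts,
# even if OpenNebula considers them up to date
onehost sync --force
```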
When I launch a VNC console for the container in a separate window, and then launch another VNC console for the same container, each console behaves as if it belongs to a separate container: no history, a fresh screen. Commands started earlier keep running, but there is no way to return to them.
I did not install this package; I assume Ubuntu 20.04 installed it itself as part of the minimal installation. I advise you to note this in the documentation.
About the BOOT_FAILURE error: the first server starts normally, so the image is most likely correct, but no partition is exposed for the second image.
About /var/tmp/one: when does the sync take place? I thought the /var/tmp directory is cleared periodically.
I may not fully understand the difference between KVM and LXD. I have previously only used Docker as a container platform, and I have little experience with LXD containers. The essence is as follows:
I launch the VNC console in the LXC container, enter some command, and click "Open in a separate browser window". The console opens in a separate window, but the screen is blank, as if the clear command had been executed. The launched command (for example, stress-ng) continues to run inside the container, but is not displayed on the new VNC console screen. If I launch another VNC console from the main browser window, the situation repeats: each VNC session works as a separate TTY terminal into the LXD container. I do not understand how to reconnect to the previous screen (to the running stress-ng command).
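A common workaround, not specific to OpenNebula, is to run long-lived commands inside a terminal multiplexer such as screen or tmux, so that any TTY can reattach to the same session. A sketch, assuming the screen package is installed inside the container:

```shell
# In the first VNC console: start a named screen session and
# run the long-lived command inside it
screen -S work
stress-ng --cpu 2      # keeps running inside the "work" session

# Later, from any other VNC console / TTY of the same container:
# list sessions, then detach it from the old terminal and reattach here
screen -ls
screen -d -r work
```

Each VNC connection is indeed an independent terminal; the multiplexer is what gives you a shared, reattachable session with its history and running processes.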