LXD - Cannot terminate container

While trying to terminate (or terminate hard) the VM, I get:

Thu Nov 4 10:59:47 2021 [Z0][VM][I]: New LCM state is SHUTDOWN
Thu Nov 4 10:59:49 2021 [Z0][VMM][I]: Command execution fail: cat << EOT | /var/tmp/one/vmm/lxd/cancel 'one-2479' 'zdh-004' 2479 zdh-004
Thu Nov 4 10:59:49 2021 [Z0][VMM][I]: shutdown: Processing disk 0
Thu Nov 4 10:59:49 2021 [Z0][VMM][I]: shutdown: Using raw filesystem mapper for /var/lib/one/datastores/107/2479/disk.0
Thu Nov 4 10:59:49 2021 [Z0][VMM][I]: shutdown: Unmapping disk at /var/snap/lxd/common/lxd/storage-pools/default/containers/one-2479/rootfs
Thu Nov 4 10:59:49 2021 [Z0][VMM][E]: shutdown: Cannot detect block device from /var/snap/lxd/common/lxd/storage-pools/default/containers/one-2479/rootfs
Thu Nov 4 10:59:49 2021 [Z0][VMM][I]: /var/tmp/one/vmm/lxd/shutdown:50:in `<main>': Failed to dismantle container storage (RuntimeError)
Thu Nov 4 10:59:49 2021 [Z0][VMM][I]: ExitCode: 1
Thu Nov 4 10:59:49 2021 [Z0][VMM][I]: Failed to execute virtualization driver operation: cancel.
Thu Nov 4 10:59:49 2021 [Z0][VMM][E]: Error canceling VM
Thu Nov 4 10:59:49 2021 [Z0][VM][I]: New LCM state is RUNNING
Thu Nov 4 10:59:49 2021 [Z0][LCM][I]: Fail to shutdown VM. Assuming that the VM is still RUNNING.

It seems like there is no mount point anymore:

root@zdh-004:~# lsblk | grep 2479
root@zdh-004:~#

How can I get this properly removed?

Thanks.


OpenNebula: 5.12.0.3
LXD: 3.0.4

I found some additional information in /var/log/syslog:

Nov  4 09:45:46 zdh-004 systemd[1]: var-snap-lxd-common-lxd-storage\x2dpools-default-containers-one\x2d2479-rootfs.mount: Succeeded.
Nov  4 09:45:46 zdh-004 systemd[839143]: var-snap-lxd-common-lxd-storage\x2dpools-default-containers-one\x2d2479-rootfs.mount: Succeeded.
Nov  4 09:45:46 zdh-004 systemd[516393]: var-snap-lxd-common-lxd-storage\x2dpools-default-containers-one\x2d2479-rootfs.mount: Succeeded.
Nov  4 09:45:46 zdh-004 systemd[1]: var-lib-one-datastores-107-2479-mapper-disk.1.mount: Succeeded.
Nov  4 09:45:46 zdh-004 systemd[516393]: var-lib-one-datastores-107-2479-mapper-disk.1.mount: Succeeded.
Nov  4 09:45:46 zdh-004 systemd[839143]: var-lib-one-datastores-107-2479-mapper-disk.1.mount: Succeeded.
Nov  4 09:45:52 zdh-004 lxd.daemon[2348333]: t=2021-11-04T10:45:52+0100 lvl=eror msg="Failed deleting container storage" err="error removing /var/snap/lxd/common/lxd/storage-pools/default/containers/one-2479: rm: cannot remove '/var/snap/lxd/common/lxd/storage-pools/default/containers/one-2479/rootfs': Device or resource busy\n" name=one-2479
Nov  4 09:45:52 zdh-004 systemd[1]: Started snap.lxd.lxc.25b9b2d1-c423-41c4-a386-b3533bf13589.scope.
Nov  4 09:45:53 zdh-004 systemd[1]: snap.lxd.lxc.25b9b2d1-c423-41c4-a386-b3533bf13589.scope: Succeeded.

However, lsof doesn’t show anything:

root@zdh-004:~# lsof +D /var/snap/lxd/common/lxd/storage-pools/default/containers/one-2479/rootfs/
root@zdh-004:~#

Hello

Thu Nov 4 10:59:49 2021 [Z0][VMM][E]: shutdown: Cannot detect block device from /var/snap/lxd/common/lxd/storage-pools/default/containers/one-2479/rootfs
Thu Nov 4 10:59:49 2021 [Z0][VMM][I]: /var/tmp/one/vmm/lxd/shutdown:50:in `<main>': Failed to dismantle container storage (RuntimeError)

This suggests that there is no mounted device on that path. You can inspect the status of the block device mappings on the virtualization node by issuing an lsblk command.
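For example, a rough sketch of the checks on the LXD node (the grep patterns just match the VM ID and name from this thread; adapt them to your case):

lsblk | grep 2479                 # any block device still tied to VM 2479?
losetup -l | grep 2479            # loop devices possibly left behind by the raw filesystem mapper
mount | grep one-2479             # leftover mounts under the container rootfs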

For a proper cleanup you would need the following (a rough sketch of these steps follows the list):

  • unmounting (umount command)
  • unmapping
  • deleting the container (lxc delete)
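
A sketch of that sequence, using the paths from your log. The device names (/dev/loop7, /dev/nbd0) are placeholders only; which unmapping command applies depends on whether the disk was attached as a loop device (raw image) or via qemu-nbd (qcow2):

# 1) unmount the container rootfs and any extra mapped disks
umount /var/snap/lxd/common/lxd/storage-pools/default/containers/one-2479/rootfs
umount /var/lib/one/datastores/107/2479/mapper/disk.1

# 2) unmap the backing block device
losetup -l | grep 2479        # raw image: find the loop device, e.g. /dev/loop7 ...
losetup -d /dev/loop7         # ... then detach it
# qemu-nbd -d /dev/nbd0       # qcow2 image: disconnect the nbd device instead

# 3) remove the container definition from LXD
lxc delete one-2479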

@dclavijo thanks for getting back to me.

I already checked with lsblk and I don't see any block device mapped for this VM:

root@zdh-004:~# lsblk | grep 2479
root@zdh-004:~#

root@zdh-004:~# mount | grep mapper | grep 2479
root@zdh-004:~#

Could you please specify how to unmap the device?

Thanks!

@dclavijo I was able to find the cause and executed:
lxc delete one-2479

However, now I'm not able to terminate (hard) the container from the GUI/CLI:

Thu Nov  4 18:09:58 2021 [Z0][VMM][D]: Message received: LOG I 2479 Command execution fail: cat << EOT | /var/tmp/one/vmm/lxd/shutdown 'one-2479' 'zdh-004' 2479 zdh-004

Thu Nov  4 18:09:58 2021 [Z0][VMM][D]: Message received: LOG I 2479 /var/tmp/one/vmm/lxd/client.rb:144:in `get_response': {"error"=>"not found", "error_code"=>404, "type"=>"error"} (LXDError)

Thu Nov  4 18:09:58 2021 [Z0][VMM][D]: Message received: LOG I 2479 from /var/tmp/one/vmm/lxd/client.rb:63:in `get'

Thu Nov  4 18:09:58 2021 [Z0][VMM][D]: Message received: LOG I 2479 from /var/tmp/one/vmm/lxd/container.rb:86:in `get'

Thu Nov  4 18:09:58 2021 [Z0][VMM][D]: Message received: LOG I 2479 from /var/tmp/one/vmm/lxd/shutdown:34:in `<main>'

Thu Nov  4 18:09:58 2021 [Z0][VMM][D]: Message received: LOG I 2479 ExitCode: 1

Thu Nov  4 18:09:58 2021 [Z0][VMM][D]: Message received: LOG I 2479 Failed to execute virtualization driver operation: shutdown.

Thu Nov  4 18:09:58 2021 [Z0][VMM][D]: Message received: SHUTDOWN FAILURE 2479 -

It says “not found”

The driver code is trying to get the information about the container backing the VM you want to terminate, but since you issued an LXD-level delete (lxc delete one-2479), LXD now returns a "not found" response.
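
You can confirm this on the LXD side (a quick check, using the container name from this thread):

lxc list one-2479       # an empty listing means nothing is left in LXD
lxc info one-2479       # now fails with a "not found" error, the same one the driver hits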

In order to delete the VM from OpenNebula you should issue: onevm recover --delete 2479
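
A short usage sketch, run on the front-end as the oneadmin user:

onevm recover --delete 2479
onevm list | grep 2479      # the VM should no longer appear in the default listing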