While trying to terminate (or terminate --hard) a VM, I am getting:
Thu Nov 4 10:59:47 2021 [Z0][VM][I]: New LCM state is SHUTDOWN
Thu Nov 4 10:59:49 2021 [Z0][VMM][I]: Command execution fail: cat << EOT | /var/tmp/one/vmm/lxd/cancel 'one-2479' 'zdh-004' 2479 zdh-004
Thu Nov 4 10:59:49 2021 [Z0][VMM][I]: shutdown: Processing disk 0
Thu Nov 4 10:59:49 2021 [Z0][VMM][I]: shutdown: Using raw filesystem mapper for /var/lib/one/datastores/107/2479/disk.0
Thu Nov 4 10:59:49 2021 [Z0][VMM][I]: shutdown: Unmapping disk at /var/snap/lxd/common/lxd/storage-pools/default/containers/one-2479/rootfs
Thu Nov 4 10:59:49 2021 [Z0][VMM][E]: shutdown: Cannot detect block device from /var/snap/lxd/common/lxd/storage-pools/default/containers/one-2479/rootfs
Thu Nov 4 10:59:49 2021 [Z0][VMM][I]: /var/tmp/one/vmm/lxd/shutdown:50:in `<main>': Failed to dismantle container storage (RuntimeError)
Thu Nov 4 10:59:49 2021 [Z0][VMM][I]: ExitCode: 1
Thu Nov 4 10:59:49 2021 [Z0][VMM][I]: Failed to execute virtualization driver operation: cancel.
Thu Nov 4 10:59:49 2021 [Z0][VMM][E]: Error canceling VM
Thu Nov 4 10:59:49 2021 [Z0][VM][I]: New LCM state is RUNNING
Thu Nov 4 10:59:49 2021 [Z0][LCM][I]: Fail to shutdown VM. Assuming that the VM is still RUNNING.
Thu Nov 4 10:59:49 2021 [Z0][VMM][E]: shutdown: Cannot detect block device from /var/snap/lxd/common/lxd/storage-pools/default/containers/one-2479/rootfs
Thu Nov 4 10:59:49 2021 [Z0][VMM][I]: /var/tmp/one/vmm/lxd/shutdown:50:in `<main>': Failed to dismantle container storage (RuntimeError)
This suggests that there is no device mounted at that path. You can inspect the state of the block device mappings on the virtualization node with the `lsblk` command.
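For example, on the virtualization node you could run something like the following (the rootfs path is taken from the log above; adjust it for your VM ID — `findmnt`, like `lsblk`, is part of util-linux):

```shell
# List block devices and where they are mounted on the host
lsblk -o NAME,TYPE,MOUNTPOINT

# Check whether anything is still mounted at the container rootfs path;
# findmnt prints the backing device if it is, and fails otherwise
ROOTFS=/var/snap/lxd/common/lxd/storage-pools/default/containers/one-2479/rootfs
findmnt -n -o SOURCE "$ROOTFS" 2>/dev/null || echo "no device mounted at $ROOTFS"
```

If `findmnt` prints nothing for the rootfs path, the disk mapper has nothing left to unmap, which matches the "Cannot detect block device" error in the log.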
The driver code is trying to get information about the container backing the VM you want to terminate, but since you issued an LXD-level delete (`lxc delete one-2479`), LXD returns a "not found" response.
In order to delete the VM from OpenNebula, you should issue `onevm recover --delete 2479`.
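Since the container was already removed behind OpenNebula's back, the cleanup on the front-end would look roughly like this (VM ID 2479 is from the log above; adjust for yours):

```shell
# Tell OpenNebula to delete the VM from its side; the backing LXD
# container is already gone, so no driver shutdown is attempted
onevm recover --delete 2479

# The VM should no longer appear in the pool afterwards
onevm list
```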