Error on creating VM (deploy: no block device)

Hi, I’ve got a strange problem when I try to create a VM on a fresh install of OpenNebula (Ubuntu 22.04, ONE 6.6.0) with an LXC node (Ubuntu 22.04, ONE 6.6.0, LXC driver).

I can add my host in Sunstone and I can download an image (with its template), but when I try to create a VM, it fails with this message:

DEPLOY: INFO: deploy: No block device on /var/lib/one/datastores/0/2/mapper/disk.0

In the syslog file on my LXC node, I see several errors.

During the installation of the opennebula-node-lxc package:

Feb 12 16:42:05 lxdtest systemd[1]: Reloaded Load AppArmor profiles.
Feb 12 16:42:05 lxdtest systemd-udevd[2274]: nbd0: Process '/usr/bin/unshare -m /usr/bin/snap auto-import --mount=/dev/nbd0' failed with exit code 1.
Feb 12 16:42:05 lxdtest systemd-udevd[2383]: nbd1: Process '/usr/bin/unshare -m /usr/bin/snap auto-import --mount=/dev/nbd1' failed with exit code 1.
Feb 12 16:42:05 lxdtest systemd-udevd[2387]: nbd2: Process '/usr/bin/unshare -m /usr/bin/snap auto-import --mount=/dev/nbd2' failed with exit code 1.
Feb 12 16:42:05 lxdtest systemd-udevd[2383]: nbd4: Process '/usr/bin/unshare -m /usr/bin/snap auto-import --mount=/dev/nbd4' failed with exit code 1.
Feb 12 16:42:05 lxdtest systemd-udevd[2275]: nbd3: Process '/usr/bin/unshare -m /usr/bin/snap auto-import --mount=/dev/nbd3' failed with exit code 1.

And during the creation of a VM:

Feb 12 17:02:39 lxdtest systemd-udevd[9266]: nbd0p1: Process '/usr/bin/unshare -m /usr/bin/snap auto-import --mount=/dev/nbd0p1' failed with exit code 1.
Feb 12 17:02:39 lxdtest systemd-udevd[9266]: message repeated 2 times: [ nbd0p1: Process '/usr/bin/unshare -m /usr/bin/snap auto-import --mount=/dev/nbd0p1' failed with exit code 1.]
Feb 12 17:02:40 lxdtest kernel: [ 1277.781131] block nbd0: NBD_DISCONNECT
Feb 12 17:02:40 lxdtest kernel: [ 1277.781148] block nbd0: Disconnected due to user request.
Feb 12 17:02:40 lxdtest kernel: [ 1277.781472] print_req_error: 178 callbacks suppressed
Feb 12 17:02:40 lxdtest kernel: [ 1277.781473] blk_update_request: I/O error, dev nbd0, sector 0 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
Feb 12 17:02:40 lxdtest kernel: [ 1277.781948] buffer_io_error: 182 callbacks suppressed
Feb 12 17:02:40 lxdtest kernel: [ 1277.781948] Buffer I/O error on dev nbd0, logical block 0, async page read
Feb 12 17:02:40 lxdtest kernel: [ 1277.782198] blk_update_request: I/O error, dev nbd0, sector 1 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
Feb 12 17:02:40 lxdtest kernel: [ 1277.782649] Buffer I/O error on dev nbd0, logical block 1, async page read
Feb 12 17:02:40 lxdtest kernel: [ 1277.783360] blk_update_request: I/O error, dev nbd0, sector 2 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
Feb 12 17:02:40 lxdtest kernel: [ 1277.783849] Buffer I/O error on dev nbd0, logical block 2, async page read
Feb 12 17:02:40 lxdtest kernel: [ 1277.784091] blk_update_request: I/O error, dev nbd0, sector 3 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
Feb 12 17:02:40 lxdtest kernel: [ 1277.784573] Buffer I/O error on dev nbd0, logical block 3, async page read
Feb 12 17:02:40 lxdtest kernel: [ 1277.784823] blk_update_request: I/O error, dev nbd0, sector 4 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
Feb 12 17:02:40 lxdtest kernel: [ 1277.785319] Buffer I/O error on dev nbd0, logical block 4, async page read
Feb 12 17:02:40 lxdtest kernel: [ 1277.785574] blk_update_request: I/O error, dev nbd0, sector 5 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
Feb 12 17:02:40 lxdtest kernel: [ 1277.786074] Buffer I/O error on dev nbd0, logical block 5, async page read
Feb 12 17:02:40 lxdtest kernel: [ 1277.786331] blk_update_request: I/O error, dev nbd0, sector 6 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
Feb 12 17:02:40 lxdtest kernel: [ 1277.786880] Buffer I/O error on dev nbd0, logical block 6, async page read
Feb 12 17:02:40 lxdtest kernel: [ 1277.787326] blk_update_request: I/O error, dev nbd0, sector 7 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
Feb 12 17:02:40 lxdtest kernel: [ 1277.787841] Buffer I/O error on dev nbd0, logical block 7, async page read
Feb 12 17:02:40 lxdtest kernel: [ 1277.788170] blk_update_request: I/O error, dev nbd0, sector 0 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
Feb 12 17:02:40 lxdtest kernel: [ 1277.788739] Buffer I/O error on dev nbd0, logical block 0, async page read
Feb 12 17:02:40 lxdtest kernel: [ 1277.789025] blk_update_request: I/O error, dev nbd0, sector 1 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
Feb 12 17:02:40 lxdtest kernel: [ 1277.789587] Buffer I/O error on dev nbd0, logical block 1, async page read
Feb 12 17:02:40 lxdtest kernel: [ 1277.789903] ldm_validate_partition_table(): Disk read failed.
Feb 12 17:02:40 lxdtest kernel: [ 1277.789963] Dev nbd0: unable to read RDB block 0
Feb 12 17:02:40 lxdtest kernel: [ 1277.790330] nbd0: unable to read partition table
Feb 12 17:02:40 lxdtest kernel: [ 1277.790574] ldm_validate_partition_table(): Disk read failed.
Feb 12 17:02:40 lxdtest kernel: [ 1277.790635] Dev nbd0: unable to read RDB block 0
Feb 12 17:02:40 lxdtest kernel: [ 1277.790994] nbd0: unable to read partition table

Any idea what the problem is? I’ve been on it for two days and I don’t have a clue…

Thanks!

I tried Debian 11 instead of Ubuntu 22.04: same problem.
OpenNebula 6.4 instead of OpenNebula 6.6: same problem.

Maybe it’s because all my servers run under VirtualBox?

Strange, because I’ve got an old platform with Debian 10 (front-end) / Ubuntu 18.04 with LXD (nodes) / OpenNebula 6.0 that works fine.

This is not really an error; it is an informational log entry written when the container storage auto-cleaner runs during the deployment phase (prior to actually creating the container).
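Conceptually, the cleaner just checks, for each disk, whether anything is still mapped under the VM directory before deploying. A loose shell sketch of that check (not the actual driver code; the VM ID 2 path is taken from your message):

for disk in /var/lib/one/datastores/0/2/mapper/disk.*; do
    # if nothing is mounted there, the "No block device on ..." line is logged and cleanup moves on
    mountpoint -q "$disk" || echo "No block device on $disk"
done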

Would you mind sharing the following (see the command sketch right after this list):

  • the full VM log
  • the VM Template
  • the container log at "/var/log/lxc/one-#{@vm_id}.log"
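On a default installation, something like this should collect all three (fill in the <VM_ID> placeholder; paths assume the standard log locations):

# on the front-end
cat /var/log/one/<VM_ID>.log      # full VM log
onevm show <VM_ID>                # VM info, including the instantiated template
# on the LXC node
cat /var/log/lxc/one-<VM_ID>.log  # container log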

Thanks for your answer! I again spent the whole day trying to find a solution, without success.

There is no container log; the directory /var/log/lxc is empty on my node.

The VM log:

Mon Feb 13 16:34:30 2023 [Z0][VM][I]: New state is ACTIVE
Mon Feb 13 16:34:30 2023 [Z0][VM][I]: New LCM state is PROLOG
Mon Feb 13 16:34:32 2023 [Z0][VM][I]: New LCM state is BOOT
Mon Feb 13 16:34:32 2023 [Z0][VMM][I]: Generating deployment file: /var/lib/one/vms/13/deployment.0
Mon Feb 13 16:34:32 2023 [Z0][VMM][I]: Successfully execute transfer manager driver operation: tm_context.
Mon Feb 13 16:34:32 2023 [Z0][VMM][I]: ExitCode: 0
Mon Feb 13 16:34:32 2023 [Z0][VMM][I]: Successfully execute network driver operation: pre.
Mon Feb 13 16:34:32 2023 [Z0][VMM][I]: ExitCode: 0
Mon Feb 13 16:34:32 2023 [Z0][VMM][I]: Successfully execute virtualization driver operation: /bin/mkdir -p.
Mon Feb 13 16:34:32 2023 [Z0][VMM][I]: ExitCode: 0
Mon Feb 13 16:34:32 2023 [Z0][VMM][I]: Successfully execute virtualization driver operation: /bin/cat - >/var/lib/one//datastores/0/13/vm.xml.
Mon Feb 13 16:34:32 2023 [Z0][VMM][I]: ExitCode: 0
Mon Feb 13 16:34:32 2023 [Z0][VMM][I]: Successfully execute virtualization driver operation: /bin/cat - >/var/lib/one//datastores/0/13/ds.xml.
Mon Feb 13 16:34:33 2023 [Z0][VMM][I]: Command execution fail (exit code: 255): cat << 'EOT' | /var/tmp/one/vmm/lxc/deploy '/var/lib/one//datastores/0/13/deployment.0' 'lxd1.ruche-cyril.sio' 13 lxd1.ruche-cyril.sio
Mon Feb 13 16:34:33 2023 [Z0][VMM][I]: deploy: No block device on /var/lib/one/datastores/0/13/mapper/disk.0
Mon Feb 13 16:34:33 2023 [Z0][VMM][I]: deploy: No block device on /var/lib/one/datastores/0/13/mapper/disk.1
Mon Feb 13 16:34:33 2023 [Z0][VMM][I]: Running command sudo lxc-ls
Mon Feb 13 16:34:33 2023 [Z0][VMM][I]: Running command sudo -n qemu-nbd --fork -c /dev/nbd0 /var/lib/one/datastores/0/13/disk.0
Mon Feb 13 16:34:33 2023 [Z0][VMM][I]: Running command lsblk -o NAME,FSTYPE
Mon Feb 13 16:34:33 2023 [Z0][VMM][I]: Running command lsblk -o NAME,FSTYPE
Mon Feb 13 16:34:33 2023 [Z0][VMM][I]: Running command lsblk -o NAME,FSTYPE
Mon Feb 13 16:34:33 2023 [Z0][VMM][I]: Running command lsblk -o NAME,FSTYPE
Mon Feb 13 16:34:33 2023 [Z0][VMM][I]: Running command sudo -n mount /dev/nbd0 /var/lib/one/datastores/0/13/mapper/disk.0
Mon Feb 13 16:34:33 2023 [Z0][VMM][I]: mount: /var/lib/one/datastores/0/13/mapper/disk.0: wrong fs type, bad option, bad superblock on /dev/nbd0, missing codepage or helper program, or other error.
Mon Feb 13 16:34:33 2023 [Z0][VMM][I]: Running command sudo -n qemu-nbd -d /dev/nbd0
Mon Feb 13 16:34:33 2023 [Z0][VMM][I]: There was an error creating the containter.
Mon Feb 13 16:34:33 2023 [Z0][VMM][I]: ExitCode: 255
Mon Feb 13 16:34:33 2023 [Z0][VMM][I]: ExitCode: 0
Mon Feb 13 16:34:33 2023 [Z0][VMM][I]: Successfully execute network driver operation: clean.
Mon Feb 13 16:34:33 2023 [Z0][VMM][I]: Failed to execute virtualization driver operation: deploy.
Mon Feb 13 16:34:33 2023 [Z0][VMM][E]: DEPLOY: INFO: deploy: No block device on /var/lib/one/datastores/0/13/mapper/disk.0 INFO: deploy: No block device on /var/lib/one/datastores/0/13/mapper/disk.1 Running command sudo lxc-ls Running command sudo -n qemu-nbd --fork -c /dev/nbd0 /var/lib/one/datastores/0/13/disk.0 Running command lsblk -o NAME,FSTYPE Running command lsblk -o NAME,FSTYPE Running command lsblk -o NAME,FSTYPE Running command lsblk -o NAME,FSTYPE Running command sudo -n mount /dev/nbd0 /var/lib/one/datastores/0/13/mapper/disk.0 mount: /var/lib/one/datastores/0/13/mapper/disk.0: wrong fs type, bad option, bad superblock on /dev/nbd0, missing codepage or helper program, or other error. Running command sudo -n qemu-nbd -d /dev/nbd0 There was an error creating the containter. ExitCode: 255
Mon Feb 13 16:34:33 2023 [Z0][VM][I]: New LCM state is BOOT_FAILURE

The VM template:

CONTEXT = [
NETWORK = "YES",
PASSWORD = "gmER/kGqfMtsOYjas7CKbA==",
SSH_PUBLIC_KEY = "$USER[SSH_PUBLIC_KEY]" ]
CPU = "0.25"
DISK = [
IMAGE = "Alpine 3.17",
IMAGE_UNAME = "oneadmin" ]
GRAPHICS = [
LISTEN = "0.0.0.0",
TYPE = "VNC" ]
HOT_RESIZE = [
CPU_HOT_ADD_ENABLED = "NO",
MEMORY_HOT_ADD_ENABLED = "NO" ]
HYPERVISOR = "lxc"
LOGO = "images/logos/alpine.png"
LXC_UNPRIVILEGED = "YES"
LXD_SECURITY_PRIVILEGED = "true"
MEMORY = "128"
MEMORY_RESIZE_MODE = "BALLOONING"
MEMORY_UNIT_COST = "MB"
NIC = [
NETWORK = "Ruche",
NETWORK_UNAME = "oneadmin",
SECURITY_GROUPS = "0" ]
USER_INPUTS = [
CPU = "O|fixed|| |0.25",
MEMORY = "O|fixed|| |128",
VCPU = "O|range||2..8|4" ]
VCPU = "4"

It’s an Alpine image, but I get the same problem with an Ubuntu image.

This looks to be the error. The LXC driver requires single-partition images in order to operate. These images are mounted on the host FS, and that looks to be the failure here. Is the image

IMAGE = "Alpine 3.17",
IMAGE_UNAME = "oneadmin" ]

the KVM virtual appliance from the marketplace? If so, the error is expected. If it is a single-partition image, then try to manually inspect the VM's disk.0 to see what is wrong with it.
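For the manual check, something along these lines should work; it reuses the same commands the driver runs in your log, and assumes /dev/nbd0 is free (run sudo modprobe nbd first if needed):

sudo qemu-nbd --fork -c /dev/nbd0 /var/lib/one/datastores/0/13/disk.0  # map the VM's disk.0
lsblk -o NAME,FSTYPE /dev/nbd0   # a single-partition image shows a filesystem directly on nbd0, not nbd0p1/p2/...
sudo file -s /dev/nbd0           # shows what actually sits at the start of the device (filesystem vs partition table)
sudo qemu-nbd -d /dev/nbd0       # disconnect when done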

This looks to be the error. The LXC driver requires single-partition images in order to operate.

YES! That was my problem!

This confirms that my level of English is really bad: I read this page several times without seeing that it applied to my problem.

By uploading an image from the Linux Containers marketplace instead (which actually makes sense…), it works without any problem.
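For reference, the same download can be done from the CLI with something like this (the app name and datastore are just examples, not from my actual setup):

onemarketapp list | grep -i alpine                       # find the app in the Linux Containers marketplace
onemarketapp export '<APP_NAME>' alpine317 --datastore default  # import the image and its template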

Thank you very much for your answer, really!

It can be a bit confusing (LXC vs LXD), because the LXD driver actually supported multi-partition images via fstab parsing, but we deemed this not worth porting to LXC.
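For context, that support amounted to reading the guest's /etc/fstab from the root partition to work out which partition mounts where; an illustrative fstab from such an image (hypothetical UUIDs, not from any specific appliance):

# /etc/fstab inside a multi-partition KVM appliance image (illustrative)
UUID=<root-uuid>   /      ext4   defaults   0 1
UUID=<boot-uuid>   /boot  ext4   defaults   0 2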

Indeed, my mistake may also come from there: my old platform used the LXD driver.