LXD - Guest disk not extending

Hello guys. I’m trying to extend an LXD VM’s 20 GB drive from Sunstone, but the change is not applied. I tried with both raw and qcow2 formats and verified that the disk has only one partition (/dev/sda1 for root), as stated in https://docs.opennebula.io/5.12/deployment/open_cloud_host_setup/lxd_driver.html
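For reference, this is roughly the CLI equivalent of the resize I am attempting from Sunstone (the VM ID, disk ID and target size below are just examples from my setup):

onevm disk-resize 10 0 30720    # resize disk 0 of VM 10; size in MB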

This is the output of df while using the raw format:

Filesystem           Type      Size  Used Avail Use% Mounted on
/dev/mapper/loop17p1 ext4       20G  2.9G   17G  15% /
none                 tmpfs     492K  4.0K  488K   1% /dev
udev                 devtmpfs   63G     0   63G   0% /dev/fuse
tmpfs                tmpfs     100K     0  100K   0% /dev/lxd
/dev/loop18          iso9660   374K  374K     0 100% /context
tmpfs                tmpfs     100K     0  100K   0% /dev/.lxd-mounts
tmpfs                tmpfs      63G     0   63G   0% /dev/shm
tmpfs                tmpfs      63G  288K   63G   1% /run
tmpfs                tmpfs     5.0M     0  5.0M   0% /run/lock
tmpfs                tmpfs      63G     0   63G   0% /sys/fs/cgroup
overlay              overlay    20G  2.9G   17G  15% /var/lib/docker/overlay2/f7f4d8f3e5734d062510083150a261f356e3d856622ccb35805953c60a82f14f/merged
tmpfs                tmpfs     196M     0  196M   0% /run/user/0

The image is based on Ubuntu 18.04 (with KVM, disk resize works just fine). Is this something anybody can help me with? If you need any system information, I will share it.

Thank you in advance.

Hello,

You wrote: “the disk has only 1 partition (/dev/sda1 for root) as stated per LXD Driver — OpenNebula 5.12.13 documentation”.

When the documentation says resizing is not supported on multiple-partition images, it actually means that resizing only works on images with no partitions at all (no partition table in the virtual disk; the disk itself is a single filesystem), like the ones found in the LXD marketplace.
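If you want to check which case a given image falls into, you can inspect the file on the frontend. For a raw image, something like this works (the datastore path and image name below are only examples):

file /var/lib/one/datastores/1/<image>           # a partitioned disk reports e.g. "DOS/MBR boot sector"
parted /var/lib/one/datastores/1/<image> print   # prints the partition table, if there is one

An image with no partition table will instead show up as a bare filesystem (e.g. “ext4 filesystem data”).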

Look at the following layout:

oneadmin@ubuntu2004-lxd-marketplace-5-12-6-3f160-0:~$ onevm list
  ID USER     GROUP    NAME                                                                                STAT  CPU     MEM HOST                                                          TIME
  10 oneadmin oneadmin alpine_3.12 - LXD-10                                                                runn    1    768M ubuntu2004-lxd-marketplace-5-12-6-3f160-0.test            0d 00h00
   9 oneadmin oneadmin b-4903-9                                                                            runn    1    128M ubuntu2004-lxd-marketplace-5-12-6-3f160-0.test            0d 02h14
oneadmin@ubuntu2004-lxd-marketplace-5-12-6-3f160-0:~$ lsblk 
NAME     MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
loop0      7:0    0 97.8M  1 loop /snap/core/10185
loop1      7:1    0 53.1M  1 loop /snap/lxd/11348
loop2      7:2    0  364K  0 loop /var/lib/one/datastores/0/9/mapper/disk.1
loop3      7:3    0  364K  0 loop /var/lib/one/datastores/0/10/mapper/disk.1
sda        8:0    0   60G  0 disk 
├─sda1     8:1    0 59.9G  0 part /
├─sda14    8:14   0    4M  0 part 
└─sda15    8:15   0  106M  0 part /boot/efi
sr0       11:0    1  366K  0 rom  
nbd0      43:0    0  256M  0 disk 
└─nbd0p1  43:1    0  255M  0 part /var/snap/lxd/common/lxd/storage-pools/default/containers/one-9/rootfs
nbd1      43:32   0    1G  0 disk /var/snap/lxd/common/lxd/storage-pools/default/containers/one-10/rootfs

VM 9 is Alpine 3.12 from the OpenNebula Marketplace, which is a KVM-ready image; you can see the partition layout even though it has only one root partition.

nbd0      43:0    0  256M  0 disk 
└─nbd0p1  43:1    0  255M  0 part /var/snap/lxd/common/lxd/storage-pools/default/containers/one-9/rootfs

Then you see VM 10, which is Alpine 3.12 from the LXD marketplace. There is no partition layout, just the disk itself formatted as a filesystem,

nbd1      43:32   0    1G  0 disk /var/snap/lxd/common/lxd/storage-pools/default/containers/one-10/rootfs

much like the context disks, which are single filesystems on loopback devices:

loop2      7:2    0  364K  0 loop /var/lib/one/datastores/0/9/mapper/disk.1
loop3      7:3    0  364K  0 loop /var/lib/one/datastores/0/10/mapper/disk.1

VM 10 is the one whose disk should be resizable.

Thanks for pointing out this inaccuracy in the documentation.

@dclavijo Thank you for your feedback on this.

Well, sad news for me, since I need to somehow extend the running VMs. Is there any dirty workaround for this? Are you planning to support multiple-partition images for LXD containerization?

These operations are handled by the driver itself, at what you could call the OpenNebula level, whereas with KVM the hypervisor extends the disk and the guest OS makes use of it. If you want this feature, please open a feature request on our GitHub repo so we can track its popularity.
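If you really need the space right now, the dirty workaround is to do offline, by hand, what the KVM flow does automatically. This is only a sketch, assuming a raw image, an ext4 root on partition 1 and a free /dev/nbd0; the datastore path is an example, growpart comes from the cloud-utils package, back the image up first, and note that OpenNebula will not know about the new size:

# with the VM powered off, grow the backing image
qemu-img resize -f raw /var/lib/one/datastores/0/9/disk.0 +10G
# expose the image as a block device
modprobe nbd max_part=8
qemu-nbd -f raw -c /dev/nbd0 /var/lib/one/datastores/0/9/disk.0
# grow partition 1 and the filesystem on it
growpart /dev/nbd0 1
e2fsck -f /dev/nbd0p1
resize2fs /dev/nbd0p1
# detach and boot the VM again
qemu-nbd -d /dev/nbd0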