Sharing HV directory to LXD container

For my simple home server I was pointed to OpenNebula. I deployed MiniOne hoping to get a (relatively) simple GUI (Sunstone) to manage my LXD containers.

The first and most important container on this machine is for Nextcloud and obviously, this is only useful if I can easily provide it with enough storage space.

The system has a 200GB ZFS partition in which I’ve created several filesystems for, e.g., /home, /var/lib/one and /var/lib/lxd.

To me it seems logical to also use a separate filesystem for the Nextcloud data and then mount this filesystem into a location (directory) where the LXD container can access it.
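For the ZFS side, a minimal sketch of what that could look like. The pool name `tank` and dataset name `ncdata` are assumptions, not taken from the setup above; adjust them to your layout:

```shell
# Hypothetical pool/dataset names: "tank" and "ncdata" (adjust to yours).
# Create a dedicated filesystem for the Nextcloud data, mounted at a
# fixed location on the hypervisor:
zfs create -o mountpoint=/media/ncdata tank/ncdata

# Optionally cap its size so Nextcloud can't fill the whole pool:
zfs set quota=150G tank/ncdata
```

A per-dataset quota also makes it easy to grow or shrink the Nextcloud space later without touching the container.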

Question is, how do I do this with OpenNebula?
The whole Storage part seems to be focused on images but that’s something I would want to avoid in this case (why have a disk image for user data on a ZFS filesystem only to have the disk image formatted with ZFS again?).

Perhaps OpenNebula is completely overkill for my situation, but it’s the only (actively maintained) GUI solution I could find which supports LXD.

Any help would be much appreciated :slight_smile:

Versions of the related components and OS (frontend, hypervisors, VMs):

Hypervisor: Single i5 CPU with 8GB RAM, single 250GB SSD storage.
OS: Ubuntu 18.04.4 with OpenNebula MiniOne / LXD
Partitions: 50GB OS, ext4 + 200GB data, ZFS
VMs: currently 1 (container) - Ubuntu 18.04 LXD, running Nextcloud.
2-3 more LXD containers planned.

Hello, if you just want a GUI for LXD, it’s indeed a bit overkill :slight_smile: although there is a lot of added value over standalone LXD. What you currently want (directory passthrough) isn’t supported in OpenNebula yet, but it should be possible with a simple workaround. OpenNebula can detect container profiles defined on the nodes and apply one of them to a container. You can clone the default profile and modify it with the storage key required for the Nextcloud storage.
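In shell terms, the suggested workaround could look roughly like this. The profile name `ncdata` and the directory `/media/ncdata` are just example names:

```shell
# Example names only: profile "ncdata", HV directory /media/ncdata.
# Clone the default profile so its existing settings are kept:
lxc profile copy default ncdata

# Add a disk device that maps the hypervisor directory into the
# container at the same path:
lxc profile device add ncdata ncdata disk \
    source=/media/ncdata path=/media/ncdata

# Verify the resulting profile:
lxc profile show ncdata
```

Using `lxc profile device add` avoids hand-editing the profile’s YAML, which is easy to get wrong.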


Thanks @dclavijo for your quick reply. I’ve looked at the link you provided and indeed that sounds like a solution to my issue. But I guess I need a bit more info to actually make this work, i.e. where do I find this profile (which is something other than ‘Conf’ and ‘Template’ in the VM’s overview in Sunstone, I suppose?) and how do I clone it?

Also, with “storage key”, do you mean the command described on , like “lxc config device add nextcloud ncdata disk source=media/ncdata path=/media/ncdata”, but then translated into an LXD (or OpenNebula?) readable config (YAML?) file?

Is this something I should do from the terminal or can I find this in Sunstone somewhere?

Thanks again :slight_smile:

Edit - I first tried editing the container’s LXC YAML file directly in the terminal, but these changes didn’t persist once I rebooted the container. So apparently, OpenNebula pushes these files from elsewhere (the VM template, I assume).

After some tinkering I found where to update the VM template, so I’m ready to insert “LXD_PROFILE=<profile_name>” somewhere in there. Now just to find out where those profiles live, so I can clone and edit them.

Edit 2 - I found out how to clone and edit the LXD profile (simply run “lxc profile copy default ncdata”, then “lxc profile edit ncdata” in the terminal) and save it.

However, when I edit the VM template via Sunstone (VM -> Conf -> Update Configuration), the change is not saved when I click ‘Update’. I don’t get any error messages or warnings; the edit is simply discarded.

I’ve tried adding:
LXD_PROFILE = “ncdata”
both at the top and bottom -> both discarded.
LXD_PROFILE=ncdata (as per -> same.

Edit 3 - Fixed it, finally! :smiley:
Apparently, changes to the template can only be made via “Info” and then entering “LXD_PROFILE” and “ncdata” under “Attributes”.
Booted the machine and BAM, we have a mounted ZFS dir. Hurray :partying_face:

So, to wrap it up, for anyone running into this problem again in the future - Here’s how you (can) do directory passthrough on OpenNebula (credits @dclavijo !):

  1. On the HV, using the terminal, copy the default LXD profile to a new one (I’ll call it ‘new’ here)
    $ lxc profile copy default new
  2. Edit the new profile
    $ lxc profile edit new
  3. My contents look like this (be careful: indentation matters in YAML, and without the proper leading spaces you’ll get errors):
    config: {}
    # limits.cpu: "1"
    # limits.cpu.allowance: 50%
    # limits.memory: 512MB
    description: Custom profile for dir passthrough
    devices:
    # root:
    #   path: /
    #   pool: default
    #   type: disk
      ncdata:
        path: /media/dir
        source: /your/hv/dir
        type: disk
    name: new
    used_by: []
  4. Save the file and exit. If the indentation is wrong, this is when you’ll get parser errors.
  5. In Sunstone, go to your VM
  6. Click ‘Info’
  7. Scroll down below ‘Attributes’ and add
    LXD_PROFILE and new in the open fields.
  8. Finally, click “+” to confirm your changes.

Now when you boot your container again, it should have the mounted dir.
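To double-check the result from the hypervisor, something like the following should work. The container name here is an assumption (OpenNebula names its LXD containers one-<VM ID>, so check `lxc list` first), and /media/dir is the mount path from the profile in step 3:

```shell
# Find the container's LXD name first (OpenNebula uses one-<VM ID>):
lxc list

# Then confirm the passthrough mount, e.g. for container "one-0":
lxc exec one-0 -- findmnt /media/dir
```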