Fs_lvm thin provisioning

Hello,
I am trying to implement thin provisioning on minione.
I have an old Dell server that I am using to test various things, but its disk storage is quite low.
I am trying to set up a copy of the fs_lvm datastore driver as a new fs_lvmthin driver.
I already have the right “free” storage (I rsynced tm/fs_lvm to tm/fs_lvmthin and changed the [pgl]vs calls).

But I am stuck on:

Error deploying virtual machine 1 to HID: 0. Reason: [one.vm.deploy] Image Datastore does not support transfer mode: fs_lvmthin

It is not very clear to me where I should declare the “supported” TM.
I thought there was not much difference from fs_lvm, but I did not find any indication in /etc/one/* of where this is declared.
Can you give me any hints on that?

Thank you !
Nicolas.

Every storage driver has to be enabled in /etc/one/oned.conf. The LVM drivers provided are fs_lvm and fs_lvm_ssh. If this fs_lvmthin is a third-party driver, it needs to be distributed to the frontend drivers at /var/lib/one/remotes and enabled in oned.conf with the proper configuration.
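For a third-party TM driver, the distribution step usually looks something like this (a sketch assuming a standard frontend install under /var/lib/one; the fs_lvmthin name just mirrors the one in this thread):

# On the frontend, as oneadmin: place the driver scripts in the remotes tree
cp -a /var/lib/one/remotes/tm/fs_lvm /var/lib/one/remotes/tm/fs_lvmthin
# ...edit the copied scripts as needed for the thin-LV behavior...

# Push the updated remotes tree to all hypervisor hosts
onehost sync --force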

Hello,
of course I have set that up:

TM_MAD_CONF = [
    NAME = "fs_lvmthin", LN_TARGET = "SYSTEM", CLONE_TARGET = "SYSTEM", SHARED="YES",
    DRIVER = "raw", DISK_TYPE = "BLOCK"
]

and

TM_MAD = [
    EXECUTABLE = "one_tm",
    ARGUMENTS = "-t 15 -d dummy,lvm,shared,fs_lvm,fs_lvm_ssh,qcow2,ssh,ceph,dev,vcenter,iscsi_libvirt,fs_lvmthin"
]

#*******************************************************************************
# Datastore Driver Configuration
#*******************************************************************************
# Drivers to manage the datastores, specialized for the storage backend
#   executable: path of the transfer driver executable, can be an
#               absolute path or relative to $ONE_LOCATION/lib/mads (or
#               /usr/lib/one/mads/ if OpenNebula was installed in /)
#
#   arguments : for the driver executable
#       -t number of threads, i.e. number of repo operations at the same time
#       -d datastore mads separated by commas
#       -s system datastore tm drivers, used to monitor shared system ds.
#       -w Timeout in seconds to execute external commands (default unlimited)
#*******************************************************************************

DATASTORE_MAD = [
    EXECUTABLE = "one_datastore",
    ARGUMENTS  = "-t 15 -d dummy,fs,lvm,ceph,dev,iscsi_libvirt,vcenter,restic,rsync -s shared,ssh,ceph,fs_lvm,fs_lvm_ssh,qcow2,vcenter,fs_lvmthin"
]

Hello,
The TM_MAD is used in both SYSTEM and IMAGE datastore contexts. Some of the configuration is common to both, but other parts differ. It looks like you are missing the part that, I don’t know the right word, “declares” or “whitelists” the system datastore for the image datastore. I mean the configuration variable TM_MAD_SYSTEM=<datastorename> and its companion variables LN_TARGET_<datastorename>, CLONE_TARGET_<datastorename> and DISK_TYPE_<datastorename>.

I am not sure what your setup is, so I can only provide generic guidance.
In the TM_MAD_CONF of the IMAGE datastore driver you should use the above variables to “declare” how the system datastore behaves with the images of the given image datastore.
You could look at how the ceph driver is “whitelisted” for the ssh and qcow2 system datastores, for example.
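For reference, this is roughly how the stock ceph entry in oned.conf does that whitelisting (values taken from a recent default oned.conf, so double-check against your own version), followed by a hypothetical fs_lvmthin entry built on the same pattern:

TM_MAD_CONF = [
    NAME = "ceph", LN_TARGET = "NONE", CLONE_TARGET = "SELF", SHARED = "YES",
    DS_MIGRATE = "NO", DRIVER = "raw", ALLOW_ORPHANS = "mixed",
    TM_MAD_SYSTEM = "ssh,shared",
    LN_TARGET_SSH = "SYSTEM", CLONE_TARGET_SSH = "SYSTEM", DISK_TYPE_SSH = "FILE",
    LN_TARGET_SHARED = "NONE", CLONE_TARGET_SHARED = "SELF", DISK_TYPE_SHARED = "RBD"
]

# Hypothetical: fs_lvmthin whitelisting itself as its own system TM
TM_MAD_CONF = [
    NAME = "fs_lvmthin", LN_TARGET = "SYSTEM", CLONE_TARGET = "SYSTEM", SHARED = "YES",
    DRIVER = "raw", DISK_TYPE = "BLOCK",
    TM_MAD_SYSTEM = "fs_lvmthin",
    LN_TARGET_FS_LVMTHIN = "SYSTEM", CLONE_TARGET_FS_LVMTHIN = "SYSTEM",
    DISK_TYPE_FS_LVMTHIN = "BLOCK"
]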

Also, note that after editing oned.conf and restarting the opennebula service, you should check that the configuration is in place in the datastore attributes in the database too. If a variable is missing, you should add it, the same as in oned.conf. Usually, after the first edit all the other variables from oned.conf are populated too.
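In practice the check can be done with the standard CLI (the datastore ID 1 below is just a placeholder for your image datastore):

# Show the attributes currently stored in the database
onedatastore show 1

# Open the datastore template in $EDITOR and add any missing
# variables (TM_MAD_SYSTEM, LN_TARGET_..., etc.) to match oned.conf
onedatastore update 1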

So play with TM_MAD_SYSTEM, don’t forget to check/sync the variables in oned.conf and the database, and you should get rid of the “Datastore does not support transfer mode” message…

I hope this helps.

Best Regards,
Anton Todorov
