I’m currently running 4.10 with the following datastores:
BRIDGE_LIST="tm-onhost1 tm-onhost2 tm-onhost3 tm-onhost4"
I’ve been having recurring issues with CLVM freezing and losing the ability to create new VMs. I noticed that with OpenNebula 5, the LVM datastore no longer requires CLVM. I currently use gfs2 for the actual system and image directories, but would love to get rid of all the DLM-related services.
If I move to version 5, I’m unclear how to provision the actual filesystems. Do I still need gfs2 or some kind of shared filesystem to hold the links to the LVM volumes? I guess I’m not fully understanding the documentation and how the image and system stores work in the new version.
It looks like quite a few people are having problems with this.
In fs_lvm you require two datastores:
- An image datastore with DS_MAD=fs and TM_MAD=fs_lvm. This datastore can have any name whatsoever. Images will be stored as files when registered, on the frontend (or on another node if BRIDGE_LIST is used).
- A system datastore with TM_MAD=fs_lvm. A corresponding VG called vg-one-&lt;ds_id&gt; must be created manually on the hypervisors (not on the frontend). Additionally, lvmetad has to be disabled (read the docs for more info).
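As a sketch, the two datastore definitions above could look like this (the datastore names are placeholders I made up; pick your own):

```shell
# Hypothetical image datastore template -- DS_MAD=fs, TM_MAD=fs_lvm as described above.
cat > image_ds.txt <<'EOF'
NAME   = lvm_images
DS_MAD = fs
TM_MAD = fs_lvm
EOF

# Hypothetical system datastore template -- only TM_MAD=fs_lvm is needed.
cat > system_ds.txt <<'EOF'
NAME   = lvm_system
TYPE   = SYSTEM_DS
TM_MAD = fs_lvm
EOF

# Register both on the frontend:
#   onedatastore create image_ds.txt
#   onedatastore create system_ds.txt
```

The IDs that `onedatastore create` prints back are the &lt;ds_id&gt; values used in the VG name and datastore paths below.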
Hi @jmelis ,
Thank you for your explanation!
Thank you @jmelis for the explanation. Just in case others try the fs_lvm driver, here are some points that weren’t clear to me from the documentation:
Create the two datastores per the documentation. You need to share BOTH datastore directories (via NFS, gfs2, etc.) to all the hosts.
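For example, assuming the frontend exports both directories over NFS (host names and datastore IDs 100/101 are made up here):

```shell
# Sketch: generate /etc/exports entries for both datastore directories.
# 100 = image datastore id, 101 = system datastore id -- both placeholders.
for ds in 100 101; do
  echo "/var/lib/one/datastores/$ds host1(rw,sync,no_subtree_check) host2(rw,sync,no_subtree_check)"
done > exports.snippet

# Append exports.snippet to /etc/exports on the frontend, then: exportfs -ra
# On each hypervisor, mount both paths at the same location, e.g.:
#   mount -t nfs frontend:/var/lib/one/datastores/100 /var/lib/one/datastores/100
#   mount -t nfs frontend:/var/lib/one/datastores/101 /var/lib/one/datastores/101
```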
You likely need a large LVM volume or other storage partition mounted on the image datastore’s /var/lib/one/datastores/&lt;ds_id&gt; directory, as it is going to need a lot of space: every image uploaded to the server lands there as a file. Once again, this directory also needs to be shared out to all the hosts via NFS or something similar.
You create an LVM VG for the system datastore (vg-one-&lt;ds_id&gt;, as @jmelis pointed out). The datastore directory (/var/lib/one/datastores/&lt;ds_id&gt;) will hold all the VM instance directories with the deployment files, disk links, and context disks; this has to be shared to all the hosts for live migration to work. When a VM is instantiated, the frontend drops the deployment and disk files into the /var/lib/one/datastores/&lt;ds_id&gt;/&lt;vm_id&gt; directory. The host then creates a logical volume in vg-one-&lt;ds_id&gt; for each disk needed and copies the image over from the image datastore.
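Putting the hypervisor side together, a minimal sketch (the datastore id 101 and the physical volume /dev/sdb are placeholders, not values from this thread):

```shell
# Placeholder system datastore id; adjust to your own.
ds_id=101
vg="vg-one-${ds_id}"   # naming convention required by the fs_lvm driver

# On each hypervisor (NOT the frontend), you would then run:
#   pvcreate /dev/sdb
#   vgcreate "$vg" /dev/sdb
#
# Disable lvmetad (set use_lvmetad = 0 in /etc/lvm/lvm.conf) and stop it:
#   systemctl stop lvm2-lvmetad.service lvm2-lvmetad.socket
#
# After instantiating a VM, each disk shows up as a logical volume:
#   lvs "$vg"
echo "$vg"
```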
Once I finally figured it all out, it has been a lot nicer to work with than the old LVM model, which required CLVM.