fs_lvm Datastore Config

Hi folks,

Currently I’m trying to configure an fs_lvm datastore according to this guide.

But it does not seem to be working as expected: the node host with the LVM config does not mount/read/make the datastore available using the VG named as the documentation describes.

Currently my hosts/nodes have the volumes mapped over SAN, following the design.

Could someone please point me in the right direction with this config?

Thanks in advance!




Can you provide the log of the failed VM? You can find it on the frontend at /var/log/one/<VM_ID>.log

Kind Regards,
Anton Todorov

Hello Anton,

Well, I don’t have VMs running in this environment yet.
As you can see, the main problem seems to be that datastores 105 and 106 are using the frontend’s local filesystem instead of the nodes’ volume groups (500 GB each).

[root@opennebula ~]# onedatastore list
  ID NAME        SIZE  AVAIL CLUSTERS IMAGES TYPE DS  TM     STAT
   0 system      -     -     0        0      sys  -   ssh    on
   1 default     17.5G 90%   0        0      img  fs  ssh    on
   2 files       17.5G 90%   0        0      fil  fs  ssh    on
 105 lvm_system  17.5G 90%   0        0      sys  -   fs_lvm on
 106 production  17.5G 90%   0        0      img  fs  fs_lvm on

All datastores have the same size.

Do you have any idea how to fix it?

Best regards,


When you register a new image you should place it in the production datastore: oneimage create ... -d production. It will be stored as a file, not as a logical volume.

When the VM is instantiated, this file will be dumped into a new LV in vg-one-105 (which is the system datastore). You can remove vg-one-106, as it will not be used; the vg-one-<id> naming scheme applies to system datastores, not to regular image datastores.
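A concrete sketch of that flow (the image name and source path are hypothetical; the datastore names and ids are the ones from this thread):

```shell
# Register an image in the *image* datastore; it is stored as a plain
# file under /var/lib/one/datastores/<image_ds_id>/ on the frontend.
# (Image name and source path are made up for illustration.)
oneimage create --name centos7 --path /var/tmp/centos7.qcow2 -d production

# After instantiating a VM from that image, its disk appears as a
# logical volume in the *system* datastore's volume group:
lvs vg-one-105
```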

Hi @jmelis.

Thank you for your feedback.
Yes, you are right, but the problem seems to be that the disk/datastore is limited to the frontend host.

Well, I think I have now discovered the problem.
After reading the code, it seems that the BRIDGE_LIST variable is needed.

After creating the datastores with BRIDGE_LIST, it works.

cat ds.conf
NAME = lvm_system
TM_MAD = fs_lvm

cat ds.conf
NAME = production
DS_MAD = fs
TM_MAD = fs_lvm
SAFE_DIRS = "/var/tmp /tmp"
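For reference, the two templates with BRIDGE_LIST added might look like this (the node hostnames are placeholders; as far as I know, TYPE = SYSTEM_DS is also needed when registering a system datastore from a template file):

```
# ds_system.conf - system datastore (the VM LVs live here)
NAME        = lvm_system
TYPE        = SYSTEM_DS
TM_MAD      = fs_lvm
BRIDGE_LIST = "node1 node2"

# ds_image.conf - image datastore (plain files)
NAME        = production
DS_MAD      = fs
TM_MAD      = fs_lvm
SAFE_DIRS   = "/var/tmp /tmp"
BRIDGE_LIST = "node1 node2"
```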


glad it worked!

Deleted. I found the problem: the image datastore does not use a VG. The documentation is misleading! Why create an LVM VG for the image datastore if it is not used in any way?

If it does say so, we should remove it! As far as I can see, it says that images are stored as regular files:

Images are stored as regular files (under the usual path: /var/lib/one/datastores/) in the Image Datastore, but they will be dumped into a Logical Volume (LV) upon virtual machine creation. The virtual machines will run from Logical Volumes in the node.



I built a topology identical to the one shown in the figure at https://global.discourse-cdn.com/standard11/uploads/opennebula/original/2X/8/8e024f9fead81eb67eae73c9067c7288051dcb67.png, except that I added another KVM node.

On the OpenNebula frontend, I created two datastores as shown in the picture (lvm_script with id 110 and production with id 111, in my case), and I added the variable BRIDGE_LIST = "node1.kvm.lvm node2.kvm.lvm".

Both of my nodes are connected to the SAN on the same "volume". From it I made two partitions, from which I created two LVM physical volumes. I then created LVM volume groups named vg-one-<system_ds_id> (vg-one-110 and vg-one-111). Each volume group has one physical volume.
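The PV/VG creation step can be sketched like this (the device name is hypothetical; the commands only need to run on one node, since the VG metadata lives on the shared LUN and is visible to every node attached to it):

```shell
# Create a physical volume and the volume group for the *system*
# datastore (id 110 in this setup).
pvcreate /dev/sdb1
vgcreate vg-one-110 /dev/sdb1
```

Per the earlier replies in this thread, vg-one-111 should be unnecessary: the vg-one-<id> naming applies only to system datastores, and you do not create logical volumes yourself, since OpenNebula creates one LV per VM disk at instantiation time.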

The question is:
What should I do now with the LVM physical volumes and volume groups? Should I create an LVM logical volume, and under what name? Or does OpenNebula handle this automatically? I do not understand the instructions: https://docs.opennebula.org/5.4/deployment/open_cloud_storage_setup/lvm_drivers.html#datastore-layout
It shows that there are LVM logical volumes.

Also, it’s not clear to me which filesystem type (e.g. ext4) should be used, and where it should be mounted.

The documentation says:

"Frontend Setup

The frontend needs to have access to the datastore images, mounting the associated directory. "
I do not understand: what do I need to mount on the frontend?

"Node Setup

Nodes need to meet the following requirements:

LVM2 must be available in hosts. OK.

lvmetad must be disabled. Set this parameter in /etc/lvm/lvm.conf: use_lvmetad = 0, and disable the lvm2-lvmetad.service if running. OK.

Oneadmin needs to belong to the disk group. OK.

All the nodes need to have access to the same LUNs. OK.

A LVM VG needs to be created in the shared LUNs for each datastore, following the naming convention vg-one-<system_ds_id>. This just needs to be done in one node. OK.

Virtual Machine disks are symbolic links to the block devices. However, additional VM files like checkpoints or deployment files are stored under /var/lib/one/datastores/. Be sure to have enough local space. OK.

All the nodes need to have access to the images and system datastores, mounting the associated directories." I do not understand where these should be mounted.
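Taken together, the node-side checklist might be carried out roughly like this (a sketch for the CentOS 7 / OpenNebula 5.4 setup this thread targets; the datastore id is the one from the posts above):

```shell
# Disable lvmetad, as the requirements above state.
sed -i 's/use_lvmetad = 1/use_lvmetad = 0/' /etc/lvm/lvm.conf
systemctl disable --now lvm2-lvmetad.service

# Add oneadmin to the disk group so it can access the LVs.
usermod -aG disk oneadmin

# Verify that the shared VG is visible on this node.
vgs vg-one-110
```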

Thanks in advance!