Creating a basic LVM datastore with OpenNebula 5.2

Hi all,

Firstly I’m a beginner with OpenNebula so please forgive my ignorance :slight_smile:

Objective & environment

I am trying to create an LVM datastore, following the 5.2 Deployment Guide. This is a fresh installation; not much has been customized. The datastore I want to use is connected to the KVM hosts over FC from a SAN appliance. The KVM hosts boot from local disk. The frontend is running on a separate instance.
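For reference, these are (roughly) the datastore definitions I used, following the guide's fs_lvm example. The names and BRIDGE_LIST hosts are placeholders for my environment, so double-check them against the docs:

# System datastore (sketch; 107 ended up being its ID in my case)
cat > lvm_system.ds <<EOF
NAME = lvm_system
TM_MAD = fs_lvm
TYPE = SYSTEM_DS
BRIDGE_LIST = "node1 node2"
EOF
onedatastore create lvm_system.ds

# Image datastore (sketch)
cat > lvm_images.ds <<EOF
NAME = lvm_images
DS_MAD = fs
TM_MAD = fs_lvm
DISK_TYPE = "BLOCK"
TYPE = IMAGE_DS
BRIDGE_LIST = "node1 node2"
EOF
onedatastore create lvm_images.ds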

Current configuration

I have configured the hosts as per the Node Setup guidelines, and the system datastore Volume Group is named accordingly (vgscan output):

Reading all physical volumes. This may take a while...
Found volume group "one_images" using metadata type lvm2
Found volume group "vg-one-107" using metadata type lvm2
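(For context, I prepared the shared FC LUN on the nodes roughly like this; /dev/sdb is a placeholder for the LUN's device, and 107 is the ID of the LVM system datastore:)

pvcreate /dev/sdb
vgcreate vg-one-107 /dev/sdb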

The problem

New instances appear to be deployed to the LVM datastore (checked from Sunstone's placement tab) and run fine. However, no new LVM Logical Volumes are created for the instances (so the default system datastore is actually being used).

Maybe I am missing some configuration? Either way, any help would be much appreciated!

Hi lavisa,
welcome to this forum.

Though I haven't used your scenario, if you've followed the docs and your KVM nodes can access the LVM system datastore, it should work fine. If your hosts are using the default system datastore when instantiating VMs, what happens if you disable it with onedatastore disable 0 and then instantiate new VMs? Do they use the LVM system datastore?

You can re-enable the default system datastore with onedatastore enable 0. There are other ways to force a specific datastore, but this is the quickest test for me :smiley:
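Something like this, with placeholder IDs:

onedatastore list                          # note the IDs of both system datastores
onedatastore disable 0                     # 0 is the default system datastore
onetemplate instantiate <template_id>      # deploy a test VM
onevm show <vm_id> | grep -i datastore     # which system datastore did it get?
onedatastore enable 0                      # restore the default afterwards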

Cheers!

Thanks!

I've tried disabling the default system datastore: new instances now get stuck in the PENDING state (re-enabling the datastore reverses this). The instance log files are empty.

EDIT: I had an error in my test; the LVM system datastore was also disabled. With it enabled, instances are deployed as described before and no new VM Logical Volumes are created.

I noticed it is not possible to manually create LVM Logical Volumes as the oneadmin user:

lvcreate --name data -l 100%FREE vg-one-107

This works fine with the root account. oneadmin belongs to the disk group.

But I suppose OpenNebula is using some lower-level command, because there are no errors related to this in the log.
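For reference, these are the logs I checked on the frontend (default locations; <vm_id> is the numeric VM ID):

grep -iE 'lv|error' /var/log/one/<vm_id>.log   # per-VM deployment log
grep -i error /var/log/one/oned.log            # main daemon log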

Hi!
So, as you've confirmed, the VMs are stuck in the PENDING state when the default system datastore is disabled, which means the LVM datastore cannot be used.

As I haven't used this scenario, I'd have to read the docs and check the code to know more, but let's see if we can make it work together. I think the oneadmin user (or any user other than root) cannot run LVM commands, at least on Red Hat (https://bugzilla.redhat.com/show_bug.cgi?id=620571), by design, so I guess that wouldn't be the problem.

If you run ls -l on the LVM device in /dev, does the disk group have rw permissions on that device? Maybe only root has permissions and that's why it doesn't work. It may sound silly, but it's worth ruling out simple issues.
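For example, assuming your VG is vg-one-107:

ls -l /dev/vg-one-107/     # symlinks to the dm-* nodes
ls -l /dev/dm-*            # expect something like: brw-rw---- root disk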

Cheers!

Sorry, I was a bit unclear in my previous post (I accidentally had both system datastores disabled at first, which caused the stuck instance deployments). This is what happens:

  • Both system datastores are enabled
  • Instances are deployed to the LVM datastore, but the default system datastore is actually used
  • The LVM datastore is disabled
  • Instances are deployed to the default system datastore
  • The default system datastore is disabled
  • Instances are deployed to the LVM datastore, but the default system datastore is actually used

oneadmin has both group and user rw permissions on the LVM block storage devices.

Hi!
sorry, I misunderstood the problem :blush: and this isn't my typical scenario, so please forgive me if my tests aren't relevant to solving the issue.

OK, can you store images in your LVM image datastore? E.g. download a ttylinux-kvm image from the marketplace and try to store it in the LVM image datastore. That way we'll know if the LVM image datastore works fine.

Then disable the default system datastore (0) and try to instantiate a VM. That will check whether the nodes can use the LVM system datastore…
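From the CLI it would be something like this (IDs are placeholders, and the app name may differ slightly in the marketplace):

onemarketapp list | grep -i ttylinux
onemarketapp export <app_id> ttylinux-test --datastore <lvm_image_ds_id>
oneimage list              # the new image should appear in the LVM image datastore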

Let’s try and see!

Cheers!

No problem, I am also a bit confused with all the different datastore scenarios :slight_smile:

Unfortunately, I couldn't get this LVM configuration working, so I decided to go with the defaults. I also removed the shared LUN, created a separate LUN for each host, and mounted it at the default system datastore directory.

Thanks again for your help.

You’re welcome!
I'm glad you found an alternative. In any case, this post will help others, and their feedback may offer a solution in the future.

Have fun!

UPDATE 2: Scratch that. I forgot how the LVM datastore was supposed to work :stuck_out_tongue:
Instantiation seems to work normally (when only the LVM system datastore is enabled), but the node's local block device is used instead and no LVM Logical Volumes are created. So the same issue remains.

To rule out a privilege issue, I added permissions for oneadmin via sudoers and modified the lvm binary with setcap. With this, manual LV creation worked fine, yet no LV was created during instantiation.
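Roughly what I tried, for the record (test-only changes; binary paths vary per distro, and the setcap capability was my guess at what lvm needs):

# /etc/sudoers.d/oneadmin-lvm
oneadmin ALL=(ALL) NOPASSWD: /sbin/lvcreate, /sbin/lvremove, /sbin/lvs

# separate experiment:
setcap cap_sys_admin+ep /sbin/lvm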
