Gluster system datastore


As I can see in the documentation, running VMs run on system datastores:

It looks like, following the procedure at:

it should be possible to create a system datastore (this is the system datastore chapter), but as you can see in the example, the datastore is an image datastore:

onedatastore list
  ID NAME     SIZE  AVAIL CLUSTERS IMAGES TYPE DS TM
   0 system   9.9G  98%   -        0      sys  -  shared
   1 default  9.9G  98%   -        2      img  fs shared
   2 files    12.3G 66%   -        0      fil  fs ssh
 101 default  9.9G  98%   -        0      img  fs shared

Is there any procedure to create a gluster system datastore?

Thanks a lot,

Hi, if it is not possible, is there any way to run VMs on an image datastore?

Thanks a lot.

Morning oscar!
I haven’t used Gluster in my OpenNebula setups so I can’t help you with that :frowning: but as you mentioned that you want to use Gluster with OpenNebula 5.2, these recent conversations in the forum will give you some info about Gluster and OpenNebula 5:

I’m sure that users in the forum that are working with Gluster will help you with your setup.

For your second question: you can’t run VMs on an image datastore, as OpenNebula transfers images from the image datastore to the system datastore on the node where the hypervisor runs the VM.


Hi Miguel Angel,

I have taken a look at those notes, but there is no explanation of how to configure Gluster as a system datastore (only as an image datastore).

I will have to wait for somebody in this forum with experience using GlusterFS as a system datastore.

Nevertheless, I’d like to ask whether this configuration is suitable. In my farm, I’d like to have two system datastores:
1.- A GlusterFS system datastore to run servers (Sunstone, SQL, and so on…) that can migrate from one host to another (persistent)
2.- A local filesystem system datastore to run Windows 7 VDI (persistent or non-persistent)

In this scenario, how can I instruct OpenNebula, on VM creation, to use one system datastore or the other?

Thanks a lot.

I use OpenNebula with a GlusterFS datastore. Say you create your GlusterFS datastore; that one would be shared. The other one you would probably create as an SSH datastore, which means the VM’s disks will be copied to the local OpenNebula node when the VM is created and then run locally on that system.
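For reference, a template for the local SSH system datastore described above might look like this (the datastore NAME and file name are made up for illustration):

```
# ssh-system-ds.conf -- hypothetical local SSH system datastore template
NAME    = local_system
TYPE    = SYSTEM_DS
TM_MAD  = ssh

# then: onedatastore create ssh-system-ds.conf
```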

When you create the “images” that your VMs will use as their hard drives, you can choose which datastore to use.

Example 1: I want to create a new server on my GlusterFS datastore.

  1. Create a new image (the hard drive for the VM), when creating it you will choose the GlusterFS datastore.
  2. This will create the file on that share. However, you need more than just a datastore to get this working. The OpenNebula front end needs to access the share using FUSE, so make sure it’s mounted to /var/lib/one/datastores/100, or whatever the ID of your Gluster datastore is.
  3. If you can use GFAPI for native access with KVM, the VM will access GlusterFS directly without going through FUSE (which is slow), leaving the FUSE-mounted datastore directory for everything else.
  4. If you don’t plan on using KVM with GFAPI, you will simply need to create a shared datastore pointing at the FUSE-mounted Gluster volume. If you are using CentOS/Red Hat, GFAPI support is already compiled in and available for you to use.
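The FUSE mount in step 2 could be set up roughly like this (a sketch: the Gluster host name "gluster1", volume "gv0" and datastore ID 100 are assumptions, so substitute your own):

```shell
# Mount the Gluster volume via FUSE where OpenNebula expects the datastore.
mkdir -p /var/lib/one/datastores/100
mount -t glusterfs gluster1:/gv0 /var/lib/one/datastores/100

# Or make it persistent via /etc/fstab:
# gluster1:/gv0  /var/lib/one/datastores/100  glusterfs  defaults,_netdev  0 0

# OpenNebula drivers run as oneadmin, so it must own the mount point.
chown oneadmin:oneadmin /var/lib/one/datastores/100
```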

To be honest, it’s hard to answer this without knowing your current level of experience with OpenNebula and what your setup is like. Maybe next week we can Skype or something and I can answer your questions more fluidly and directly.

Hi Daryl,

I don’t have much experience with OpenNebula, but I have been working hard with it for the last two weeks.

What I want is to create a farm with 3 hosts and two Gluster volumes:
1.- Images datastore (GlusterFS)
2.- System datastore (GlusterFS)

As both volumes are shared between the 3 nodes, I want to deploy on them two CentOS servers (one for MySQL, and the other for Sunstone). That way I will provide HA for Sunstone.

Regarding access, I’m using GFAPI, and it looks like the system is using port 24007 to access the volume.

I have been able to create the Gluster image datastore and create a soft link to the mount point, but I’d also like to create the Gluster system datastore.

Thanks a lot.

Afterwards, I will start working with my VDI infrastructure.

Nevertheless, I have tried your suggested configuration, that is:

1.- Gluster datastore for images
2.- Local SSH datastore as system datastore

But it looks like it does not work either; I have pasted the logs:

I would be very glad if we could work together for a few minutes on my configuration in order to make it work!

Thanks a lot.


I have added a third disk for a Gluster system datastore test. At this moment I have:

1.- Gluster datastore for images (vdic-core)
2.- Local ssh system datastore as system datastore (vdic-vm)
3.- Gluster datastore for “system datastore” (vdic-core-vm)

I have tried to configure my new vdic-core-vm gluster datastore as follows:

NAME = "vdic-core-vm"
TM_MAD = shared

# the following lines *must* be present

GLUSTER_HOST = localhost:24007
GLUSTER_VOLUME = vdic-core-vm-gv0

But I get the following error:

[oneadmin@vdicone01 ~]$ onedatastore create vdic-core-vm-ds.conf
[DatastoreAllocate] Invalid DISK_TYPE for a System Datastore.

Does this mean GlusterFS cannot be a system datastore?

Thanks a lot.


Has anybody been able to make the system datastore work with Gluster?

Thanks a lot.

What exactly doesn’t work in your setup with Gluster?

I cannot create a gluster system datastore.

[oneadmin@vdicone01 ~]$ onedatastore create vdic-core-vm-ds.conf
[DatastoreAllocate] Invalid DISK_TYPE for a System Datastore.

I want to use gluster to have the live host migration feature.

Thanks a lot.

Hi oscar,

You should use GlusterFS as a pre-mounted filesystem on the hosts, and use the shared or qcow2 TM_MAD drivers with DISK_TYPE=FILE.

For example, mount a Gluster volume on all hosts (e.g. in /gluster/vdic-core-vm/), then create a system datastore with the following template:

NAME   = "vdic-core-vm"
TYPE   = SYSTEM_DS
TM_MAD = shared

Check the datastore ID and, on all hosts, create a symlink in /var/lib/one/datastores/ pointing to /gluster/vdic-core-vm/. For example, if the system datastore ID is 105:

ln -s /gluster/vdic-core-vm /var/lib/one/datastores/105

Another option is to use the qcow2 TM_MAD to keep the files in qcow2 format.
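Presumably the qcow2 variant of the template above would differ only in the TM_MAD line (a sketch, reusing the same datastore name):

```
NAME   = "vdic-core-vm"
TYPE   = SYSTEM_DS
TM_MAD = qcow2
```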

This will leave the context ISO and the volatile disks as files on the mounted shared Gluster volume, while the VM disks from the Gluster image datastore will be accessed via libgfapi.

AFAIK Gluster pools are not supported by OpenNebula

Kind Regards,
Anton Todorov

Hi Anton.

Thanks a lot for your clarifications.

This means that if I’m using non-persistent images, all volatile data access (like the Windows pagefile) will go through the mount point (FUSE access, via qcow2 snapshot or shared copy). On the other hand, if I’m creating persistent images, all data will be accessed through a link, and therefore again through the mount point (FUSE access).

Is this correct?


Thanks a lot.

I am not very familiar with Gluster, but the image that you posted looks out of sync with the current v5.2. I’ve taken a quick look at the sources, and the code in OpenNebula is in a sort of “floating” state: there is support for libgfapi in the core (it can build the XML configuration), but there are no drivers to manage Gluster-backed datastores.

IMO, to use FUSE as little as possible, the best solution is to write a driver (DS_MAD and TM_MAD) to manage Gluster-backed image and system datastores. This way you could have libgfapi access to the persistent, non-persistent and volatile disks on one side, and the VM definition XML, the context ISO and the checkpoint files on a FUSE filesystem on the other.
Here is the documentation regarding storage driver development:

Kind Regards,
Anton Todorov

Hi Anton,

You are right, this screenshot comes from an older version, and I have not been able to find an updated version of this document:

Regarding the development of the DS_MAD and TM_MAD drivers, if the code has not been removed, access to the images may be done through libgfapi:

I’m a simple system administrator and I don’t feel confident developing a stable driver for Gluster. Do you? :wink:

Thanks a lot for your help.

I can confirm that you can use libgfapi to connect OpenNebula to GlusterFS. But AFAIK you need to install qemu with libgfapi support compiled in; by default it’s not compiled in.

Hi Martin,

With CentOS, qemu is compiled with libgfapi support by default.
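One way to verify this could be to check whether the installed qemu binary links against libgfapi (a sketch: the binary path /usr/libexec/qemu-kvm is the usual CentOS location, so adjust for your distribution):

```shell
# If qemu was built with GlusterFS support, libgfapi should appear
# among its linked shared libraries.
ldd /usr/libexec/qemu-kvm | grep -i gfapi
```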

Can you post your “system datastore” configuration?

Thanks a lot.

Hi, in the end I have configured NFSv4 access using NFS-Ganesha, and it looks like it works fine.
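For anyone else trying this, a minimal NFS-Ganesha export of a Gluster volume looks roughly like the fragment below (a sketch: the volume name is taken from earlier in this thread, and option names may vary slightly between Ganesha versions):

```
# /etc/ganesha/ganesha.conf -- export a Gluster volume over NFSv4
EXPORT {
    Export_Id = 1;
    Path = "/vdic-core-vm-gv0";
    Pseudo = "/vdic-core-vm-gv0";
    Access_Type = RW;
    Squash = No_root_squash;
    FSAL {
        Name = GLUSTER;
        Hostname = "localhost";
        Volume = "vdic-core-vm-gv0";
    }
}
```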

Thanks a lot!