GlusterFS support

I don't see GlusterFS support in the docs anymore.

Was it removed? If yes, why?

That's a shame. Forcing users to use FUSE mounts instead of the native libgfapi is bad, as FUSE is much slower than direct access.

Additionally, forcing users to move from Gluster to Ceph is also bad, as Ceph is much more expensive to run (at least 3 MONs, at least 3 OSDs, … 6 servers as the minimum suggested configuration, where Gluster works with only 3 servers).

Moreover, Gluster is heavily used in some environments as a shared filesystem, while CephFS is still unstable and not suggested for production use; this means spinning up 2 different storage clusters in the same environment.

As stated in the thread, the code is there and won't be removed (it is just not mentioned in the docs or the interface); you can still use GlusterFS as always, or in its shared-filesystem form.

Ok, but having to use undocumented code in production doesn't make me feel any better.
The code is there, but it seems to be unmaintained and unsupported (as there are no docs about it).

I'm really unsure whether to start a new OpenNebula infrastructure or to look at something else with Gluster support.

Gluster is also mentioned in the official Reference Architecture for OpenNebula, but there are no docs for it.
It is very strange that the official reference talks about an unsupported feature.

So, is there no way to get Gluster support back in future releases?
I would like to run a VM using libgfapi and qcow2, not the FUSE mount, which is very slow.
Is this supported?

Which features are missing? Maybe someone could develop them.

No suggestions?

It all still works. I currently have a production setup running with a GlusterFS backend, with the KVM VMs running via libgfapi and qcow2 on CentOS 7. Does it make me feel good that it's not "recommended"? No, but the ball got rolling on this setup before they stripped its officially supported status (yeah, we all read that Reference Architecture guide). Thankfully they don't seem to have any intention of removing the code for it, just not… doing anything with it beyond that.

I feel exactly the same.
GlusterFS is officially supported as written in the OFFICIAL reference guide, but Gluster support was totally removed from the OpenNebula docs and I don't know why.

What kind of support was lacking? Maybe someone could fix the issues and put the support back, but I don't see any pending issues or missing features.

So, please either remove Gluster support entirely or add it back to the official docs. Don't keep a stale situation like this.

I would rather they not strip code that works perfectly well just because you have a problem with the documentation, or because of the whole "all or nothing" kind of stance. I believe the official reason was that not a lot of people reported using it in a poll (which I did not participate in, so… ya know, I can't complain) and they have limited $$ to support things.

Did you ever decide what you were going to do? From a previous post, it sounded like you were trying to find a back end for your VM deployments. You could use a solution other than OpenNebula; oVirt supports GlusterFS just fine as well. I mean, if I were building this now, AWS/Ceph wasn't a possibility, and I was going to use GlusterFS exclusively, I would not use OpenNebula.

Obviously I don't want Gluster removed; I would like to have the docs back!
If Gluster is still available, please give us proper docs. It makes no sense to remove docs for an existing feature.

OpenNebula has a big advantage: it's easy and clean. OpenStack, for example, has TONS of components and is bloated software compared to OpenNebula. With OpenNebula you manage the whole IaaS platform from a single node, the controller. Clean and easy.

I don't know any other IaaS like this, and it's a shame that OpenNebula removed the docs for one of the best cloud storage systems available, one that is making big steps forward with every release. With sharding, even the healing of a huge VM is now quick.

Anyway, I don't see any Gluster reference in the latest OpenNebula code.
Is it managed as a normal POSIX filesystem mount through FUSE, so that there is no need for a custom storage module?

For example, Ceph has some custom modules defined. For Gluster there isn't anything similar, but Gluster can be accessed by standard FS utilities.

So, is the Gluster support still working and smart enough to run VMs with libgfapi? I also don't see any call to qemu with the "gluster://" URL schema.

Here are some references that might help. https://github.com/OpenNebula/one/blob/master/src/vmm/LibVirtDriverKVM.cc#L610-L639
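If I read that driver code right, when DISK_TYPE is GLUSTER it emits a network-type disk in the libvirt domain XML, which is exactly what makes qemu open the image over libgfapi. A rough sketch of the generated snippet (the volume and image path here are just placeholders, not what the driver literally produces):

<disk type='network' device='disk'>
  <driver name='qemu' type='qcow2' cache='none'/>
  <!-- protocol='gluster' means qemu talks to the volume directly, no FUSE mount -->
  <source protocol='gluster' name='opennebula/one/datastores/0/disk.0'>
    <host name='gluster1' port='24007' transport='tcp'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>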

I tried to get behind the FUSE setup and ran some benchmarks, and it wasn't going to happen. The KVM VM loads with the gluster:// parameter. I'm looking at the process on the host now and it's running file=gluster://gluster1:24007/imageid
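You can also poke the volume with qemu-img to confirm the gluster protocol works outside of libvirt; the URL form is gluster://host[:port]/volume/path (the path below is just an example, adjust it for one of your images):

# open the image over libgfapi directly and print its metadata
qemu-img info gluster://gluster1:24007/opennebula/one/datastores/0/disk.0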

Here is my original datastore.conf I created when setting this up:
NAME = "glusterds"
DS_MAD = fs
TM_MAD = shared

DISK_TYPE = GLUSTER

GLUSTER_HOST = gluster1:24007
GLUSTER_VOLUME = opennebula

CLONE_TARGET="SYSTEM"
LN_TARGET="NONE"
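
If anyone wants to reproduce it, the template just gets registered through the usual CLI (the file and datastore names are from my setup, obviously):

# register the image datastore from the template above
onedatastore create glusterds.conf
# verify that DISK_TYPE and the GLUSTER_* attributes made it in
onedatastore show glusterds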

Edit: cleaned up my spelling a little

So it seems to use libgfapi natively, and not the FUSE mount, for running VMs.
This is good.

Grave-digging this. I see it as horrible practice to leave the code and functionality there but not include it in the documentation.

It is extremely misleading. Is there anyone who would be able to link to documentation of any kind for setting up libgfapi on CentOS 7 for use with OpenNebula?

Hi everyone!

Support for this FS is critical for us too.
We started our work with ONE because it had a native GlusterFS driver.
But now we are obliged to use the datastore through FUSE.
It's a really awful situation.

I saw in a nearby thread that Javi Fontán said:

But as I understood it, everything is in the hands of the developers.
So GlusterFS/Ceph/Lizard should be in ONE as it was.

Of course, this is just the opinion of one customer…
But maybe we can provide some help, or a couple of developers?

QEMU on CentOS 7 is compiled with support for libgfapi, so you really don't have to do anything. When you set up the OpenNebula datastore as type GLUSTER, it passes the info you provide to the OpenNebula KVM node, and the disk URL then ends up as an argument of qemu-kvm.
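If you want to double-check your own node, qemu-img reports which block drivers it was built with; if gluster shows up in the list, libgfapi support is compiled in:

# "gluster" should appear in the "Supported formats:" line
qemu-img --help | grep -i gluster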

At least this was the case in 5.0.2; I haven't moved on from this version yet. I'll be testing the newer one shortly. Was this what you were asking?

Yeah, through a series of support requests I discovered that the datastore type GLUSTER was not suitable for system datastores.

Yea, it's not. That said, there is an option here. Since the recommendation is to have both the image datastore and the system datastore as shared, you could consider using NFS-Ganesha (which natively supports direct GlusterFS) to access the shared location rather than using FUSE to mount it.
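Roughly, the Ganesha side is just an export backed by its Gluster FSAL, something like this (hostname, volume, and paths are placeholders for your setup):

# /etc/ganesha/ganesha.conf -- minimal export via the Gluster FSAL
EXPORT {
    Export_Id = 1;
    Path = "/";                 # path inside the gluster volume to export
    Pseudo = "/opennebula";     # NFSv4 pseudo path that clients mount
    Access_Type = RW;
    Squash = No_root_squash;
    FSAL {
        Name = GLUSTER;
        Hostname = "gluster1";  # any node of the gluster cluster
        Volume = "opennebula";
    }
}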

The CentOS Storage SIG Gluster area maintains a repo for it.
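
On CentOS 7 that would be something along these lines (package names per the Storage SIG; double-check them against the current repo):

# enable the CentOS Storage SIG gluster repo
yum install -y centos-release-gluster
# nfs-ganesha plus its gluster FSAL backend
yum install -y nfs-ganesha nfs-ganesha-gluster
systemctl enable --now nfs-ganesha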