Thanks, I really like the MooseFS filesystem (or LizardFS, back when it was supported by Debian) because it lets you start quickly with a single-node cluster and then add more nodes.
Question: are you running hyperconverged, using the chunkservers as KVM hosts? If so, do you mount directly in your datastores, use symbolic links, or change the datastore path?
Hyperconverged setup. I have 2 exports, one for the frontend and one for the datastores, in mfsexports.cfg:
# Access of the frontends to the OpenNebula files
192.168.10.1/29 /opennebula rw,maproot=0
# Access of the datastores by hypervisors
192.168.11.0/24 /opennebula/datastores rw,maproot=0
The frontend mounts /opennebula under /var/lib/one, and the hypervisors mount /opennebula/datastores under /var/lib/one/datastores.
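For reference, a minimal sketch of what the corresponding /etc/fstab entries could look like, assuming the master is reachable under the hypothetical hostname mfsmaster and using mfsmount's mfssubfolder option:

# On the frontend (/etc/fstab)
mfsmount /var/lib/one fuse mfsmaster=mfsmaster,mfssubfolder=/opennebula,_netdev 0 0
# On the hypervisors (/etc/fstab)
mfsmount /var/lib/one/datastores fuse mfsmaster=mfsmaster,mfssubfolder=/opennebula/datastores,_netdev 0 0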
The drivers use the datastore BASE_PATH attribute, so it should work if you change it, but I did not test that.
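If you want to try it, something like this should do (a sketch; the datastore ID 1 is hypothetical):

# Check the current base path of the datastore
onedatastore show 1 | grep BASE_PATH
# Change it by appending to the datastore template
echo 'BASE_PATH="/var/lib/one/datastores"' > ds.tmpl
onedatastore update 1 --append ds.tmpl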
I declare several datastores (default, prod, etc.), all stored on the same MooseFS cluster, but I set different goals (soon these will become storage classes with MooseFS 4) and trash retention depending on the needs.
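For example, on the frontend mount, something like this (a sketch; the datastore directory IDs are hypothetical, and with MooseFS 3 trash retention is set with mfssettrashtime in seconds):

# Keep 3 copies of everything in datastore 1 and retain deleted files for a day
mfssetgoal -r 3 /var/lib/one/datastores/1
mfssettrashtime -r 86400 /var/lib/one/datastores/1
# A less critical datastore can get a lower goal and shorter retention
mfssetgoal -r 2 /var/lib/one/datastores/100
mfssettrashtime -r 3600 /var/lib/one/datastores/100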
You know what? Now I can't find it. I was sure I read it when I was researching the one-deploy playbooks, but looking at it now, it is not what I remembered reading. So forget my last statement.
As a matter of fact, all MooseFS datastores must be on the same mountpoint for mfsmakesnapshot to work; otherwise you will get the error message "both elements must be on the same device".
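To illustrate (the paths are hypothetical):

# Source and destination are on the same MooseFS mount, so this works:
mfsmakesnapshot /var/lib/one/datastores/1/abc123 /var/lib/one/datastores/0/42/disk.0
# If the two datastores were separate mounts, the same command would fail with
# "both elements must be on the same device".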