MooseFS optimized driver

Hello.

At work we were using the LizardFS DM_MAD and TM_MAD drivers, thanks to @cdaffara :+1:

Unfortunately, as we explained back in 2022, LizardFS has been excluded from Debian since Bullseye.

We tested a migration path to MooseFS and, inspired by other drivers, we made a simple driver for our use case.

Backup is not yet supported, but thanks to MooseFS 4 being backported to stable, we will add it with incremental support using the MooseFS patch tools.

Thank you for this. I have been dabbling with MooseFS for a while for storing my media and music. I can't wait to test this on my next setup.

Thanks, I really like the MooseFS filesystem (or LizardFS when it was still supported by Debian) because it lets you start quickly with a single-node cluster and then add more nodes.

Question: are you doing hyperconverged, using the chunkservers as KVM hosts? If so, do you mount inside your datastores, use symbolic links, or change the datastore path?

Hyperconverged setup: I have two exports, one for the frontend and one for the datastores.

# Access of the frontends to the OpenNebula files
192.168.10.1/29    /opennebula               rw,maproot=0

# Access of the datastores by hypervisors
192.168.11.0/24    /opennebula/datastores    rw,maproot=0

The frontend mounts /opennebula under /var/lib/one, and the hypervisors mount /opennebula/datastores under /var/lib/one/datastores.

The drivers use the datastore BASE_PATH attribute, so it should work if you change it, but I did not test it.
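
For illustration, a minimal sketch of registering such a datastore; the "moosefs" driver name, the datastore name, and the paths are assumptions for the example, not taken from the actual driver:

    # Illustrative datastore template; adjust BASE_PATH if your mountpoint differs
    cat > prod-ds.tmpl <<'EOF'
    NAME      = "prod"
    TYPE      = "IMAGE_DS"
    DS_MAD    = "moosefs"
    TM_MAD    = "moosefs"
    BASE_PATH = "/var/lib/one/datastores"
    EOF
    onedatastore create prod-ds.tmpl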

Because with other shares the documentation seems to suggest symbolic links. What does your fstab look like for mounting, if you don't mind sharing?

Because with other shares the documentation seems to suggest symbolic links

I do not really understand this sentence. Could you explain and share the documentation?

For the fstab:

  • my frontend uses
    mfsmaster:/opennebula	/var/lib/one	moosefs	_netdev,mfsdelayedinit,noatime,nodev,nosuid	0	0
    
  • my hyperconverged hypervisors use
    mfsmaster:/opennebula/datastores	/var/lib/one/datastores	moosefs	_netdev,mfsdelayedinit,noatime,nodev,nosuid	0	0
    

I declare several datastores (default, prod, etc.), which are all stored on the same MooseFS cluster, but I set different goals (soon these will become storage classes with MooseFS 4) and trash retention depending on the needs.
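
For example, with the MooseFS 3 command-line tools this looks like the sketch below; the datastore IDs are illustrative, and MooseFS 4 replaces goals with storage classes (mfsscadmin/mfssetsclass):

    # Datastore IDs 100 (default) and 101 (prod) are illustrative
    mfssetgoal -r 2 /var/lib/one/datastores/100             # 2 chunk copies for default
    mfssetgoal -r 3 /var/lib/one/datastores/101             # 3 chunk copies for prod
    mfssettrashtime -r 86400  /var/lib/one/datastores/100   # keep deleted files 1 day
    mfssettrashtime -r 604800 /var/lib/one/datastores/101   # keep deleted files 7 days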

You know what? Now I can't find it. I am sure I read it when I was researching the one-deploy playbooks, but looking at it now, it is not what I remembered reading. So forget my last statement.

As a matter of fact, all MooseFS datastores must be on the same mountpoint for mfsmakesnapshot to work; otherwise you will get the error message "both elements must be on the same device".
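
For example, cloning an image between two datastores reduces to a single mfsmakesnapshot call, which only succeeds when both paths live on the same mounted device (the paths below are illustrative):

    SRC=/var/lib/one/datastores/100/image      # illustrative source image
    DST=/var/lib/one/datastores/101/disk.0     # illustrative destination
    # Both paths must resolve to the same device for a CoW snapshot
    if [ "$(stat -c %d "$SRC")" = "$(stat -c %d "$(dirname "$DST")")" ]; then
        mfsmakesnapshot "$SRC" "$DST"
    else
        echo "both elements must be on the same device" >&2
    fi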
