Which shared storage is better to use with OpenNebula?

Hi all,

Nowadays we are planning to move to a more scalable, highly available OpenNebula setup. At the moment we are using GlusterFS for that.
GlusterFS is working quite well, but recently we found that it uses a lot of CPU resources.

Can anyone advise which shared storage service is better? Ceph, LizardFS, etc.?

+1 for LizardFS. Unparalleled robustness, extremely simple to use (it exposes a POSIX filesystem), per-file replica settings, geo-replication, atomic metadata-based snapshots. Node or disk failures are easy to handle, it is easy on resources and everything is checksummed. No need for RAID setups. It's one of the pillars of NodeWeaver HCI, which is based on OpenNebula; we have it in production at many sites and we're more than happy with it (both NodeWeaver/OpenNebula and Lizard itself).
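For anyone evaluating it: replica goals and snapshots are driven per file or directory from the client side. A minimal sketch, assuming a LizardFS mount at /mnt/lizardfs and the classic mfs* client tools (the paths are just examples):

# keep two copies of every chunk under a datastore directory
mfssetgoal -r 2 /mnt/lizardfs/one/datastores/1

# check which goal is actually applied
mfsgetgoal /mnt/lizardfs/one/datastores/1

# atomic, metadata-only snapshot of a VM disk image
mfsmakesnapshot /mnt/lizardfs/one/datastores/1/disk.0 /mnt/lizardfs/backups/disk.0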

Hi,
Thanks for your prompt reply.
I have set up LizardFS for an OpenNebula HA deployment. Everything went fine, but when I mount the filesystem with mfsmount it is mounted with root:root ownership.
I changed the export settings to give the oneadmin user permission, but no luck.
Inside the mount point I can change permissions.
Do I need to provide any extra configuration?

mkdir /var/lib/one
chown oneadmin:oneadmin /var/lib/one
mfsmount /var/lib/one
chown -R oneadmin:oneadmin /var/lib/one gives “You have no permission to change privileges”

ls -l /var/lib/one shows root:root ownership.

Because of this I am not able to add the KVM hypervisor to OpenNebula. When checking oned.log, it shows permission denied messages. Please help.
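For reference, the export settings I mentioned are in mfsexports.cfg on the mfsmaster. The kind of line I have been experimenting with looks roughly like this (the subnet and the 9869 uid/gid for oneadmin are just placeholders for my environment; the master has to be reloaded after changing it):

# /etc/mfs/mfsexports.cfg on the mfsmaster
192.168.0.0/24 / rw,alldirs,maproot=9869:9869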

Maybe a root_squash problem? Despite what the documentation says (http://docs.opennebula.org/4.14/design_and_installation/quick_starts/qs_centos7_kvm.html) root_squash did not work in my (NFS) test setup: https://github.com/marcindulak/vagrant-opennebula-tutorial-centos7/blob/1b1483a7ed3f523b09681c84b2a3777554848cd3/Vagrantfile#L210
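For reference, an NFS export with root squashing disabled looks roughly like this (the path and client specification are illustrative; run exportfs -ra afterwards):

# /etc/exports on the NFS server
/var/lib/one *(rw,sync,no_subtree_check,no_root_squash)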

Sorry for the late reply.

Mount Lizard as root once (e.g. in /mnt) and chown the whole tree to oneadmin.

Then chown the empty target /var/lib/one

Then mount again and check permissions.

If needed, script the chown to run after the mount.
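As a rough sketch (the master address and paths are placeholders):

# one-time fix: mount the LizardFS root somewhere temporary as root
mkdir -p /mnt/lizard
mfsmount -H mfsmaster.example.local /mnt/lizard
# hand the whole tree stored in LizardFS over to oneadmin
chown -R oneadmin:oneadmin /mnt/lizard
umount /mnt/lizard

# make sure the (empty) mount target itself belongs to oneadmin
chown oneadmin:oneadmin /var/lib/one

# mount for real and verify the ownership
mfsmount -H mfsmaster.example.local /var/lib/one
ls -ld /var/lib/one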

Hi lorenzo_faleschini,

I have tested with LizardFS and everything went smoothly.
But I have a doubt, and I don't know whether this question is applicable here or not.
When the mfsmaster server goes down we have to promote the mfsshadow to master and reload all the services, right?
But when we implement this in a production system, if the master server suddenly fails, how can we manually switch over as quickly as possible? And if it takes us 1 or 2 minutes to promote the shadow to master, what happens to the data that users are trying to write during that time? Is there any HA possibility available?
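(The manual switch I mean is roughly the following, assuming the shadow keeps its config in /etc/mfs/mfsmaster.cfg and runs as a systemd service called lizardfs-master; names may differ per distribution:)

# on the shadow, once the old master is confirmed dead
sed -i 's/^PERSONALITY.*/PERSONALITY = master/' /etc/mfs/mfsmaster.cfg
systemctl restart lizardfs-master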

You have to set up your own “trick” to automatically change the personality of the shadow to a master and reroute mountpoints. You can use any HA technology to achieve this such as keepalived, heartbeat, ucarp, whatever…
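As a sketch of the keepalived variant (interface, virtual IP and script path are placeholders; the notify script would contain the shadow-to-master promotion, i.e. flip PERSONALITY to master and restart the master service):

# /etc/keepalived/keepalived.conf on both metadata nodes
vrrp_instance LIZARD_MASTER {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    virtual_ipaddress {
        192.168.0.250/24
    }
    # run the promotion script when this node takes over the VIP
    notify_master /usr/local/bin/promote-lizard-master.sh
}

Clients mount through the virtual IP, so the mountpoints automatically follow whichever node currently holds it.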

What happens to VMs and users waiting to write while the master is down? They can get timeouts or I/O errors inside the VM. If you wait too long you may have to hard-reboot the guests.

Little commercial tip: if you want a ready-to-use, production-ready blend of KVM + OpenNebula + LizardFS with professional support, complete HA, auto-recovery and extreme tuning, you can have a look at NodeWeaver (http://www.nodeweaver.eu). It's a commercial solution and comes from many years of experience with OpenNebula, KVM and LizardFS (formerly MooseFS).

Thanks a lot, lorenzo_faleschini.

Hello James,

I am trying to build an OpenNebula cluster with a LizardFS datastore. Could you share some guidelines on integrating LizardFS with OpenNebula? I have tried the guide on GitHub but it does not get activated as described.

Br
Mosharaf