Recently I learned that shared storage (e.g. NFS) is necessary for live migration of VMs. So I tried to configure the /var/lib/one directory on my Frontend as a shared directory using NFS, but I have not been able to get it working. Can anybody please point me to good documentation for configuring NFS for OpenNebula? I have already tried multiple approaches, without success.
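For reference, this is roughly what I tried so far (the hostname frontend.example.local is just a placeholder for my Frontend):

# /etc/exports on the Frontend
/var/lib/one  *(rw,sync,no_subtree_check,no_root_squash)

# reload the exports on the Frontend
exportfs -ra

# on each hypervisor node
mount -t nfs frontend.example.local:/var/lib/one /var/lib/one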
Check the VM log (typically /var/log/one/<VM_ID>.log on the Frontend) while you are trying to migrate the VM, then post what is being logged here.
But maybe I misunderstood your question. What exactly do you mean by: "So I tried to configure my /var/lib/one directory on my Frontend as shared using nfs but I am not able to do it"?
Please keep in mind that once you have the NFS set up, you must change the TM_MAD of the default SYSTEM datastore from ssh to shared, or create a new SYSTEM datastore with TM_MAD=shared.
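For example, something along these lines (datastore 0 is assumed to be the default SYSTEM datastore here; check the ID with onedatastore list first):

onedatastore list
onedatastore update 0    # opens an editor; change TM_MAD = "ssh" to TM_MAD = "shared"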
I tried the quickstart guide approach but it didn't work for me. After configuring the NFS server, I tried the showmount -e command on the server, but it always gives an "RPC timed out" error, which is the same thing I get when I try to mount on the clients.
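In case it helps, these are the kinds of checks I have been running on the server (service names may differ slightly by distro):

systemctl status rpcbind nfs-server   # verify both services are running
exportfs -v                           # confirm the directory is actually exported
showmount -e localhost                # this is the call that times out
# RPC timeouts often point at a firewall; ports 111 (rpcbind) and 2049 (nfs) must be reachable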
After a clean install, first delete your datastores through Sunstone. Then remove the datastore directory on your nodes, i.e. /var/lib/one/datastores, recreate it, mount your NFS share on it, and chown it to oneadmin:oneadmin. On each node the datastore path (/var/lib/one/datastores) should then point to your NFS mount (make sure it mounts again after a reboot), as sketched below.
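A minimal sketch of those steps, assuming the share is exported from a host called nfs-server (adjust the hostname and export path to your setup):

# on each node, as root
rm -rf /var/lib/one/datastores
mkdir /var/lib/one/datastores
mount -t nfs nfs-server:/export/one /var/lib/one/datastores
chown oneadmin:oneadmin /var/lib/one/datastores

# add to /etc/fstab so the mount comes back after a reboot
nfs-server:/export/one  /var/lib/one/datastores  nfs  defaults,_netdev  0 0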
Then create system and images datastores as follows:
su - oneadmin
vi images.txt
NAME = nfs_images
DS_MAD = fs
TM_MAD = qcow2
vi system.txt
NAME = nfs_system
TM_MAD = shared
TYPE = SYSTEM_DS
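Then register both from the files above (you can attach them to a specific cluster afterwards with onecluster adddatastore):

onedatastore create images.txt
onedatastore create system.txt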
Now you should be able to see your datastores through Sunstone & CLI:
[oneadmin@node01 ~]$ onedatastore list
ID NAME SIZE AVAIL CLUSTERS IMAGES TYPE DS TM STAT
100 nfs_system 5.1T 96% 0 0 sys - shared on
101 nfs_images 5.1T 96% 0 16 img fs qcow2 on
Did you ever manage to get this working ?
I followed the guide and also the links, but just like in a previous post I made, our NFS datastore has a size of zero:
oneadmin@host1:~$ onedatastore list
ID NAME SIZE AVAIL CLUSTERS IMAGES TYPE DS TM STAT
100 nfs_system 0M - 0 0 sys - shared on
101 nfs_images 0M - 0 0 img fs qcow2 on
This is a dev environment where the frontend and the node are on the same server.
I believe I have mounted the NFS correctly as per the guidance from Anton.
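For completeness, this is roughly how I checked the mount and the permissions (same paths as in the guide above):

mount | grep /var/lib/one/datastores    # confirm the NFS share is really mounted there
df -h /var/lib/one/datastores           # size/avail as seen by the OS
ls -ld /var/lib/one/datastores          # should be owned by oneadmin:oneadmin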
Hi everyone, I have some problems with GlusterFS. I followed the official guide (https://docs.gluster.org/en/latest/Quick-Start-Guide/Quickstart/#step-1-have-at-least-two-nodes) and I now have two nodes, each with a /data/brick1/gluster folder. I want to use GlusterFS only for the image datastore on both nodes, but I don't know how to do this correctly.
I tried to mount the Gluster volume on a new datastore (/var/lib/one/datastores/101), and with this solution, when I create a new image, OpenNebula saves it on the Gluster volume. But what do I have to do on the second node (not the frontend)? I think a possible solution could be this -> http://docs.opennebula.org/5.4/deployment/open_cloud_storage_setup/fs_ds.html
Any pointers on how to achieve this would be very useful to me.
Thanks everyone!
Hi, after reading some posts about this, I have another doubt. Some topics say that I have to mount the Gluster volume directly on the datastore (for instance: mount -t glusterfs frontend:/gluster /var/lib/one/datastores/101 on both nodes), and other topics mention using a symlink after mounting the Gluster volume somewhere like /mnt/images (ln -s /mnt/images /var/lib/one/datastores/101). What I actually want is to save my images from Sunstone on this shared Gluster volume, so that when a node instantiates a VM it takes the image from there. Sorry for my confusion about this concept.
I have created the datastore 101 from Sunstone, specifying the shared mode. Many thanks!
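PS: in case it helps to be concrete, this is the mount-based variant I am asking about (frontend:/gluster is the volume name from the Gluster guide, so treat it as a placeholder):

# on every node that will use this datastore, including the frontend
mount -t glusterfs frontend:/gluster /var/lib/one/datastores/101
chown oneadmin:oneadmin /var/lib/one/datastores/101

# /etc/fstab entry so the mount survives reboots
frontend:/gluster  /var/lib/one/datastores/101  glusterfs  defaults,_netdev  0 0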