Difficulty understanding how to use Remote datastore


I am new to OpenNebula and I’m having some trouble understanding datastores, and I would appreciate any advice.

We have set up the front-end and a KVM node, and we have a fairly large 30TB FreeNAS machine we’d like to use to store our cloud data. I can mount the NFS share from both the front-end and the KVM node without issue, but I do not know how to create a datastore using this NFS share, as I don’t see anywhere to specify a remote host. I’ve read through a lot of documentation and cannot seem to find anything about remote datastores.

Has anyone successfully mounted a remote NFS as a datastore before? What am I missing here?

Does anyone have any advice on this?


Hi Zac,

In brief, you should mount the NFS filesystem on all nodes, either manually or via fstab, and create a symlink from /var/lib/one/datastores/ to the NFS mount point.

Your use case sounds like a filesystem datastore in ‘shared’ mode.
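To make the idea concrete, here is a minimal sketch of the mount-and-symlink setup. The export path `freenas.example.com:/mnt/tank/one` and the mount point `/mnt/one_nfs` are placeholders I made up; substitute your own FreeNAS export. Run this on the front-end and on every KVM node:

```shell
# Assumed /etc/fstab entry (adjust server, export and options to your setup):
#   freenas.example.com:/mnt/tank/one  /mnt/one_nfs  nfs  defaults,_netdev  0 0

sudo mkdir -p /mnt/one_nfs
sudo mount /mnt/one_nfs

# Point OpenNebula's default datastore directories at the NFS mount.
# IDs 0 (system) and 1 (images) are the stock datastores.
sudo mkdir -p /mnt/one_nfs/0 /mnt/one_nfs/1
sudo ln -s /mnt/one_nfs/0 /var/lib/one/datastores/0
sudo ln -s /mnt/one_nfs/1 /var/lib/one/datastores/1

# oneadmin must be able to read and write the datastore directories.
sudo chown -R oneadmin:oneadmin /mnt/one_nfs
```

Because every node sees the same directory through NFS, OpenNebula can deploy VMs without copying images over SSH.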

Kind Regards,
Anton Todorov



You must use a shared datastore.
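If you want to register a new shared image datastore instead of reusing the defaults, the ‘shared’ transfer driver is set in the datastore template — a minimal sketch (the name is arbitrary):

```
NAME   = "nfs_images"
DS_MAD = fs
TM_MAD = shared
```

Save it as e.g. `images-ds.conf` and register it with `onedatastore create images-ds.conf`. A matching system datastore template uses `TYPE = SYSTEM_DS` and `TM_MAD = shared` (no `DS_MAD`).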


At /var/lib/one/datastores you must link to your NFS-mounted directory.


I mount my NFS export at /mnt/opennebulaA and create the images and system datastores on it.

Then I link /mnt/opennebulaA/system and /mnt/opennebulaA/images to /var/lib/one/datastores/0 and /var/lib/one/datastores/1.
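Assuming the layout described in this post (the /mnt/opennebulaA paths are this poster’s choice; use your own mount point), the links would be created like this on each node:

```shell
# Map OpenNebula's default datastore IDs onto the NFS-backed directories:
# 0 = system datastore, 1 = images datastore.
sudo ln -s /mnt/opennebulaA/system /var/lib/one/datastores/0
sudo ln -s /mnt/opennebulaA/images /var/lib/one/datastores/1

# Verify that both links resolve.
ls -l /var/lib/one/datastores/
```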

I hope this helps.


Thank you very much guys, that makes perfect sense!