NFS + Local Storage

Hi there,

I am trying to set up NFS and local storage together.
If I deploy a VM to run on a non-NFS datastore, it runs fine on the KVM node's local storage; however, when I undeploy the VM it is stored on the front-end's local storage instead of on NFS.
If I deploy a VM to run on an NFS datastore, it does not use the KVM node's local storage and runs the live disk directly from NFS, which is not what I want.

I have NFS shares mounted on both the front-end and KVM nodes using the following entries in my fstab:

```
10.100.100.10:/mnt/hddPool1/OpenNebula/datastores/100 /var/lib/one/datastores/100 nfs defaults,soft,intr,_netdev,rsize=32768,wsize=32768,x-systemd.device-timeout=9 0 0
10.100.100.10:/mnt/hddPool1/OpenNebula/datastores/101 /var/lib/one/datastores/101 nfs defaults,soft,intr,_netdev,rsize=32768,wsize=32768,x-systemd.device-timeout=9 0 0
```
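For reference, the mounts can be verified on each host like this (output shape approximate, options abbreviated):

```
$ mount -t nfs
10.100.100.10:/mnt/hddPool1/OpenNebula/datastores/100 on /var/lib/one/datastores/100 type nfs (rw,soft,intr,...)
10.100.100.10:/mnt/hddPool1/OpenNebula/datastores/101 on /var/lib/one/datastores/101 type nfs (rw,soft,intr,...)
```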

My datastores are shown below; the NFS datastores have TM_MAD=shared set, and the other default datastores have TM_MAD=ssh.
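For example, checking the transfer driver of each datastore (IDs from my fstab; datastore 0 is one of the default ssh datastores; output trimmed):

```
$ onedatastore show 100 | grep TM_MAD
TM_MAD="shared"
$ onedatastore show 0 | grep TM_MAD
TM_MAD="ssh"
```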

I also want to clarify that I do have `TM_MAD_SYSTEM="ssh"` added to my VM templates.
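For example (an excerpt; the template name is hypothetical and other attributes are omitted):

```
# excerpt from one of my VM templates; "my-vm" is a hypothetical name
NAME = "my-vm"
# force the ssh transfer driver for the system datastore
TM_MAD_SYSTEM = "ssh"
```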

Should I remove the NFS mounts from the KVM nodes?
I imagine this would accomplish what I require, even though it is not what the docs suggest.

NFS/NAS Datastores — OpenNebula 6.2.2 documentation

Hi @Spookily,

> If I deploy a VM to run on a non-NFS datastore, it runs fine on the KVM node's local storage; however, when I undeploy the VM it is stored on the front-end's local storage instead of on NFS.
> If I deploy a VM to run on an NFS datastore, it does not use the KVM node's local storage and runs the live disk directly from NFS, which is not what I want.

The behavior you're describing looks like the expected one to me. Could you provide more details about what you're trying to achieve?

Hi @cgonzalez

I would like the running VM to use the local storage of the KVM host, and when it is undeployed I want it stored in a datastore on the NAS.

At the moment, if it runs from the local storage of the KVM host, it is undeployed to the local storage of the front-end node.

I had read that exact doc many times before posting; it is not working for me as described there.

I think the simplest way of configuring that environment will be to use ssh datastores, so that the local storage of the hypervisor is used.

After that, in order to ensure that the VMs are stored on your NFS server when they are undeployed, you can mount the NFS export over, or symlink, the corresponding DS folder (i.e. `/var/lib/one/datastores/<ds_id>`) on the Front-End node so that it uses the NFS storage.
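A minimal sketch of both options on the Front-End (datastore ID 0 and the export path for it are hypothetical examples; adjust to your setup):

```
# Option A: mount the NFS export directly over the system DS path
mount -t nfs 10.100.100.10:/mnt/hddPool1/OpenNebula/datastores/0 /var/lib/one/datastores/0

# Option B: symlink the DS folder to an already-mounted NFS path
# (/mnt/nfs is a hypothetical local mount point of the NFS export)
mv /var/lib/one/datastores/0 /var/lib/one/datastores/0.local
ln -s /mnt/nfs/datastores/0 /var/lib/one/datastores/0
```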

That had crossed my mind as a temporary workaround, but it is not ideal.
This would mean the data is effectively being transferred twice, and makes the front-end a single point of failure.
I would prefer to have the kvm host transfer the undeployed VM to the NFS store directly if possible.

Hi @Spookily,

That's not supported. Note that if you're using the ssh drivers (to use the hypervisor node's local storage), OpenNebula cannot know whether or not an NFS server is available on those nodes. So when a VM using the ssh driver is undeployed, the VM is always transferred to the front-end node in order to free the resources on the hypervisor node.

The behavior you described can be achieved using the shared driver. In that scenario OpenNebula is aware that there is shared storage configured between the hypervisor nodes and the front-end, and it avoids transferring the VM.
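For reference, a minimal sketch of defining a system datastore with the shared driver (the name is an example):

```
$ cat system-shared.ds
NAME   = "nfs_system"
TYPE   = "SYSTEM_DS"
TM_MAD = "shared"
$ onedatastore create system-shared.ds
```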

But currently you cannot combine the two behaviors.