Libvirtd running as root tries to access oneadmin NFS mount: error: can't canonicalize path

Hey guys,

I’ve followed the wiki to install OpenNebula for a quick run, but when it tries to create a VM it engages the libvirtd daemon, which itself runs as root, and fails with a “can’t canonicalize path” message. Digging further, it seems virsh, or some function in it, is trying to access the NFS mount as root. I have root_squash enabled and for security reasons I need to keep it. How do I get around this? Should I just point VM creation to the node’s existing datastore (i.e. SAN, DAS), or is there a way to allow VM creation on the NFS mount without resorting to no_root_squash?

Direct VM creation on the node works fine without an NFS mount, but I would like to resolve this so NFS can be used going forward. I considered running libvirtd as a non-root user, but it unsurprisingly asked for root credentials to start up.
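For background on why root hits Permission denied while oneadmin does not: with root_squash, the NFS server maps UID 0 coming from the client to the anonymous user, so a root-owned libvirtd loses access the moment it touches the mount. A hypothetical /etc/exports line with squashing enabled (the export path and client subnet here are placeholders, not from my setup) would look like:

```
/var/lib/one    192.168.0.0/24(rw,sync,root_squash)
```

root_squash is the default on most NFS servers; no_root_squash disables it, which is exactly what I’d rather avoid.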

[oneadmin@mdskvm-p01 ~]$ virsh --connect qemu:///system create /var/lib/one//datastores/0/38/deployment.0
error: Failed to create domain from /var/lib/one//datastores/0/38/deployment.0
error: can't canonicalize path '/var/lib/one//datastores/0/38/disk.1': Permission denied

[oneadmin@mdskvm-p01 ~]$ ps -ef|grep -i libvirtd
root 12564 1 0 00:18 ? 00:00:00 /usr/sbin/libvirtd
oneadmin 14853 12479 0 00:23 pts/1 00:00:00 grep --color=auto -i libvirtd
[oneadmin@mdskvm-p01 ~]$

As oneadmin, I tested file creation on the NFS mount and it works fine. I also synced the permissions of the mount point with the remote /var/lib/one on the controller, but that didn’t help either. Is there any way to disable canonicalization of paths in libvirtd?


Hi TomK,

What you’ve probably missed is allowing oneadmin to control libvirtd on your hosts.
We have the exact same setup and made sure that the oneadmin UID and GID are the same on the NFS host and the OpenNebula servers (in our case 9869 for both), and made the user a member of the following groups on the virtualization servers:

id oneadmin
uid=9869(oneadmin) gid=9869(oneadmin) groups=9869(oneadmin),27(sudo),113(kvm),114(libvirtd)

Then you configure libvirtd to allow oneadmin to control libvirtd as root, locally, on every hypervisor:
> 2.5. Configure Qemu
> The oneadmin user must be able to manage libvirt as root:
> cat << EOT > /etc/libvirt/qemu.conf
> user = "oneadmin"
> group = "oneadmin"
> dynamic_ownership = 0
> EOT

Restart libvirt to capture these changes:

service libvirt-bin restart
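One pitfall worth calling out: the quotes in the snippet above must be plain ASCII double quotes. The curly quotes some wikis render end up literally in qemu.conf and libvirt then fails to match the user. A minimal sketch of what the heredoc produces, written to a scratch file instead of the real /etc/libvirt/qemu.conf:

```shell
# Write the three settings with straight quotes to a scratch copy
# (on a real hypervisor the target would be /etc/libvirt/qemu.conf).
cat << 'EOT' > /tmp/qemu.conf.demo
user = "oneadmin"
group = "oneadmin"
dynamic_ownership = 0
EOT
# Confirm both user and group were set with proper quoting:
grep -c '"oneadmin"' /tmp/qemu.conf.demo   # prints 2
```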

With this, we can keep root_squash as an NFS option. This is just how we have done it; there are probably other (maybe even better) ways to do it.
Hope this helps you find a solution. :slight_smile:

Hey Roland,

First, apologies. I tossed this into the development channels by accident.

Second, I was following this:

I read online that I should add oneadmin to the libvirtd group, which I did, but not yet to the sudo/kvm groups. I’ll try that next. The rest of my configuration was identical to what you posted. My UID/GID also match between controller and node, so that appears to be OK.

So I’ll try adding oneadmin to the sudo/kvm groups next and test drive it.

It seems I really should try Ubuntu. Is that where OpenNebula development is done?


If this doesn’t belong in development, please move it to support. I tried the suggestions above, with the exception of the sudo group, which doesn’t exist on RHEL 7 clones, but no luck:

[oneadmin@mdskvm-p01 ~]$ id oneadmin
uid=9869(oneadmin) gid=9869(oneadmin) groups=9869(oneadmin),992(libvirt),36(kvm)
[oneadmin@mdskvm-p01 ~]$


It is not mentioned in the quickstart for CentOS, only for Ubuntu, but could you paste the content of /etc/libvirt/qemu.conf?
Using Ubuntu or CentOS shouldn’t matter; OpenNebula works fine on both distros.
As you mentioned, access to NFS as oneadmin is no problem; only starting a VM through libvirt (as root) fails, so that points to a permission issue.
When I check my virtualization hosts, I see that oneadmin started the libvirt processes, whereas you still have root as the libvirtd user.

With the change to qemu.conf, oneadmin is able to use libvirt. I’d guess it is either that, or maybe SELinux is the problem?

Thanks Roland. Here’s the file:

[root@mdskvm-p01 ~]# cat /etc/libvirt/qemu.conf
user = "oneadmin"
group = "oneadmin"
dynamic_ownership = 0
[root@mdskvm-p01 ~]# getenforce
[root@mdskvm-p01 ~]#

[root@mdskvm-p01 qemu]# grep -v "#" /etc/sysconfig/libvirtd
LIBVIRTD_ARGS="--listen --config /etc/libvirt/libvirtd.conf"
[root@mdskvm-p01 qemu]#
[root@mdskvm-p01 qemu]# grep -v "#" /etc/libvirt/libvirtd.conf | sed '/^\s*$/d'
log_level = 1
unix_sock_group = "oneadmin"
unix_sock_ro_perms = "0777"
unix_sock_rw_perms = "0770"
auth_unix_ro = "none"
auth_unix_rw = "none"
listen_tls = 0
listen_tcp = 1
auth_tcp = "none"
[root@mdskvm-p01 qemu]#

On the node:

[root@mdskvm-p01 ~]# getenforce
[root@mdskvm-p01 ~]#

[root@mdskvm-p01 ~]# ls -altrid /var/lib/one
68718089 drwxr-x—. 5 oneadmin oneadmin 70 Apr 5 21:37 /var/lib/one
[root@mdskvm-p01 ~]# mount /var/lib/one
[root@mdskvm-p01 ~]# ls -altrid /var/lib/one
1405 drwxr-x— 12 oneadmin oneadmin 4096 Apr 6 20:20 /var/lib/one
[root@mdskvm-p01 ~]# mount | tail -n 1
tmpfs on /run/user/9869 type tmpfs (rw,nosuid,nodev,relatime,size=7405336k,mode=700,uid=9869,gid=9869)
[root@mdskvm-p01 ~]# mount | grep ".70"
on /var/lib/one type nfs4 (rw,relatime,vers=4.0,rsize=8192,wsize=8192,namlen=255,soft,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=,local_lock=none,addr=
[root@mdskvm-p01 ~]#

On the controller:

[root@opennebula01 ~]# getenforce
[root@opennebula01 ~]#

I also noticed that the folder has an SELinux context on it, due to context rules I had added to the mount options:

# /var/lib/one/ nfs context=system_u:object_r:nfs_t:s0,soft,intr,rsize=8192,wsize=8192,noauto
/var/lib/one/ nfs soft,intr,rsize=8192,wsize=8192,noauto

So I removed the context option from the /etc/fstab mount line, remounted, and retried the virsh command:

[oneadmin@mdskvm-p01 ~]$ virsh -d 1 --connect qemu:///system create /var/lib/one//datastores/0/38/deployment.0
create: file(optdata): /var/lib/one//datastores/0/38/deployment.0
error: Failed to create domain from /var/lib/one//datastores/0/38/deployment.0
error: can't canonicalize path '/var/lib/one//datastores/0/38/disk.1': Permission denied

but got the same thing. I added some debug flags to get more info and added -x to the deploy script. The closest I get to more detail is this:

2016-04-06 04:15:35.945+0000: 14072: debug : virStorageFileBackendFileInit:1441 : initializing FS storage file 0x7f6aa4009000 (file:/var/lib/one//datastores/0/38/disk.1)[9869:9869]
2016-04-06 04:15:35.954+0000: 14072: error : virStorageFileBackendFileGetUniqueIdentifier:1523 : can't canonicalize path '/var/lib/one//datastores/0/38/disk.1':

The comment in the libvirt source is: “The current implementation works for local storage only and returns the canonical path of the volume.”

But it seems the logic is applied to NFS mounts as well. Perhaps it shouldn’t be?
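For reference, “canonicalizing” a path means resolving duplicate slashes, `.` and `..` components, and symlinks into one clean absolute path, and the process doing the resolving must be able to traverse (execute) every directory along the way. That traversal is what fails here for the squashed root user. A small sketch using a scratch directory (paths are illustrative, not my real datastore):

```shell
# Build a scratch tree, then canonicalize a messy path into it.
# realpath collapses the double slash and the "." component.
d=$(mktemp -d)
mkdir -p "$d/a/b"
realpath "$d//a/./b"   # prints the clean path: $d/a/b
```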


I’ve checked our setup; the only difference (besides using Ubuntu) is NFSv3 instead of NFSv4.
You also use sec=sys, so you could try NFSv3 to see if that makes a difference.
From the message you quoted, access to the NFS mount itself does not seem to be the issue; the problem starts when the disk image (disk.1) is accessed. Is it possible to access/read the disk.1 file as oneadmin from a node over NFS?

The /var/lib/one/datastores/0/38/disk.1 file is accessible on the NFS mounted on the node mdskvm-p01:

[root@mdskvm-p01 ~]# su - oneadmin
Last login: Wed Apr 6 22:15:29 EDT 2016 on pts/1
Last failed login: Sat Apr 9 10:38:58 EDT 2016 from opennebula01 on ssh:notty
There were 10144 failed login attempts since the last successful login.
[oneadmin@mdskvm-p01 ~]$ pwd
[oneadmin@mdskvm-p01 ~]$ whoami
[oneadmin@mdskvm-p01 ~]$ ls -altri /var/lib/one//datastores/0/38/disk.1
34642274 -rw-r--r-- 1 oneadmin oneadmin 372736 Apr 5 00:20 /var/lib/one//datastores/0/38/disk.1
[oneadmin@mdskvm-p01 ~]$ file /var/lib/one//datastores/0/38/disk.1
/var/lib/one//datastores/0/38/disk.1: ISO 9660 CD-ROM filesystem data 'CONTEXT'
[oneadmin@mdskvm-p01 ~]$ strings /var/lib/one//datastores/0/38/disk.1|head

[oneadmin@mdskvm-p01 ~]$

I’ll see what I can do about trying NFSv3 on CentOS 7, though I’m not sure how well NFSv3 is supported there; that OS is significantly different from its predecessors. I’ve also posted on the libvirt mailing lists to see what feedback I get, and will let you know what they say. Do you have a CentOS 7 environment on your end to try to reproduce the issue?


I checked my CentOS 6 installation and the following package is available there, but not on CentOS 7, where only NFSv4 is available. Even if NFSv3 works and NFSv4 doesn’t, my next question would only be how to make it work with NFSv4:

unfs3.x86_64 : UNFS3 user-space NFSv3 server

I’m not yet that knowledgeable about OpenNebula, but I would expect the recommended practice for an OpenNebula infrastructure is to mount the SAN / NFS volumes on the nodes and import them as datastores into the OpenNebula controller for VM provisioning. Is this correct? And is it only for live migration purposes that I would mount a shared NFS volume across the nodes and the OpenNebula controller?


You probably want to jump in on the thread I have with Red Hat on the libvirt-users and libvir-list mailing lists. It’s turning into an interesting discussion of this topic.


Adding o+rx to /var/lib/one, per the libvirt mailing lists, did the trick and got rid of the canonicalize message. However, I don’t believe that matches how OpenNebula was designed; /var/lib/one should have no permissions for others. Is this correct?


It appears there may be a change in NFS root_squash behaviour that has to be addressed. John Ferlan from Red Hat is going to have a look.

chmod o+x on the folder does the trick, so I can skip the r flag for now as a workaround.
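To illustrate the workaround on a scratch directory (the real target would be /var/lib/one): the execute bit for “other” is what lets a foreign UID, here the squashed root, traverse the directory during path canonicalization, while the read bit would additionally let it list the directory’s contents, which isn’t needed.

```shell
# Mimic the oneadmin-only directory, then open traversal for others.
d=$(mktemp -d)
chmod 750 "$d"     # rwxr-x--- : the original permissions
chmod o+x "$d"     # rwxr-x--x : others may traverse, but not list or read
stat -c '%a' "$d"  # prints 751
```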