jaimeibar
(Jaime)
February 22, 2023, 1:16pm
1
Hi,
I’m getting this error while powering off VMs:
apparmor="DENIED" operation="signal" profile="libvirt-e7930264-e170-4aa5-8625-8d558fa97c5d" pid=2434 comm="libvirtd" requested_mask="receive" denied_mask="receive" signal=term peer="libvirtd"
It started happening after upgrading from Ubuntu 18.04 to Ubuntu 20.04.
Is anyone else having this issue?
Thanks
Jaime
Walhanharja
(Steven Walkiers)
December 19, 2023, 9:31am
2
Yes, we have the same problem. Running on 20.04, we recently discovered this issue. We emptied the machine, fully updated it (including AppArmor), and tried to replicate the problem. The problem is still there. If anyone knows an answer to this problem, it would be much appreciated.
mkutouski
(Mikalai Kutouski)
December 19, 2023, 9:41am
3
@Walhanharja ,
have you tried what is written in the OpenNebula docs?
Depending on your OpenNebula deployment type, the following line might be required in the /etc/apparmor.d/abstractions/libvirt-qemu profile:
/var/lib/one/datastores/** rwk,
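A minimal sketch of applying that (the path and rule are the ones from the docs quoted above; run on the hypervisor node):
# Append the rule to the libvirt-qemu abstraction, then restart AppArmor
# so the per-VM libvirt profiles pick up the change
echo '  /var/lib/one/datastores/** rwk,' | sudo tee -a /etc/apparmor.d/abstractions/libvirt-qemu
sudo systemctl restart apparmor.service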
Walhanharja
(Steven Walkiers)
December 19, 2023, 10:00am
4
Good day mkutouski,
and thank you for your reply. I added the line to the libvirt-qemu file, reloaded the service, and I will test it right away. I’ll keep you posted in case the solution turns out to be a ‘pointing out’ of my bad reading (again).
Walhanharja
(Steven Walkiers)
December 19, 2023, 10:42am
5
Hello again,
it was not the solution. When trying to power off the VM, we get this message:
[90707.723486] audit: type=1400 audit(1702981277.201:152): apparmor="DENIED" operation="signal" profile="libvirt-f1089794-57ad-4ef5-aa82-c7a4f9cf8208" pid=1235 comm="libvirtd" requested_mask="receive" denied_mask="receive" signal=term peer="libvirtd"
In VNC we see the machine reach “Reached target Power-Off”, but we think the VM cannot send its power-off signal back to the host.
When I run virsh list, I get “one-160 in shutdown”, and it stays in this state.
We have a few nodes, but the problem only exists on this node. Do you have any ideas left?
Maybe my reading back in the day wasn’t that bad.
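For reference, this is how we watch for the denial while reproducing it (standard virsh/dmesg commands; one-160 is the stuck VM from above):
# Follow kernel audit messages for AppArmor denials during the power-off
sudo dmesg --follow | grep 'apparmor="DENIED"'
# In another shell, check whether the domain is stuck in shutdown
virsh list --all
virsh domstate one-160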
mkutouski
(Mikalai Kutouski)
December 20, 2023, 9:11am
6
Could you please provide the following information (there is a copy-paste version after the list):
OpenNebula version (oned --version executed on your OpenNebula front-end node)
Libvirt version (libvirtd --version executed on your hypervisor node)
Ubuntu version (lsb_release -a executed on your hypervisor node)
AppArmor version (dpkg -s apparmor | grep '^Version:' executed on your hypervisor node)
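For convenience, the same checks as one block (run the first command on the front-end node and the rest on the hypervisor node):
oned --version                        # OpenNebula front-end node
libvirtd --version                    # hypervisor node
lsb_release -a                        # hypervisor node
dpkg -s apparmor | grep '^Version:'   # hypervisor node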
Walhanharja
(Steven Walkiers)
December 20, 2023, 10:53am
7
Could you please provide the following information:
Sure, with pleasure :) and thank you for looking into this
OpenNebula version (oned --version executed on your OpenNebula front-end node)
oned --version
OpenNebula 6.0.0.3 (9284d740)
--> yes, we are going to upgrade ASAP, but we’re working on the errors first
Libvirt version (libvirtd --version executed on your hypervisor node)
libvirtd --version
libvirtd (libvirt) 6.0.0
Ubuntu version (lsb_release -a executed on your hypervisor node)
lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 20.04.6 LTS
Release: 20.04
Codename: focal
AppArmor version (dpkg -s apparmor | grep '^Version:' executed on your hypervisor node)
dpkg -s apparmor | grep '^Version:'
Version: 2.13.3-7ubuntu5.3
I am very grateful to you for your help.
jaimeibar
(Jaime)
8
Hi,
I’m still having this issue; these are the versions we are running:
> oned --version
OpenNebula 6.8.0 (bf030d2b)
> libvirtd --version
libvirtd (libvirt) 6.0.0
> lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 20.04.6 LTS
Release: 20.04
Codename: focal
> dpkg -s apparmor | grep '^Version:'
Version: 2.13.3-7ubuntu5.2
Hope that helps.
Thanks
Jaime
mkutouski
(Mikalai Kutouski)
April 23, 2024, 3:04pm
9
Sorry for the late reply.
It seems the problem is the absence of these two lines:
signal (receive) peer=libvirtd,
signal (receive) peer=/usr/sbin/libvirtd,
in the /etc/apparmor.d/abstractions/libvirt-qemu file on the hypervisor node.
So the solution seems to be to add these two lines and restart apparmor.service:
systemctl restart apparmor.service
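Put together, a sketch of the whole fix (paths and rules as above; run on the affected hypervisor node, and check the file first so the rules are not duplicated):
# Append the missing signal rules to the libvirt-qemu abstraction
sudo tee -a /etc/apparmor.d/abstractions/libvirt-qemu <<'EOF'
  signal (receive) peer=libvirtd,
  signal (receive) peer=/usr/sbin/libvirtd,
EOF
# Restart AppArmor so the per-VM libvirt-* profiles pick up the change
sudo systemctl restart apparmor.service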