Weird virsh/network issue after upgrade

For the life of me I can’t figure this out - I’m about 24 hours in now lol

Mon Aug 30 16:13:03 2021 [Z0][VMM][I]: Command execution fail: cat << EOT | /var/tmp/one/vmm/kvm/attach_nic '01363b51-0bad-43b5-9017-347b56375485' '02:00:ac:10:64:02' 'minionebr' '-' 'bridge' 'one-88-1' 88 vhost2.maas
Mon Aug 30 16:13:03 2021 [Z0][VMM][E]: attach_nic: Command "virsh --connect qemu:///system attach-device 01363b51-0bad-43b5-9017-347b56375485 <(
Mon Aug 30 16:13:03 2021 [Z0][VMM][I]: cat <<EOT
Mon Aug 30 16:13:03 2021 [Z0][VMM][I]: <interface type='bridge'> <source bridge='minionebr'/> <mac address='02:00:ac:10:64:02'/> <target dev='one-88-1'/> </interface>
Mon Aug 30 16:13:03 2021 [Z0][VMM][I]: EOT
Mon Aug 30 16:13:03 2021 [Z0][VMM][I]: )" failed: error: Failed to attach device from /dev/fd/63
Mon Aug 30 16:13:03 2021 [Z0][VMM][I]: error: internal error: unable to execute QEMU command 'device_add': Bus 'pci.0' does not support hotplugging
Mon Aug 30 16:13:03 2021 [Z0][VMM][E]: Could not attach NIC 1 (02:00:ac:10:64:02) to 01363b51-0bad-43b5-9017-347b56375485
Mon Aug 30 16:13:03 2021 [Z0][VMM][I]: ExitCode: 1
Mon Aug 30 16:13:03 2021 [Z0][VMM][I]: Failed to execute virtualization driver operation: attach_nic.
Mon Aug 30 16:13:03 2021 [Z0][IPM][D]: Message received: ATTACHNIC FAILURE 88 Could not attach NIC 1 (02:00:ac:10:64:02) to 01363b51-0bad-43b5-9017-347b56375485

It doesn’t seem to matter what I do in terms of the network setup itself - this appears to be an issue with the actual attachment operation.

Happy to provide any additional information. VMs and hosts were working prior to updating to 6.0. ‘error: internal error: unable to execute QEMU command ‘device_add’: Bus ‘pci.0’ does not support hotplugging’ appears to be the issue, but my google fu isn’t directing me toward anything.
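For anyone hitting the same error: whether a PCI bus accepts hotplug depends on the machine type and PCI controllers the domain was booted with, so one thing worth checking is what the failing guest is actually running on. A rough sketch (the UUID is taken from the log above; adjust for your domain):

```shell
# Inspect the machine type and PCI controllers of the failing domain.
# 'pci.0' refers to the root PCI bus defined in this XML.
virsh --connect qemu:///system dumpxml 01363b51-0bad-43b5-9017-347b56375485 \
  | grep -E "machine=|<controller type='pci'"
```

If the machine type or controller layout changed between the old and new QEMU/libvirt packages, a guest started before the upgrade can end up with a bus definition that no longer matches what hotplug expects.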

Maybe it is not related, but did you run onehost sync to update the host scripts? In particular, /var/tmp/one/vmm/kvm/attach_nic may be outdated.

As the new version uses rsync, it is safer to remove /var/tmp/one from the hypervisors and then perform a onehost sync --force…
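The steps above, roughly (the hostname is taken from the log earlier in the thread and will differ on your setup; run the sync from the front-end as oneadmin):

```shell
# On each hypervisor, remove the old driver scripts so the sync
# pushes a clean copy rather than rsyncing over stale files:
ssh oneadmin@vhost2.maas 'rm -rf /var/tmp/one'

# Then, from the front-end, force a full re-deploy of the driver scripts:
onehost sync --force
```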

Cheers

Hi,

I ended up throwing my hands up and rebuilding the entire cluster about an hour after my first post - apologies for not coming back and updating.

The good part is that it was simply a case of reprovisioning the hosts in MAAS and a quick run of an Ansible playbook (after a couple of wee modifications), and my hosts were back up and running. I did lose VMs, but this is a lab env so that’s not a huge deal, and those VMs were ‘provisioned’ VMs as well.

I do have another issue where almost everything is working but I now can’t migrate VMs - but I think I need to set up some new datastores for that one.
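For the migration issue, a quick sanity check (a sketch, assuming default datastore IDs; live migration generally wants a shared transfer driver rather than ssh):

```shell
# List the datastores and their transfer manager drivers;
# look at the TM column / TM_MAD attribute for the system datastore:
onedatastore list
onedatastore show 0 | grep TM_MAD
```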

OK, Thanks for the update :slight_smile:

Good Luck!