NetworkManager on CentOS7 KVM Compute Node?

In general, is it recommended to run NetworkManager on a CentOS 7 KVM hypervisor node or to use the old network.service and ifcfg files?

I am intending to use the 802.1Q and vxlan drivers for my OpenNebula 5.4 networking.

Thanks

Mike

Hello, I personally uninstall NetworkManager and run just network.service with ifcfg-* files. I only have to set up a team0 device and one private device for corosync.
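
For reference, a minimal sketch of the ifcfg files involved (the device and port names, and the lacp runner, are just examples from my setup, adjust them to your NICs and runner config):

# /etc/sysconfig/network-scripts/ifcfg-team0
DEVICE=team0
DEVICETYPE=Team
ONBOOT=yes
BOOTPROTO=none
NM_CONTROLLED=no
TEAM_CONFIG='{"runner": {"name": "lacp"}}'

# /etc/sysconfig/network-scripts/ifcfg-eno1 (one of the team ports)
DEVICE=eno1
DEVICETYPE=TeamPort
ONBOOT=yes
TEAM_MASTER=team0
NM_CONTROLLED=no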

The 802.1Q driver uses commands like this:

sudo brctl addbr onebr.103
sudo ip link set onebr.103 up
sudo ip link add link team0 name team0.103 mtu 1500 type vlan id 103 gvrp on
sudo ip link set team0.103 up
sudo brctl addif onebr.103 team0.103
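
I haven't dug into the vxlan driver as much, but manually the equivalent setup looks roughly like this (the interface name, VNI and multicast group are just placeholders, check what the driver actually creates on your nodes):

sudo ip link add vxlan103 type vxlan id 103 group 239.0.0.103 dev team0 dstport 8472
sudo ip link set vxlan103 up
sudo brctl addbr onebr.103
sudo ip link set onebr.103 up
sudo brctl addif onebr.103 vxlan103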

so I don’t see any need to have NetworkManager.

I personally also recommend Fedora 26 instead of CentOS 7. I used CentOS 7 in my KVM cluster for about 2 years and there were many problems. Something ugly (a buggy libqb) was fixed in the 7.3 release, but you are still stuck with an old kernel, old qemu and old libvirt.

The 3.10.* kernels have a problem with Debian Jessie guests showing 100% CPU steal time after live migration, so I was forced to use a 4.* kernel from the unofficial elrepo.org repository.

In Fedora you get a 4.* kernel, qemu 2.9, libvirt 3.2…

Hey,

You can use NM without any problems. We are even using it with LACP bonding in combination with 802.1Q.
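
For anyone curious, the rough shape of the nmcli side is something like this (connection and NIC names are just examples, and some option syntax varies between NM versions):

# LACP bond with two ports
nmcli con add type bond con-name bond0 ifname bond0 mode 802.3ad miimon 100
nmcli con add type bond-slave con-name bond0-port1 ifname eno1 master bond0
nmcli con add type bond-slave con-name bond0-port2 ifname eno2 master bond0
# VLAN 103 on top of the bond, enslaved to a bridge for the VMs
nmcli con add type bridge con-name br103 ifname br103 stp no
nmcli con add type vlan con-name bond0.103 ifname bond0.103 dev bond0 id 103
nmcli con modify bond0.103 connection.master br103 connection.slave-type bridge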

So far nothing to worry about.

Thanks for the responses.

The much longer version of why I was asking …

I have been following the Red Hat company line with CentOS 7 and using NM to do all the network configuration, including LACP (with the bond and team drivers), VLANs and bridges. Doing so has easily been one of the most frustrating exercises I have undertaken in a very long time, but that is another story that doesn’t need telling here.

I have two compute/storage nodes set up with the DRBDManage driver and was working my way through testing when the DRBDManage cluster suddenly started having issues which I am yet to fully resolve.

In the course of troubleshooting DRBDManage I noticed that on one of the nodes, NM had duplicate connections for 7 out of the 9 configured connections. By duplicate I mean connections with the same name and configuration except for the UUID. I found all of the original ifcfg file connections (with the original UUIDs) plus a partial set of NM keyfile connections (with new UUIDs)! The first indication in the logs that the keyfiles exist is when they were loaded after a reboot of the node; there is nothing to indicate why or when they were actually created. The keyfiles do not exist on another identically configured node in the cluster.
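
In case it helps anyone else, this is roughly how the duplicates showed up, just by listing the connections and comparing the two storage locations:

# duplicate connections appear as repeated names with different UUIDs
nmcli -f NAME,UUID,TYPE,DEVICE connection show
# compare what is stored as ifcfg files versus NM keyfiles
grep -H '^UUID' /etc/sysconfig/network-scripts/ifcfg-*
ls -l /etc/NetworkManager/system-connections/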

The only network configuration done since the nodes were originally installed and set up about a month ago has been done by the OpenNebula 802.1Q driver, which does nothing with NM. There are a lot (25-ish) of NM info log entries every time a VM is deployed with the 802.1Q driver. I view this as a lot of noise and resource consumption for no purpose or benefit that I can see.
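
To put a number on the noise, something like this shortly after a single VM deployment shows it:

# count NetworkManager log lines around a VM deploy (adjust the time window)
journalctl -u NetworkManager --since "5 min ago" | wc -l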

At this point I am seriously considering switching to network.service and static ifcfg files. At least I know they aren’t going to change on me of their own accord.
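
If I do go that way, the switch itself is simple enough (a sketch, assuming the ifcfg files are already complete and marked NM_CONTROLLED=no so NM leaves them alone):

sudo systemctl stop NetworkManager
sudo systemctl disable NetworkManager
sudo systemctl enable network
sudo systemctl restart network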

I hadn’t thought about using Fedora for the nodes. I do use Fedora on my desktop, a couple of laptops and a MythTV server. I have always used CentOS for servers because of its longer supported life between rebuilds, but my experience with upgrading Fedora across releases has been good of late, so that is now less of a reason. There is certainly an attraction to having later kernels, qemu and libvirt, so maybe at the next rebuild.

Thanks again for the responses.