I had a running 5.12 cluster in my home lab and decided to upgrade/migrate it to 6.0.
I’ve been an IT admin for some 20 years, so I was able to migrate it easily enough; however, I have a slightly odd networking config.
I static-route (next-hop) private-range /24s to each host. This may not be intended functionality, but for my network topology it worked the way I wanted: it gave me natively routable connectivity to subnets other than my main LAN.
So the setup was pretty simple:

router (172.16.0.1) → static route 192.168.3.0/24 → 172.16.0.50 (one KVM host)

KVM host:
eth0 172.16.0.50
virbr0 192.168.3.1
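In plain iproute2 terms, the routing side of that looks roughly like this (a sketch, not my actual configs; my router is an appliance, so the first command is just the Linux equivalent of its static-route entry):

```
# On the router: send the VM subnet to the KVM host's LAN address
ip route add 192.168.3.0/24 via 172.16.0.50

# On the KVM host:
ip addr add 172.16.0.50/24 dev eth0      # LAN-facing NIC
ip addr add 192.168.3.1/24 dev virbr0    # gateway for the VM subnet
sysctl -w net.ipv4.ip_forward=1          # host routes between the two
```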
The Virtual Network was configured accordingly, with the correct subnet, netmask, and gateway…
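For completeness, the virtual network definition was along these lines (reconstructed from memory; the template name and address-range size are placeholders, but the attribute names are standard OpenNebula vnet attributes):

```
# Sketch of the vnet definition for this host's /24
cat > vnet-host50.tmpl <<'EOF'
NAME            = "vnet-host50"
VN_MAD          = "bridge"
BRIDGE          = "virbr0"
NETWORK_ADDRESS = "192.168.3.0"
NETWORK_MASK    = "255.255.255.0"
GATEWAY         = "192.168.3.1"
AR = [ TYPE = "IP4", IP = "192.168.3.10", SIZE = "200" ]
EOF
onevnet create vnet-host50.tmpl
```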
Now, this worked “OK”, but I was not in love with the setup: it meant I had to manually route a /24 to each host, and when creating VMs I had to manually specify which host and network to use. I would have preferred that it “figure it out”, so to speak, i.e. that I could declare that this virtual network is bound to this host and this host only, and that VMs should only use this network when placed on this host. But as far as I can tell, OpenNebula does not have a way to automatically lock hosts to specific networks.
Right, so on to the main issue, and I can’t really find anything in the logs that tells me why this is happening…
Now, after upgrading to 6.0, every time a network gets attached to a VM, the host’s eth0 loses its IP and gateway. They literally get removed from eth0, the host goes offline, and I have to either reboot it or restart the network stack.
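If it helps anyone reproduce this, the moment the address disappears can be caught in real time with something like this (a diagnostic sketch, not something I pulled from the logs):

```
# Run on the host while attaching a network to a VM; the IP and
# route being stripped from eth0 should appear as "Deleted" events
ip monitor address route
```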
Interestingly, if I restart the network stack (down && up on eth0), everything starts working again: the host is pingable and the VM is pingable. But if I reboot the VM or start a new one (or rather, attach a network to a VM, because VMs with no network attached do not cause the problem), everything goes down again and eth0 on the host loses its IP/route.
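For reference, by “restart the network stack” I mean roughly the following (an iproute2 sketch using the addresses from my setup above; the distro’s network service, e.g. `systemctl restart NetworkManager`, does the same job):

```
# Bounce the interface, then put the address and default route back
ip link set eth0 down && ip link set eth0 up
ip addr add 172.16.0.50/24 dev eth0
ip route add default via 172.16.0.1 dev eth0
```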
So has something major changed in 6.0 that would cause this?