I recently attempted to build an OpenNebula cluster on KVM using HPE BL460s (which only have two Ethernet interfaces, especially if you have FC HBAs). The two interfaces were bonded together for HA and bandwidth, and the bond was added to a bridge with multiple tagged VLANs hanging off it. Everything worked well. The KVM host was managed by adding its IP to the bridge.
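For reference, one way to realise that original topology with iproute2 looks roughly like this. All the interface names, VLAN IDs and the address are placeholders of my choosing, not anything OpenNebula generates:

```shell
# Bond the two onboard NICs for HA/bandwidth (names eno1/eno2 assumed)
ip link add bond0 type bond mode 802.3ad miimon 100
ip link set eno1 down; ip link set eno1 master bond0
ip link set eno2 down; ip link set eno2 master bond0

# Single VLAN-aware bridge with the bond as its uplink port
ip link add br0 type bridge vlan_filtering 1
ip link set bond0 master br0

# Tagged guest VLANs on the uplink port (IDs are examples)
bridge vlan add dev bond0 vid 100
bridge vlan add dev bond0 vid 200

ip link set eno1 up; ip link set eno2 up
ip link set bond0 up; ip link set br0 up

# Management IP on the bridge itself -- this is the part that
# disappears if the bridge is ever torn down
ip addr add 192.0.2.10/24 dev br0
```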
Upon deleting the VMs and attempting to redeploy, it became apparent that the bridge is deleted once no VMs require it. That is good housekeeping in general, but not when the management interface is also on that bridge!
As a workaround, the bonded interfaces now terminate in an uplink bridge, which is connected to a “management bridge” via a virtual wire (veth pair). A second “VLAN” bridge is also connected via veth; this VLAN bridge is the one that gets deleted when there are no more VMs. The same trick allows a third bridge with an IP address to be attached for encapsulating any VXLAN traffic into its own “vxtransit” VLAN. Nice!
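A minimal sketch of that workaround topology, again with placeholder names (br-uplink, br-mgmt, br-vlan, the veth names and the address are all my own inventions):

```shell
# br-uplink holds the bond and is never handed to OpenNebula,
# so it survives VM deletion
ip link add br-uplink type bridge
ip link set bond0 master br-uplink

# veth pair "patch cable" from the uplink bridge to a management
# bridge that carries the host's IP
ip link add veth-mgmt0 type veth peer name veth-mgmt1
ip link add br-mgmt type bridge
ip link set veth-mgmt0 master br-uplink
ip link set veth-mgmt1 master br-mgmt
ip addr add 192.0.2.10/24 dev br-mgmt

# second veth pair to the disposable VLAN bridge that OpenNebula
# manages -- safe to delete when no VMs remain
ip link add veth-vlan0 type veth peer name veth-vlan1
ip link add br-vlan type bridge
ip link set veth-vlan0 master br-uplink
ip link set veth-vlan1 master br-vlan

# bring everything up
for i in br-uplink br-mgmt br-vlan \
         veth-mgmt0 veth-mgmt1 veth-vlan0 veth-vlan1; do
    ip link set "$i" up
done
```

A third bridge for the VXLAN transit VLAN would be attached the same way, with its own veth pair and IP.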
This is, however, a spaghetti of cobbled-together components, and it would be greatly simplified if OpenNebula had some awareness of non-managed L3 interfaces in Linux bridges, such as management interfaces, VTEPs and the like. By contrast, VMware has the concept of the VMkernel adapter: an object that sits on a vSwitch or Distributed Switch for non-VM traffic such as vMotion, iSCSI and management, and prevents those switches and port groups from being deleted.
I think it would be a great feature for OpenNebula to support more advanced host networking. Even if that just means bridging bridges with veth pairs, at least these could be deployed and deleted dynamically as and when a host needs them.
I have a small mass of scripts and diagrams to share with anyone who wants (well… needs) to recreate this crazy setup.