KVM/ONE: Improved host network control

Hi!

I recently attempted to build an OpenNebula cluster on KVM using HPE BL460s (which only have two Ethernet interfaces - especially if you have FC HBAs). The two interfaces were bonded together for HA and bandwidth, and added to a bridge with multiple tagged VLANs hanging off it. Everything worked well. The KVM host was managed by adding its IP to the bridge.
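For anyone picturing this, here is a rough sketch of the layout using iproute2. It assumes OpenNebula's 802.1Q driver style of per-VLAN bridges; all interface names, VLAN IDs and addresses are illustrative, not from the actual cluster:

```bash
# Bond the two NICs for HA and bandwidth:
ip link add bond0 type bond mode 802.3ad
ip link set eno1 down && ip link set eno1 master bond0
ip link set eno2 down && ip link set eno2 master bond0

# A per-VLAN bridge of the kind the driver creates on demand:
ip link add link bond0 name bond0.100 type vlan id 100
ip link add br100 type bridge
ip link set bond0.100 master br100

# The catch: the host management IP also lived on such a bridge,
# which OpenNebula removes once the last VM using it is gone.
ip addr add 192.0.2.10/24 dev br100
for i in eno1 eno2 bond0 bond0.100 br100; do ip link set "$i" up; done
```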

Upon deleting the VMs and attempting to redeploy, it became apparent that the bridge is deleted when no VMs require it. This is good practice - but not when the management interface is also on that bridge! :slight_smile:

As a workaround, the bonded interfaces now terminate in a bridge, which is connected to a “management bridge” via virtual wires (veth pairs). A second “VLAN” bridge is also connected via veth; that is the bridge that gets deleted when there are no more VMs. This also allows a third bridge with an IP address to be connected, encapsulating any VXLAN traffic into its own vxtransit VLAN. Nice!
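A sketch of that workaround topology, again with purely illustrative names and addresses (and with VLAN tagging/filtering on the trunk omitted for brevity):

```bash
# br-trunk terminates the bond; everything else hangs off it via
# veth "virtual wires".
ip link add br-trunk type bridge
ip link set bond0 master br-trunk

# Wire 1: persistent management bridge, never touched by OpenNebula:
ip link add veth-mgmt0 type veth peer name veth-mgmt1
ip link add br-mgmt type bridge
ip link set veth-mgmt0 master br-trunk
ip link set veth-mgmt1 master br-mgmt
ip addr add 192.0.2.10/24 dev br-mgmt   # management IP survives VM churn

# Wire 2: the "VLAN" bridge OpenNebula manages and may delete freely:
ip link add veth-vm0 type veth peer name veth-vm1
ip link add br-vlan type bridge
ip link set veth-vm0 master br-trunk
ip link set veth-vm1 master br-vlan

# Wire 3: bridge carrying the VTEP IP, so VXLAN traffic rides its own
# vxtransit VLAN:
ip link add veth-vx0 type veth peer name veth-vx1
ip link add br-vxt type bridge
ip link set veth-vx0 master br-trunk
ip link set veth-vx1 master br-vxt
ip addr add 198.51.100.10/24 dev br-vxt

for i in br-trunk br-mgmt br-vlan br-vxt \
         veth-mgmt0 veth-mgmt1 veth-vm0 veth-vm1 veth-vx0 veth-vx1; do
  ip link set "$i" up
done
```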

This is, however, a spaghetti of cobbled-together components, and it would be greatly simplified if OpenNebula had some awareness of non-managed L3 interfaces on Linux bridges, such as management interfaces and VTEPs. By contrast, VMware has the concept of the VMkernel port - an object which sits on a vSwitch or Distributed vSwitch for non-VM traffic such as vMotion, iSCSI and management, preventing vSwitches and port groups from being deleted.

I think it would be a great feature for OpenNebula to support more advanced host networking - even if it just means bridging bridges with veth pairs, at least this could be deployed and deleted dynamically as and when a host needs it.

I have a small mass of scripts and diagrams to share with anyone who wants (well…needs) to recreate this crazy setup :crazy_face:

You could try setting :keep_empty_bridge: true in /var/lib/one/remotes/etc/vnm/OpenNebulaNetwork.conf.

(and as it is a file in “remotes” - I’d re-sync the hosts with su - oneadmin -c 'onehost sync --force' after editing the file)
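For reference, a minimal sketch of both steps together (the sed line assumes the key is already present in the shipped file; adjust if yours differs):

```bash
# Keep bridges in place even when no VM needs them:
sed -i 's/^:keep_empty_bridge:.*/:keep_empty_bridge: true/' \
  /var/lib/one/remotes/etc/vnm/OpenNebulaNetwork.conf

# Push the updated remotes out to every host:
su - oneadmin -c 'onehost sync --force'
```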

Best Regards,
Anton Todorov

Thanks Anton - I have ended up doing exactly that where only a management IP is required, but it’s insufficient when more complex networking (e.g. nesting VXLANs inside a VLAN) is needed. I suppose I am really just looking for solutions for more granular management of the host networking. Some configuration can be performed when the nodes are built (e.g. with Cobbler or Digital Rebar) or by an orchestration tool (Ansible?), but it would just be fantastic if this level of control could be provided as part of on-boarding a host to OpenNebula (for example), or better still, applied dynamically across the cluster, ideally from Sunstone. :slight_smile:

Hi, have you tried replacing the Linux bridge with an Open vSwitch bridge?
You may get the VMware-like behavior.
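A minimal sketch of what that could look like (names and addresses illustrative): an OVS "internal" port behaves much like a VMkernel port, in that it carries the host's own IP and is not removed as VM ports come and go.

```bash
# Trunk bridge with the bond as an uplink:
ovs-vsctl add-br ovsbr0
ovs-vsctl add-port ovsbr0 bond0

# Internal port for the host management IP, optionally on its own VLAN:
ovs-vsctl add-port ovsbr0 mgmt0 -- set interface mgmt0 type=internal
ovs-vsctl set port mgmt0 tag=10

ip link set mgmt0 up
ip addr add 192.0.2.10/24 dev mgmt0
```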