VM scheduler doesn't take virtual nets into consideration

Hello,

We have deployed ONE 5.6.1 on KVM hosts with network bridges and security groups. All of our hosts have a standard “internal” vnet on br0, and due to some resource constraints only a handful of hosts have an interface configured with the “external” vnet on br1. However, when I provision a new VM with two vnics (one on the internal vnet and one on the external vnet), the scheduler does not take this requirement into account, and deployment fails unless I explicitly select a host with the appropriate vnets configured.
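For context, the network section of the VM template looks roughly like this (the vnet names are just what we use internally):

    NIC = [ NETWORK = "internal" ]   # br0, present on every host
    NIC = [ NETWORK = "external" ]   # br1, only configured on a few hosts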

This behavior also shows up during VM rescheduling. One potential way I can think of to get around this is to use host/cluster attributes together with SCHED_REQUIREMENTS on the guest. Something like adding EXTERNAL = "TRUE" to hosts with external bridges and adding SCHED_REQUIREMENTS = "EXTERNAL = TRUE" to VMs with external vnics, roughly as sketched below.
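A rough sketch of that workaround, assuming a made-up EXTERNAL attribute and an example host ID of 3:

    # attrs.txt contains the custom attribute: EXTERNAL = "TRUE"
    # --append merges it into the host template instead of replacing it
    onehost update 3 --append attrs.txt

    # in the template of any VM that needs an external vnic
    SCHED_REQUIREMENTS = "EXTERNAL = \"TRUE\""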

Does anybody else have a better idea to help ensure that VMs are deployed on hosts with heterogeneous network configurations? Or is this functionality addressed in the 5.8 release?

You need to set up clusters for this. Create a cluster that includes the hosts that are compatible with these two vnets.

see here: http://docs.opennebula.org/5.8/operation/host_cluster_management/cluster_guide.html
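In case it helps, the setup would look roughly like this (the cluster name, host name, and vnet names are placeholders):

    onecluster create dual-net              # cluster for hosts that have both bridges
    onecluster addhost dual-net host01      # repeat for every host with br0 and br1
    onecluster addvnet dual-net internal    # attach both vnets to the cluster
    onecluster addvnet dual-net external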

@ruben Are you saying that I would need to create a separate cluster and add every host that is configured with both internal and external vnets/bridge interfaces into that cluster, and VMs that need both internal and external vnics would be instantiated on that cluster only?

Thanks,
Dave

So, according to the linked cluster guide documentation, it sounds like my previous comment is correct: after creating the new cluster, users that require VMs with an internal and/or external vnic would have to create those VMs on that cluster.
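If that's the case, I assume the templates for those VMs could carry a requirement like the following (the cluster ID 100 is hypothetical) so the scheduler only considers hosts in the new cluster:

    SCHED_REQUIREMENTS = "CLUSTER_ID = 100"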