I feel like this question hasn’t quite been asked and answered, or at least not in a way that my tired brain has understood! I want to understand how VMs in a single vnet/subnet can communicate when they are on different hosts, either in the simpler single physical (3rd-party) DC scenario or across different DCs regardless of provider or geographic location. NB if the answers are very different I’ll happily split my question.
Right now the setup in test has a dedicated Sunstone machine and 3 hosts in a single cluster. All 4 machines are in the same (OVH) DC. Creating a VM happily deploys it to any host — but the VMs can then only communicate with VMs on the same host, even though they are on the same vnet. This is much the same as if I had just brought up 3 KVM hosts and configured them with identical subnets, but the implication of Sunstone deploying the VMs to any host is that they should be able to communicate with each other as if on a single L2 network.
So the specific questions are:
(1) should VMs on single vnet/subnet but on different hosts be able to communicate with each other?
(2a) if yes, how is that supposed to work? I understand the answer may be very different in the 2 scenarios, happy to focus on the simpler scenario.
(2b) what have I missed in setting up the environment?
All input welcome!
VMs attached to the same Virtual Network should be able to communicate across hosts. OpenNebula’s Virtual Networks define how VM NICs are connected (bridges, VLANs, overlays, etc.), but the actual connectivity between hosts depends on the underlying physical or overlay network being configured consistently on all nodes.
If VMs on the same host can talk, but VMs on different hosts can’t, that usually means the hosts aren’t joined in a common L2 domain for that network — e.g. the Linux bridge or VLAN isn’t trunked or present on all hosts. OpenNebula itself doesn’t magically extend L2 across hosts; it configures the bridges/drivers and relies on the physical network to carry the traffic.
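As an illustration of what "configured consistently on all nodes" means in practice: with the 802.1Q driver, OpenNebula itself creates the VLAN interface and bridge on each host, but the switch ports between the hosts still have to trunk that VLAN. A minimal Virtual Network template might look something like this (all names, VLAN IDs and addresses here are made up for illustration):

```
NAME    = "internal-net"
VN_MAD  = "802.1Q"
PHYDEV  = "eth1"      # the NIC on every host that carries inter-host traffic
VLAN_ID = "50"        # must be trunked on the physical switch ports between hosts
BRIDGE  = "br50"      # created by OpenNebula on each host as VMs are deployed

AR = [
  TYPE = "IP4",
  IP   = "10.0.50.10",
  SIZE = "100"
]
```

The key point is that PHYDEV must name an interface that actually exists, with the same name, on every host in the cluster, and that the network behind it (trunked switch ports, or in OVH's case the vrack) actually carries the tagged traffic between hosts.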
I think I was expecting some magic and it had been niggling at me for weeks! Which is odd because I’ve been making networks and Internetworks for a solid couple of decades so should know it might be an art but it’s not magic…
Basically we had it set up so OpenNebula was defining the bridges etc., but as the hosts were assorted hardware it wasn’t consistently creating the bridges on the right interfaces (the OVH vrack has one interface for inter-server networking and one for the host’s primary IP). So we’ve gone back to the hosts, named the interfaces consistently, added consistently named VLANs at the host level to separate traffic on the vrack interface, and then used those named devices as the “PHYDEV” in the vnet config. So far so good; will deploy another host to test the approach.
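In case it helps anyone doing something similar, the host-side pinning can be done with systemd-networkd — a sketch of the idea, with the interface name, MAC address and VLAN ID all invented for illustration:

```
# /etc/systemd/network/10-vrack.link
# Pin a stable, identical name for the vrack NIC on every host,
# matched by that host's MAC address (assorted hardware, so the
# kernel's default enpXsY names differed between hosts).
[Match]
MACAddress=aa:bb:cc:dd:ee:ff

[Link]
Name=vrack0

# /etc/systemd/network/20-vrack0-50.netdev
# A host-level VLAN on top of the vrack NIC to separate traffic;
# this named device is then what the vnet's PHYDEV points at.
[NetDev]
Name=vrack0.50
Kind=vlan

[VLAN]
Id=50
```

With the VLAN already tagged at the host level like this, the vnet would use a driver that doesn’t add a second tag on top (e.g. the plain bridge driver with `PHYDEV = "vrack0.50"`), rather than 802.1Q tagging the traffic again.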