LXDoNe managing an Azure LXD host?


Has anybody ever tried to set up an Azure VM as an LXD node controlled by LXDoNe?

I’m especially interested in the vNet/LXD/container network setup: how to make a container created by OpenNebula able to talk to the Internet rather than being restricted to its parent LXD host…

So far I have it working to the point where I can create the container on the Azure VM, but Azure does not allow the container to talk to anything other than the LXD host or other containers on the same host.



Maybe you’ll have to mask the container with the host’s network identity. I ran into some issues with virtual instance networking in an AWS setup. You can read the networking sections in https://opennebula.org/building-an-opennebula-private-cloud-on-aws-bare-metal/. The main problem for me was the filtering done by AWS networking at layer 2.

This is not related to LXDoNe; it will happen with every virtual instance you deploy, because each one has its own IP address and MAC address, separate from the host’s, and AWS/Azure are not aware of them.
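A minimal sketch of the “mask the container with the host identity” idea, assuming the containers sit on a bridge subnet 10.0.3.0/24 and the host’s uplink interface is eth0 (both placeholder values, not taken from this thread):

```shell
# Rewrite the source address of outbound container traffic to the
# host's own address, so the cloud fabric only ever sees the host's
# known IP/MAC pair (10.0.3.0/24 and eth0 are placeholders).
sudo iptables -t nat -A POSTROUTING -s 10.0.3.0/24 -o eth0 -j MASQUERADE

# Allow the host to forward traffic between the bridge and the uplink.
sudo sysctl -w net.ipv4.ip_forward=1
```

Return traffic is handled by connection tracking, so no matching rule is needed for replies.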

Hi Daniel,

A bit of good news about this: our containers created on Azure can indeed talk to our enterprise network and the Internet! (I haven’t yet looked into making them accessible from our enterprise side, though.)

What I did (in a nutshell):

  • Followed LXDoNe’s configuration guide to the letter.
    => With two exceptions:
    * I did not remove “eth0” from LXD’s profile.
    * I ran “lxd init” interactively, to make sure NAT would be enabled.
  • After running “lxd init”, I collected the information about the internal network LXD automatically created for the containers.
  • On OpenNebula, I created a bridged Virtual Network with LXD’s network information and set “lxdbr0” as the bridge interface.
  • One important/unique point, though: our enterprise has a VPN to Azure, and the LXD host/node’s IP address is inside that VPN.
  • In this case, NAT is our best friend. (Azure blocks pure bridged traffic even when it goes through our VPN.)
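The steps above can be sketched roughly as follows; the subnet and template values are hypothetical examples (your own “lxd init” run will print its own bridge details):

```shell
# Initialize LXD interactively and answer "yes" when asked about
# NAT for the new bridge (this is the interactive run mentioned above).
lxd init

# Inspect the bridge LXD created, to collect the subnet and gateway
# needed for the OpenNebula Virtual Network definition.
lxc network show lxdbr0

# Example OpenNebula Virtual Network template using lxdbr0 as the
# bridge (all addresses are placeholders taken from the output above).
cat > lxdbr0-vnet.tmpl <<'EOF'
NAME            = "lxdbr0-net"
VN_MAD          = "bridge"
BRIDGE          = "lxdbr0"
NETWORK_ADDRESS = "10.158.175.0"
NETWORK_MASK    = "255.255.255.0"
GATEWAY         = "10.158.175.1"
AR = [ TYPE = "IP4", IP = "10.158.175.2", SIZE = "250" ]
EOF
onevnet create lxdbr0-vnet.tmpl
```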

This approach allows even nested containers to talk to the Internet or our internal network.

I think I’ll just have to set some static routes to enable our internal hosts to reach the containers. I’ll post the results later.
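A static route like the following, on an internal router or host, is the kind of thing meant here; it assumes the containers live on 10.158.175.0/24 behind an LXD host reachable at 10.20.0.5 over the VPN (both addresses are hypothetical):

```shell
# Send traffic for the container subnet via the LXD host's VPN address.
# The LXD host must have IP forwarding enabled for this to work.
sudo ip route add 10.158.175.0/24 via 10.20.0.5
```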


Nice to know you made such progress. Are you using several LXD nodes? That is when it gets really tricky. You can make it more interesting by deploying OpenNebula inside a container and making that container able to talk to all of the nodes. Because of the traffic filtering done by the cloud provider, I used VXLAN tunnels; bridges worked only for the single-node scenario.
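For the multi-node case, a point-to-point VXLAN tunnel between two LXD nodes can be sketched like this (the interface names, VNI, and peer address are hypothetical, not from this thread):

```shell
# On node A, with node B reachable at 10.0.1.11: create a VXLAN
# interface that encapsulates L2 frames in UDP, so the cloud fabric
# only ever sees host-to-host unicast traffic.
sudo ip link add vxlan100 type vxlan id 100 dev eth0 \
    remote 10.0.1.11 dstport 4789
sudo ip link set vxlan100 up

# Attach the tunnel endpoint to the container bridge, so containers
# on both nodes share one L2 segment over the tunnel.
sudo ip link set vxlan100 master lxdbr0
```

A mirrored configuration (with `remote` pointing back at node A) is needed on node B.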

In order to make containers publicly available, you’ll have to play with the DNAT rules on the LXD nodes, exploiting the public IPs assigned to the nodes; again, there are examples in https://opennebula.org/building-an-opennebula-private-cloud-on-aws-bare-metal/
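A DNAT rule of the kind referred to here might look like the following, assuming the node’s public-facing interface is eth0 and a container runs a web server at 10.158.175.10 (both hypothetical values):

```shell
# Forward inbound TCP port 80 arriving on the node's public interface
# to the container's private address.
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
    -j DNAT --to-destination 10.158.175.10:80
```

The cloud provider’s firewall (an Azure NSG or AWS security group) must also allow the forwarded port on the node’s public IP.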