New OpenNebula Community Docker Images Add-On

Greetings All!

We have been using OpenNebula for a while; we graduated to building our own RPMs and are now migrating to Docker images to run OpenNebula in our product.
However, we feel it would be good to share the code to build these images with the community, and perhaps get some images uploaded to Docker Hub as well.

Right now we have successfully built images for Sunstone, the node, and other core components. We have talked to the OpenNebula team, and they think it’s a good idea to put this out into the world as an add-on.

The images currently run using Docker Compose, but we use Kubernetes for most of our infrastructure, so we will be looking to include Helm charts for these images in the future.
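To give a feel for what running the core services under Compose looks like, here is a minimal sketch. The image names, tags, and volume layout are assumptions for illustration only, not the add-on’s actual published artifacts; the ports are OpenNebula’s defaults (2633 for the oned XML-RPC API, 9869 for Sunstone):

```yaml
# Hypothetical sketch -- image names/tags are placeholders, not real artifacts.
version: "3"
services:
  oned:
    image: example/opennebula-oned:5.12
    ports:
      - "2633:2633"    # oned XML-RPC API (OpenNebula default)
    volumes:
      - one-db:/var/lib/one
  sunstone:
    image: example/opennebula-sunstone:5.12
    ports:
      - "9869:9869"    # Sunstone web UI (OpenNebula default)
    depends_on:
      - oned
volumes:
  one-db:
```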

Please give us a shout if you’re interested and want to be involved. Be warned that testing and using these images is fairly involved, so this is not something to try if you haven’t used Docker before.

As far as architecture goes, we have tested using our own setup: a 3-node cluster with a VIP for access to the OpenNebula services, using Pacemaker to handle service start/stop based on cluster availability. We use GlusterFS over a set of disks to provide replicated storage, and Heketi to provide a RESTful API for configuring the storage volumes. Open vSwitch handles the L2 networking.
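As a rough illustration of the cluster pieces described above, the VIP and the replicated storage might be set up along these lines; the resource names, addresses, and brick paths are made up for the example, so adapt them to your environment:

```shell
# Floating VIP for the OpenNebula services, managed by Pacemaker
# (hypothetical name/IP):
pcs resource create one-vip ocf:heartbeat:IPaddr2 \
    ip=192.0.2.10 cidr_netmask=24 op monitor interval=10s

# Replicated GlusterFS volume across the three nodes
# (hypothetical volume name and brick paths):
gluster volume create one-images replica 3 \
    node1:/bricks/one node2:/bricks/one node3:/bricks/one
gluster volume start one-images
```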

All the best
Stephen

6 Likes

Hi, everything is already made :slight_smile:

Let’s cooperate then!

We’ve been running OpenNebula inside Kubernetes for more than a year already, and it is working fine in production:

Current scheme:

Helm chart is also available on Helm Hub:

https://hub.helm.sh/charts/kvaps/opennebula
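Installing from the chart is the usual Helm workflow. This sketch assumes the repo URL and chart name used by the kvaps charts repository at the time of writing, so check the chart’s README for the current values before running it:

```shell
# Assumed repo URL and chart/release names -- verify against the chart docs.
helm repo add kvaps https://kvaps.github.io/charts
helm install opennebula kvaps/opennebula --namespace opennebula
```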

1 Like

Hey Kvaps - of course we saw your code when we started this, and it inspired us. We didn’t realise that you were still maintaining this for the 5.12 releases. On our side we’re building everything from source (we have an internal mandate to do so), and that was one of our main drivers, but we are also looking to include:

  • LLDPD in a container, to show which switch port each server is connected to
  • OVSDB and client binaries in containers, to manage virtual switching
  • GlusterFS and Heketi components that can integrate for storage services (in a different repo)
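For the first two items, the idea is that the usual host-side CLIs become available inside the containers; for example (illustrative invocations only, the exact packaging is still to be decided):

```shell
# Show which switch port each NIC is patched into, via LLDP:
lldpcli show neighbors summary

# Inspect the Open vSwitch bridges managed through OVSDB:
ovs-vsctl show
```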
1 Like

I have not had time to prepare the 5.12 upgrade yet, but it is planned.
All the images are built the same way, from source code, as well.
I agree that the closed migrators and packages repository might delay the new release, but the current 5.10.8 is working quite well for now.

kvaps, I have been very inspired by your great work here. I had decided to do the initial work in CentOS-based VMs because that’s what we were using elsewhere. We have since graduated to an “allinone” container that uses different commands to control the entrypoint. The 5.12 version can be found at the following location. As my colleague stated, we are also working on other containers, including Prometheus, Grafana, InfluxDB, etc. We’ll get those into a separate repo as we feel comfortable with the functionality. I look forward to collaborating with you here.
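An “all-in-one” entrypoint of this kind is often just a dispatch on the container’s first argument. Here is a minimal sketch; the service names and binary paths are assumptions for illustration, not necessarily what the actual image uses:

```shell
#!/bin/sh
# Sketch of an all-in-one entrypoint: the first argument selects which
# OpenNebula service the container runs. Names/paths are assumptions.
run_service() {
  case "$1" in
    oned)      echo "exec /usr/bin/oned -f" ;;          # core daemon
    sunstone)  echo "exec /usr/bin/sunstone-server" ;;  # web UI
    scheduler) echo "exec /usr/bin/mm_sched" ;;         # VM scheduler
    *)         echo "unknown service: $1" >&2; return 1 ;;
  esac
}
```

In a real entrypoint the `echo` would be a plain `exec` of the service binary; it is echoed here so the dispatch logic can be shown (and tested) in isolation.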

2 Likes

Sure, let’s move the development to a neutral place; e.g. we can have a common project like github.com/opennebula/addon-kube-opennebula, or two separate ones, e.g. github.com/addon-docker-images and github.com/opennebula/addon-helm-chart.

I’m not sure if you want to preserve compatibility, but the kube-opennebula project might be a good starting point.

Another thing is that you prefer RPM-based distros, while I am DEB-based.
This does not matter much for Docker, but I still think we should choose a single distro for better consolidation of the project.
I’m fine with either of them if it works.

The second thing we have to think about is maintenance. We can choose between various management strategies.

These range from full independence, where all development goes via GitHub PRs and both sides must approve them before they are merged, to separate zones of influence, e.g. you maintain the Docker images while I maintain the Helm chart.

What is your interest in this project, and are you interested in providing support for future releases?


One problematic part for me is the closing of the migrators in new versions; one of the project’s goals was to provide painless updates for kube-opennebula users.

An update from 5.10 to the current 5.12 release can be performed with the migrators removed in this commit; they were published under the Apache 2.0 license, so they can still be used:

If the next release ships with closed migrators, I will have no interest in supporting the project in the future.

1 Like

Kvaps,

All great points/considerations. I am sure we will iron this out over time. I will need to get more input from our team here; we are still in something of a discovery phase. We may also have different use cases. In our case we are using OpenNebula to manage KVM virtual machines on a 3-node cluster in a hyperconverged manner. We are using GlusterFS for storage in replica 3, with Heketi providing software-defined storage, and Open vSwitch for our networking at the moment. We are running k8s on top of virtual machines hosted in the OpenNebula cluster. We have a fully automated installation process to bare metal via Ansible. Our goal is to move from an OpenNebula bare-metal installation to a container-based installation, which we believe will ease software updates. I assume this is the same reason you have placed OpenNebula into containers as well? The whole system is a controller for a VMware-based cloud (I think of it as the undercloud in OpenStack terms).

I personally don’t have any issue using Debian/Ubuntu-based containers. We went with CentOS because the rest of our containers and the underlying OS are CentOS, and our automation process is built around CentOS/RPMs. We will have to get that decision from the rest of our team, but I don’t think it will be an issue as long as we can prove it works.

https://www.forbes.com/sites/moorinsights/2019/11/15/dell-emc-powerone-is-the-future-of-autonomous-infrastructure-here/#1e908cb54bf3

Can you provide us with more insight on your use case?

1 Like

Well, I have to say that all this sounds amazing! :nerd_face: @howels & @wtownse, thank you again for getting the ball rolling, what you guys are doing at Dell EMC with OpenNebula and PowerOne looks incredibly interesting. Looking forward to seeing where all this brings us!

We created this add-on repo a few days ago, @kvaps: https://github.com/OpenNebula/addon-opennebula-containers You should have write access already.

Once the basic info is ready, we can add the initiative to the Add-on Catalog and launch a broader call for contributors, etc.

Well, the policy we have in place now implies that the CE migration packages for versions prior to the latest one are publicly available, so in practical terms it only introduces a delay to the whole process.

@amarti, would you consider open-sourcing the migrators for DB updates? We are also looking to build from source, and without these DB migrators an upgrade appears impossible using source-built binaries.
When we spoke to you and the team, you said that access to build packages and this migrator code would be possible, but now it seems that is not the case. Can you please clarify? I thought we had an agreement, when we agreed to contribute code, that you would allow access to the migrators and build code for this project.

Hi @howels! Sorry for the misunderstanding, I thought the path forward was clear. Yes, the CE migrators for 5.12 will be open-sourced under the Apache License 2.0 after the next major/minor release of OpenNebula. In the meantime, as a company that’s planning to actively contribute to the project, you guys can get access to the CE migrators through https://opennebula.io/get-migration/ if you need them for internal use or testing purposes :+1: