Since it will now be possible in OpenNebula 4.14 to import VMs (http://dev.opennebula.org/issues/3292),
it would be great to extend the official docs here:
Without HA:
1.) Install everything that’s necessary for the KVM hypervisor on an Ubuntu/CentOS 7 host.
2.) Fire up a VM and install OpenNebula in it.
3.) Import this VM into the running OpenNebula instance (see the sketch below).
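I haven’t tried the exact CLI for this yet, so take it as a sketch only: as far as I can tell the wild (unmanaged) VM shows up in `onehost show` and on the host’s Wilds tab in Sunstone, and the import would look roughly like this (host ID and VM name are placeholders, and the `importvm` subcommand may not be present in every release, so check `onehost --help` for yours):

```
# The wild VM running the frontend appears under WILD VIRTUAL MACHINES
onehost show 0

# Import it into OpenNebula (placeholder host ID and VM name;
# alternatively use the Wilds tab of the host in Sunstone)
onehost importvm 0 one-frontend
```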
With HA:
1.) Install everything that’s necessary for the KVM hypervisor on two different Ubuntu/CentOS 7 hosts.
2.) Fire up the two VMs and install OpenNebula and MariaDB in them (see the config sketch after this list).
3.) Import these VMs into the running OpenNebula instance.
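For step 2, as far as I understand it the frontends would simply point at MariaDB through the regular mysql backend in /etc/one/oned.conf; a minimal sketch with placeholder server name and credentials:

```
# /etc/one/oned.conf -- DB server name and credentials are placeholders
DB = [ backend = "mysql",
       server  = "mariadb.example.org",
       port    = 0,
       user    = "oneadmin",
       passwd  = "oneadmin",
       db_name = "opennebula" ]
```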
Great news; I’m sure that, especially for HA installations, this could become the preferred installation method from now on.
Once https://github.com/OpenNebula/addon-lxcone or a similar approach to container virtualization makes it into the core, even the actual virtualization layer would no longer be needed.
What is the benefit of this? I have OpenNebula in a VM hosted on the cluster, but I manage that VM with Pacemaker. I also plan to do HA with two VMs and configure Pacemaker placement so that each VM runs on a different host.
When I need to migrate the frontend to another host, I do it with Pacemaker. Also, when I need to put a cluster node into standby, Pacemaker automatically migrates the running OpenNebula VM to another suitable host and only then stops the other resources like mounted storage, DLM, etc.
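For context, a minimal sketch of how that could be expressed with pcs, assuming a libvirt-defined frontend VM; resource and node names (frontend_vm1, frontend_vm2, storage-clone, node1) and the config path are placeholders, not my exact setup:

```
# Frontend VM as a live-migratable libvirt resource
pcs resource create frontend_vm1 ocf:heartbeat:VirtualDomain \
    hypervisor="qemu:///system" config="/etc/pacemaker/one-frontend1.xml" \
    migration_transport=ssh meta allow-migrate=true

# Placement: keep the two frontend VMs on different hosts
pcs constraint colocation add frontend_vm2 with frontend_vm1 -INFINITY

# Start the VM only after the shared storage / DLM stack is up on that node
pcs constraint order start storage-clone then frontend_vm1

# Maintenance: standby drains the node and live-migrates the VM first
# (older pcs versions use `pcs cluster standby node1`)
pcs node standby node1
```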
I would like to do something similar.
What I would like to achieve is a fully virtualized cloud infrastructure.
If I understood properly, you manually created two VMs for the opennebula-frontend on your “compute” nodes.
These two VMs are managed by Pacemaker for scheduling and migrations.
After that, from the newly installed OpenNebula, you added the same “compute” nodes as hosts to be managed by OpenNebula.
In this case you share the same hosts between the OpenNebula VMs and the VMs created/managed by OpenNebula itself.
Right?
Any drawbacks? I would like to avoid using dedicated hosts just for the OpenNebula frontend.
Hi, you are right. I don’t see any drawbacks; it is working well. You can also manage resources inside the VMs with pacemaker_remote.
In OpenNebula itself, I can see the VMs on the Wilds tab in the host detail. That’s all.
When I need to do maintenance, I just put the node into standby like before. Pacemaker automatically migrates the running VMs to another node.
I think that is better than having OpenNebula self-manage its own VMs.
Actually, I have just one OpenNebula VM, but I am planning to build a MariaDB cluster from three VMs and then add a second VM for OpenNebula. All managed by Pacemaker and pacemaker_remote.
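In case it helps, the pacemaker_remote guest-node part looks roughly like this; resource and node names are placeholders again, and pacemaker_remote has to be installed inside the guest with port 3121 reachable from the cluster nodes:

```
# Turn the frontend VM (VirtualDomain resource) into a Pacemaker guest node
pcs resource update frontend_vm meta remote-node=one-frontend

# Resources such as MariaDB can then be placed inside the guest node
pcs constraint location mariadb prefers one-frontend
```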