Running OpenNebula as a VM and importing it into itself afterwards

FIRST: GREAT WORK ON THE 4.14 BRANCH SO FAR!!!


In the past, several users wanted to import existing VMs.
http://lists.opennebula.org/pipermail/users-opennebula.org/2011-August/016244.html
Back then it was a pain; I never managed it myself.

Since it will now be possible to import VMs in OpenNebula 4.14,
http://dev.opennebula.org/issues/3292
it would be great to extend the official docs here:


Without HA:

1.) Install everything necessary for the KVM hypervisor on an Ubuntu/CentOS 7 host.

2.) Fire up a VM and install OpenNebula in it.

3.) Import this VM into the running OpenNebula instance.
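On CentOS 7 the three steps above could look roughly like this (host and VM names are examples, and the exact `onehost` flags may differ between versions; `importvm` is the feature added in 4.14):

```shell
# 1.) Prepare the KVM node (CentOS 7; on Ubuntu use apt and the matching packages)
yum install -y opennebula-node-kvm
systemctl enable --now libvirtd

# 2.) Create the frontend VM with virsh/virt-install and install the
#     OpenNebula server packages inside it (not shown here)

# 3.) From inside the frontend VM, register the physical host, then
#     import the frontend VM itself as a "wild" VM
onehost create kvm-host01 --im kvm --vm kvm --net dummy
onehost show kvm-host01        # the frontend VM should appear under WILD VMS
onehost importvm 0 one-frontend
```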


With HA:

1.) Install everything necessary for the KVM hypervisor on two different Ubuntu/CentOS 7 hosts.

2.) Fire up the two VMs and install OpenNebula and MariaDB.

3.) Import these VMs into the running OpenNebula instance.

4.) Set up the HA configuration.
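For step 4, the 4.x HA guides rely on corosync/pacemaker. A hypothetical sketch with pcs (the VIP address and resource names are placeholders, not taken from the official guide): a floating IP follows the active frontend, and the opennebula daemon is only started where that IP lives.

```shell
# Floating IP that follows the active frontend
pcs resource create one_vip ocf:heartbeat:IPaddr2 \
    ip=192.168.0.100 cidr_netmask=24 op monitor interval=20s

# The opennebula daemon itself, colocated with the VIP
pcs resource create opennebula systemd:opennebula
pcs constraint colocation add opennebula with one_vip INFINITY
pcs constraint order one_vip then opennebula
```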

Thanks,
Klaus

Kk Klose forum@opennebula.org writes:

[…]

What is the point of managing the OpenNebula frontend VM with itself?

When the VM has a problem you can do nothing: there is no CLI, since the
daemon may be down, and no hook for HA.

Regards.

Daniel Dehennin
Retrieve my GPG key: gpg --recv-keys 0xCC1E9E5B7A6FE2DF
Fingerprint: 3E69 014E 5C23 50E8 9ED6 2AAD CC1E 9E5B 7A6F E2DF

a.)
This wouldn’t allow you, for instance, to live-migrate the frontend to another host.

b.)
What “worst case” could happen doing it this way?
http://dev.opennebula.org/issues/3292#note-1
You can spin up another frontend at any time as long as the database can be reached, but that is also a problem with Sunstone running directly on the host.

Hi,

Thanks for the suggestion, we’ll consider it.
I opened a ticket as a reminder to do it before the final release: http://dev.opennebula.org/issues/3908

@cmartin

Great news. I’m sure this could become the preferred installation method from now on, especially for HA installations.
When https://github.com/OpenNebula/addon-lxcone or a similar approach to container virtualization makes it into the core, even the actual virtualization layer is no longer needed.

That’s OpenNebula 5.x for me.

Haven’t used


in production yet.

Is it ready?

Six months ago:
“Target version changed from Release 4.14 to Release 5.0”

Is this still valid?
Thanks

What is the benefit of this? I have OpenNebula in a VM hosted on the cluster, but I manage that VM with Pacemaker. I also plan to do HA with two VMs and configure Pacemaker placement so that each VM runs on a different host.

When I need to migrate the frontend to another host, I do it via Pacemaker. Also, when I need to put a cluster node into standby, Pacemaker automatically migrates the running OpenNebula VM to another suitable host and only then stops the other resources like mounted storage, DLM, etc.
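A hypothetical sketch of that setup, managing the frontend VM with the `ocf:heartbeat:VirtualDomain` resource agent so Pacemaker can live-migrate it (the domain XML path, resource name, and second-frontend constraint are placeholders for the poster's described plan):

```shell
# Manage the libvirt-defined frontend VM as a cluster resource,
# with live migration enabled
pcs resource create one_frontend ocf:heartbeat:VirtualDomain \
    config=/etc/libvirt/qemu/one-frontend.xml \
    hypervisor=qemu:///system migration_transport=ssh \
    meta allow-migrate=true \
    op monitor interval=30s

# For the planned two-frontend layout: keep the VMs on different hosts
# pcs constraint colocation add one_frontend2 with one_frontend -INFINITY
```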

I would like to do something similar.
What I would like to achieve is a fully virtualized cloud infrastructure.

If I understood properly, you manually created two VMs for the opennebula-frontend on your “compute” nodes.
These two VMs are managed by Pacemaker for scheduling and migrations.

After that, from the newly installed OpenNebula you added the same “compute” nodes as hosts to be managed by OpenNebula.

In this case you share the same hosts between the OpenNebula VMs and the VMs created/managed by OpenNebula itself.

Right?
Any drawbacks? I would like to avoid using dedicated hosts only for the OpenNebula frontend.

Hi, you are right. I don’t see any drawbacks; it is working well. You can also manage resources inside the VMs with pacemaker_remote.

In OpenNebula itself, I can see the VMs on the Wild tab in the host detail. That’s all.

When I need to do maintenance, I just put the node into standby like before. Pacemaker automatically migrates the running VM to another node.
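The maintenance workflow above, sketched with pcs (the node name is a placeholder; on newer pcs versions the command is `pcs node standby`):

```shell
# Evacuate the node: Pacemaker live-migrates the frontend VM away
pcs cluster standby kvm-host01

# ... do the maintenance, then bring the node back into the cluster
pcs cluster unstandby kvm-host01
```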

I think that is better than self-managing the OpenNebula VMs with OpenNebula.

Actually, I have just one OpenNebula VM, but I am planning to build a MariaDB cluster from three VMs and then add a second VM for OpenNebula. All managed by Pacemaker and pacemaker_remote.