Not understanding the Best Practice design/architecture for Front End hosts

I’m trying to set up OpenNebula from scratch, and I’m having a tough time understanding the Best Practice design/architecture when it comes to the Front End hosts. I have thoroughly reviewed all the documentation. I want the Front End hosts to be HA, so my plan was to follow the HA documentation.
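
For context, the HA setup I was planning to follow is the Raft-based one from the HA guide; roughly speaking it boils down to enrolling each front end as a zone server, something like this (hostnames and IPs are only placeholders):

```bash
# Rough sketch of the Raft HA enrollment from the HA guide
# (hostnames/IPs are placeholders, not my real environment).
# On the first (leader) front end:
onezone server-add 0 --name frontend-1 --rpc http://10.0.0.11:2633/RPC2

# After syncing the database to the other front ends, add them as followers:
onezone server-add 0 --name frontend-2 --rpc http://10.0.0.12:2633/RPC2
onezone server-add 0 --name frontend-3 --rpc http://10.0.0.13:2633/RPC2

# Check which server is the Raft leader:
onezone show 0
```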

What I’m not understanding is: where should those Front End hosts live? We have very large hardware here, so it doesn’t make sense to waste resources and install the Front End stuff on bare metal… so I was planning on setting them up as VMs.

It seems like if you want to go that route, you have to already have another VM infrastructure set up, outside of OpenNebula, in order to get started. Is that correct?

I was thinking that I could have an HA cluster of Front End VMs live on the actual OpenNebula hosts/cluster itself… but it seems you can’t really do that, because you run into a chicken-and-egg situation.

What is the recommended architecture in this situation?


Hello @ktrumbull

If you don’t want to use a whole server as the frontend, you can use libvirt to virtualize your frontends. For example, you can create three virtual machines with libvirt, each one on a different server (so you get HA), and then use the physical servers as hosts in OpenNebula.
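
Just as a sketch of what I mean (names, sizes, bridge and image paths are only examples, adapt them to your distro and storage):

```bash
# Example only: create one front-end VM with plain libvirt on each of the
# three physical servers (name, sizes, bridge and ISO path are made up).
virt-install \
  --name one-frontend-1 \
  --memory 8192 --vcpus 4 \
  --disk path=/var/lib/libvirt/images/one-frontend-1.qcow2,size=60 \
  --network bridge=br0 \
  --cdrom /srv/isos/ubuntu-22.04-live-server-amd64.iso \
  --os-variant ubuntu22.04 \
  --graphics vnc
```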

I did consider that… doesn’t that sort of ‘hide’ those native KVM VMs underneath OpenNebula, though? I’ve read some things about Importing Wilds… not 100% sure what implications that has yet… but if you’re going to import those ‘wild’ VMs, wouldn’t it make sense to spin them up natively inside of OpenNebula from the start?
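
From what I can tell, those libvirt-managed front-end VMs would simply show up as wilds on the host, something like this (the host ID is just an example):

```bash
# List the hosts OpenNebula knows about, then inspect one of them.
onehost list
onehost show 0
# The output should contain a "WILD VIRTUAL MACHINES" section listing VMs
# that run on the hypervisor but aren't managed by OpenNebula; importing
# them is done from the host's Wilds tab in Sunstone, as far as I understand.
```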

I’d be very curious to understand what most people do. Do most have a dedicated box for running a single frontend node? If so, how do you handle hardware redundancy? How would you handle the downtime of rebuilding or replacing that physical box should it die (or a component fail)?

You are going to have a chicken-and-egg problem then.

I am running the frontend in a VM on one of the KVM hosts, and the VM’s domain XML is defined on several more hosts for fault tolerance. A simple service, which is an extension/addon to StorPool, is used to manage the VM and its root disk, which is on StorPool too. During planned host maintenance the VM is live-migrated to another host. In case of host failure, the service takes care of fencing the “old” VM instance and starting the VM on one of the predefined healthy hosts. The VM disk is periodically backed up on StorPool as volume snapshots, and in addition a script inside the VM backs up the database.
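
The in-VM database backup is nothing fancy; a minimal sketch of the idea (paths and retention are invented for the example, the StorPool volume snapshots are handled outside the VM):

```bash
#!/bin/bash
# Minimal sketch of the in-VM backup cron job (paths/retention are examples).
set -e
BACKUP_DIR=/var/lib/one/backups
STAMP=$(date +%F_%H%M)
mkdir -p "$BACKUP_DIR"

# Dump the OpenNebula database; for a MySQL backend pass the usual
# onedb connection options (-u/-p/-d) as needed.
onedb backup "$BACKUP_DIR/onedb-$STAMP.sql"

# Keep a copy of the configuration as well.
tar czf "$BACKUP_DIR/etc-one-$STAMP.tar.gz" /etc/one

# Drop backups older than two weeks.
find "$BACKUP_DIR" -type f -mtime +14 -delete
```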

Hope this helps.

Best Regards,
Anton Todorov

Yes, as was already mentioned, just bring up your ONe-Frontend-VMs with virsh/virt-manager on different (KVM) hosts of yours, and if you want to use MySQL/MariaDB as the backend, do the same for your DB nodes :wink: Do consider where to store these VM images as well: if you have a shared storage system, you could use it, as long as it doesn’t depend on OpenNebula-managed VMs. We actually run the Frontend and Galera cluster VMs on the hosts’ local SSDs, autostarted when the host boots.
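
In practice that part is just a couple of virsh commands per host, roughly (domain name and file path are examples):

```bash
# Sketch only (domain name and path are examples): export the front-end's
# definition so it can be copied to the other hosts, and make sure libvirt
# starts it automatically when the host boots.
virsh dumpxml one-frontend > /root/one-frontend.xml
virsh autostart one-frontend
virsh list --all --autostart   # verify which domains will be autostarted
```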

That’s exactly how I set up my front end: on a libvirt VM. It’s a simple setup. Most management tasks on the front end are done via the Sunstone web GUI or ssh rather than through libvirt.

@ktrumbull Hi. I very much agree with you. I am also looking into the feasibility of this architecture. How has the deployment held up? Did it work as intended? Hope to get your reply, thank you very much.