Balancing + failover for oned: can we run multiple instances at the same time?

We compared virtualization management software and decided to start migrating to OpenNebula.
Right now I am testing the balancing/failover capabilities of the system.

My question/concern: must `oned` run as a single copy? That is the only scenario the balancing/failover guides seem to cover.
Could we simplify things by running a few oned instances and balancing requests across them (sticking each user to one oned instance)?

thank you :wink:

Bump? This feature feels important.

Sorry, but that guide was the first thing I checked.

It describes an active/passive frontend setup where you start and stop the OpenNebula services using Pacemaker + Corosync when the currently running instance goes down.
That increases complexity, and the failover behavior would be different from what I want to get.

What I want to know is whether it is possible to keep all frontend instances up and connected to MySQL at the same time (without it breaking things). I can stick each user to one of the frontend instances.

For example: four hosts in different physical locations (different racks, datacenters, or datacenter floors), each running Sunstone + oned connected to a Galera cluster, with nginx/Tengine (or anything else that can stick a user session to one upstream) in front of them.
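Roughly this kind of nginx sketch is what I have in mind (the `fe1`–`fe4` hostnames are placeholders, 9869 is the default Sunstone port, and `ip_hash` is just one way to pin a client to a single upstream):

```nginx
# Sketch only: balance Sunstone across four frontends, sticking each
# client IP to one backend so a user keeps hitting the same instance.
upstream sunstone {
    ip_hash;                      # hash on client IP -> sticky upstream
    server fe1.example.com:9869;  # placeholder frontend hostnames
    server fe2.example.com:9869;
    server fe3.example.com:9869;
    server fe4.example.com:9869;
}

server {
    listen 80;
    location / {
        proxy_pass http://sunstone;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```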

The point is to keep multiple oned instances up all the time. Is that possible without breaking things?

I think you need a Galera cluster + HAProxy.
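Something along these lines (a sketch with placeholder addresses; `balance source` is one way to keep a given client on one frontend):

```haproxy
# Sketch only; 10.0.0.1-4 are placeholder frontend addresses,
# 9869 is the default Sunstone port.
frontend sunstone_in
    bind *:9869
    mode http
    default_backend sunstone_nodes

backend sunstone_nodes
    mode http
    balance source        # hash on source IP -> client sticks to one node
    server fe1 10.0.0.1:9869 check
    server fe2 10.0.0.2:9869 check
    server fe3 10.0.0.3:9869 check
    server fe4 10.0.0.4:9869 check
```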

Sure, but would it be safe to have multiple oned instances running?
What happens when users end up on different oned instances because of the balancing?

How would conflicts be resolved when different oned instances try to create virtual machines or make other changes?
I tried this in a testing environment, and occasionally, when a new machine gets created on two or more oned instances at the same time, each instance creates it with the same ID/IP/etc., causing collisions. This is on the latest OpenNebula release (5.2.1).
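As an illustration of what I think is happening (my assumption, not oned's actual code): each instance computes the next free ID from its own read of the shared pool, so two instances that both read before either one writes will pick the same ID:

```python
# Sketch of a read-then-increment race between two frontends.
# This is an illustration of the suspected allocation pattern,
# not the real oned implementation.

def next_id(snapshot):
    """Compute the next VM ID from one node's view of the pool."""
    return max(snapshot, default=-1) + 1

pool = [0, 1, 2]          # VM IDs already in the shared database
node_a_view = list(pool)  # frontend A reads the pool ...
node_b_view = list(pool)  # ... and frontend B reads it before A writes

id_a = next_id(node_a_view)
id_b = next_id(node_b_view)

print(id_a, id_b)  # both compute 3 -> collision
```

The same pattern applies to IP leases or any other resource picked without a serialized allocation step.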

You’re right, I did not think about it …

I still want to know if we can make this work.