Hi all,
Can someone tell me how to use a VM for OpenNebula Controller?
Let’s assume a totally blank infrastructure, with some servers to be used as compute nodes in OpenNebula.
To use these servers, I need a working OpenNebula controller.
To set up a working OpenNebula controller, I need at least a dedicated server for it, or an existing hypervisor to place the controller on.
This is similar to the chicken-and-egg dilemma: I would like a fully virtualized cloud, but I can’t create the cloud without the controller, and I can’t create the controller without the cloud.
Is dedicated hardware really needed for the OpenNebula controller?
So, did you install a bare-metal KVM node, manually create a VM to be used as the OpenNebula controller, and then add the same KVM node to OpenNebula?
Is OpenNebula smart enough to account for the “wild VM’s” RAM/CPU/disk usage when placing new VMs?
Let me try to explain: if the whole node has 64GB RAM and the OpenNebula controller uses 2GB, is OpenNebula smart enough to only use the remaining 62GB for other VMs, or will it still try to allocate 64GB because it doesn’t detect “itself”?
Hi, not exactly: I had an existing hardware node for OpenNebula and migrated it to a VM, but it works just as you describe. You can also create a cluster of three nodes and run the OpenNebula controller VM managed by Pacemaker.
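A sketch of that bootstrap step, assuming a plain libvirt/KVM host; the VM name, sizes, ISO path, and bridge name are illustrative, not from this thread:

```shell
# On the bare-metal KVM node: hand-create the controller VM with
# plain libvirt, before the node is registered in OpenNebula.
# All names, sizes, and paths below are examples - adjust them.
virt-install \
  --name one-controller \
  --memory 2048 \
  --vcpus 2 \
  --disk size=20 \
  --cdrom /var/lib/libvirt/images/ubuntu-22.04-server.iso \
  --os-variant ubuntu22.04 \
  --network bridge=br0

# Once the front-end is installed inside this VM, register the same
# physical machine as a compute host (run on the front-end):
# onehost create kvm-node01 -i kvm -v kvm
```

From OpenNebula’s point of view the controller is then just a “wild” VM on that host.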
About resources: you can reserve RAM and CPU on the OpenNebula host (compute node) detail page. Also, OpenNebula itself is not CPU-demanding, and you should have a large swap on the compute node. For example, with 64GB RAM you should create at least 8GB of swap, or more if you want to overprovision…
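The same reservation can be done from the CLI by appending RESERVED_CPU and RESERVED_MEM to the host template, so the scheduler only sees what is really free. A sketch, assuming a host named kvm-node01 and a 2GB/2-core controller VM (the values are examples):

```shell
# RESERVED_MEM is in KB, RESERVED_CPU in percent (100 = one core).
# Reserving 2GB RAM and 2 cores for the controller VM:
cat > reserved.tmpl <<'EOF'
RESERVED_MEM = "2097152"
RESERVED_CPU = "200"
EOF

# Merge the attributes into the existing host template.
onehost update kvm-node01 reserved.tmpl --append
```

With this in place, a 64GB node is advertised to the scheduler as having 62GB available, which answers the “does it detect itself” question.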
If I understood properly, OpenNebula could be made HA simply by clustering the MySQL database.
In case of failover, the new MySQL node would be promoted to master and OpenNebula started on the surviving server.
If this is true, using ucarp would be easier than Corosync/Pacemaker.
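For the ucarp idea, a minimal sketch of floating a virtual IP between the two controller hosts; the interface, addresses, password, and script paths are all placeholders, and the up/down scripts (which would add or remove the VIP and start or stop OpenNebula) are assumed to exist:

```shell
# Run on each of the two hosts; ucarp elects a master for the VIP.
# --srcip is this host's real address, --addr the shared virtual IP.
ucarp --interface=eth0 --srcip=192.168.1.11 --vhid=1 \
      --pass=secret --addr=192.168.1.100 \
      --upscript=/usr/local/sbin/vip-up.sh \
      --downscript=/usr/local/sbin/vip-down.sh
```

Note that ucarp only moves the IP; promoting the MySQL replica and starting the OpenNebula services would have to happen in the upscript, which is exactly the fencing and ordering logic Corosync/Pacemaker give you for free.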