Hi all,
This is about OpenNebula (ONE) 5.2.1.
As far as I understand it, there's
- Sunstone, the management UI
- hosts/nodes, which run the VMs
and you can group a bunch of computers with various hardware together, which is then called a "cloud".
So I played around with OpenNebula and really liked it.
The fact of the matter is that I have 3 servers and I'd like to move them onto 1 big server.
And I have a small local server and a larger local server (on the local network).
On the small server I have stuff like named and dhcpd running.
nuc.luketic - a small Intel NUC, where I installed Sunstone. It's always on and perfect for that job.
stack.luketic (l1) - an OpenNebula node (which went from OpenStack to oVirt to Proxmox to ONE)
^ that's the local network (plus a workstation and a laptop, but they're not used for Nebula), behind an IPv4 NAT router, with a /56 IPv6 network as well.
Then there are 3 servers in a datacenter, all with public IPs:
r1 - a Gentoo box running PHP and Go projects
r2 - a CentOS 7 box used for webhosting, barely used (36G)
r3 - a high-traffic Arch Linux setup running only 1 domain
So, reading the docs:
http://docs.opennebula.org/5.2/deployment/node_installation/kvm_node_installation.html#step-4-configure-passwordless-ssh
You should verify that connecting from the Front-end, as user oneadmin, to the nodes and the Front-end itself, and from the nodes to the Front-end, does not ask password:
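The check the docs describe could be sketched roughly like this (hostnames taken from my setup above; BatchMode is an assumption on my part so ssh errors out instead of prompting):

```shell
# Run on the Front-end as oneadmin. With BatchMode=yes, ssh fails
# instead of asking for a password, so any missing key shows up as an error.
for h in nuc.luketic stack.luketic; do
  ssh -o BatchMode=yes -o ConnectTimeout=5 "$h" hostname \
    && echo "$h: passwordless OK" \
    || echo "$h: still asks for a password (or unreachable)"
done
```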
and further down
http://docs.opennebula.org/5.2/deployment/node_installation/kvm_node_installation.html#step-5-networking-configuration
Remember that this is only required in the Hosts, not in the Front-end. Also remember that it is not important the exact name of the resources (br0, br1, etc…), however it’s important that the bridges and NICs have the same name in all the Hosts.
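If I read that right, something like the following would have to be done on every Host, always using the same bridge name (br0 here; the NIC name eno1 is just a placeholder for whatever each Host actually has):

```shell
# Create a bridge named br0 and enslave the Host's physical NIC to it.
# The bridge name must match across all Hosts; the NIC name may differ.
ip link add name br0 type bridge
ip link set eno1 master br0   # eno1 is a placeholder NIC name
ip link set br0 up
```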
I think there might be a problem, as the remote servers don't know the hostnames of the local node or the Sunstone server.
- Is that the case (I don't want to "pollute" the filesystem by trying it on a production system), and if it is, how can I work around that?
- Does the Host actually need to access the Sunstone UI's server at all?
That's the main question, but I have more:
- How would I create VMs out of the running servers? Are there tools?
- All machines have different hardware and NIC names. Will there be problems with live migration?
My idea was to create images of the running servers, move them to the local node (because it has enough room to hold all 3 servers), set up ONE nodes on the remote servers, and migrate the VMs back to them.
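The imaging step of that plan could look roughly like this (a sketch only: /dev/sda, the hostname r1, and the target filenames are assumptions, and the box should be quiesced or booted into a rescue system first; tools like virt-p2v also exist for this kind of physical-to-virtual conversion):

```shell
# Pull a raw disk image of the remote server over SSH onto the local node.
ssh root@r1 "dd if=/dev/sda bs=4M" | dd of=r1.img bs=4M

# Convert the raw image to qcow2 so the datastore can thin-provision it.
qemu-img convert -p -O qcow2 r1.img r1.qcow2
```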
Basically I want to virtualize my "bare-metal" servers so I can quickly move the VMs around once hardware gets outdated, eventually even consolidating them onto one big server.
However, I would still like to use the local node afterwards for development purposes, and once a VM is done, move it to the live servers.