Local and remote hosts

Hi all,

This is about ONE 5.2.1.

As far as I understand it, there's:

  • Sunstone, the management UI
  • hosts/nodes, which run the VMs

and you can group a bunch of computers with various hardware together, which is called a cloud.

So I played around with OpenNebula and really liked it.
The fact of the matter is I have 3 servers and I'd like to consolidate them into 1 big server.
I also have a small local server and a larger local server (on the local network).

On the small server I have stuff like named and dhcpd running.
nuc.luketic - a small Intel NUC, where I installed Sunstone. It's always on and perfect for that job.
stack.luketic (l1) - an OpenNebula node (which went from OpenStack to oVirt to Proxmox to now ONE)
^ that's the local network (plus a workstation and a laptop, but they're not used for Nebula), behind an IPv4 NAT router, and also a /56 IPv6 network.

Then there are 3 servers in a datacenter, all with public IPs:
r1 - a Gentoo box running PHP and Go projects
r2 - a CentOS 7 box used for webhosting, barely used, 36 GB
r3 - a high-traffic Arch Linux setup running only 1 domain

So, reading the docs:

    You should verify that connecting from the Front-end, as user oneadmin, to the nodes and the Front-end itself, and from the nodes to the Front-end, does not ask for a password.

and further down:

    Remember that this is only required in the Hosts, not in the Front-end. Also remember that the exact names of the resources (br0, br1, etc.) are not important; however, it is important that the bridges and NICs have the same names in all the Hosts.
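A quick way to run that verification, sketched in shell (the host names are just the ones from this post; BatchMode makes ssh fail instead of prompting for a password):

```shell
# Check that oneadmin can reach every host without a password prompt.
# Host names below are taken from this post; adjust as needed.
hosts="nuc.luketic stack.luketic r1 r2 r3"
for h in $hosts; do
  if ssh -o BatchMode=yes -o ConnectTimeout=5 "oneadmin@$h" true 2>/dev/null; then
    echo "$h: OK"
  else
    echo "$h: cannot log in without a password"
  fi
done
```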

I think there might be a problem, as the remote servers don't know the hostnames of the local node or the Sunstone server.

  • Is that the case (I don't want to "pollute" the filesystem by trying it on a production system), and if it is, how can I work around it?

  • Does the host actually need to access the Sunstone UI's server at all?
    That's the main question, but I have more.

  • How would I create VMs out of the running servers? Are there tools for that?

  • All machines have different hardware and NIC names. Will there be problems with live migration?

My idea was to create images of the running servers, move them to the local node (because it has enough room to hold all 3 servers), set up ONE nodes on the servers, and then migrate the VMs back to the servers.
Basically I want to virtualize my "bare-metal" servers so I can quickly move the VMs around once hardware gets outdated, eventually even consolidating them onto one big server.
However, I would still like to use the local node afterwards for development purposes, and once a VM is done, move it to the live servers.

Did I ask the wrong questions?

OK, I believe I solved this with ~/.ssh/config
and by forwarding ports to SSH in the router's firewall:

    Host stack.luketic
        Hostname my_external_hostname
        Port 12345

So there's just the question left of creating images from live, running "bare metal" servers.
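The general approach I'd try is streaming the whole disk with dd. On a live system you'd really want to boot into a rescue environment first so the filesystem is quiescent, then pull the image over SSH, e.g. `ssh root@r1 "dd if=/dev/sda bs=4M" | dd of=r1.img bs=4M` (device name and host are assumptions, not from this setup). A minimal local demonstration of the same byte-for-byte copy on a scratch file:

```shell
# Demonstrate that dd produces an identical byte-for-byte copy.
printf 'fake disk contents' > disk.src
dd if=disk.src of=disk.img bs=1M status=none
cmp -s disk.src disk.img && result="images identical"
echo "$result"
rm -f disk.src disk.img
```

For a real conversion, tools like virt-p2v exist for exactly this physical-to-virtual job, though I haven't used them on these distros.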

And since I have only 1 external IP per server, I'll just use virbr0 for the network,
add the nameserver to /etc/resolv.conf, and set the default route in the VM:

    ip r add default via

and on the host I can run nginx with per-domain forwarding, which also works for MTAs.
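That per-domain forwarding could be sketched like this — a minimal nginx server block written out from the host (the domain and the VM address on virbr0 are made-up placeholders, not from this setup):

```shell
# Write a minimal nginx reverse-proxy config for one domain.
# example.org and 192.168.122.10 are placeholders for a real domain/VM.
cat > one-example.conf <<'EOF'
server {
    listen 80;
    server_name example.org;               # domain served by the VM
    location / {
        proxy_pass http://192.168.122.10;  # VM address on virbr0
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
EOF
echo "wrote one-example.conf"
```

One such block per domain, each proxying to a different VM behind the NAT.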

I just don't know how to configure my IPv6 prefix, should IPv4 ever really become obsolete.
It always gives me errors.

I put the local host in the default cluster, renamed to "local",
and the remote host in the newly created "remote" cluster.
Then I tried a live migration, but:

[VirtualMachineMigrate] Cannot migrate to host [1]. Host is in cluster [100], and VM requires to be placed on cluster [0]

Shouldn't this be possible? I don't know why there are clusters when you can't migrate from one cluster to the next.
There should be an "Are you sure you want to migrate to a different cluster? [y/n]" dialog.

I assume the image gets copied to the destination and started, then the source is powered off and deleted.
Sure, it might take a while depending on connectivity, but I don't see a reason why it shouldn't work.

Also, if they're placed in the same cluster, I have no means to tell on which host the instance should be created.
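On that last point, unless I'm mistaken the scheduler can actually be pinned per VM: OpenNebula templates accept a SCHED_REQUIREMENTS expression. A sketch (the host ID below is a made-up example):

```
# VM template fragment: only schedule this VM on the host with ID 3
SCHED_REQUIREMENTS = "ID = \"3\""
```

There's also `onevm deploy <vmid> <hostid>` to force a VM onto a specific host directly.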

Oh well, so much for that