I’m having issues connecting to a newly created VM. The idea is to be able to ping the VM and, after that, to set up SSH access via contextualization.
My setup is:
Front-end and node: CentOS 7
onetemplate show 0
onevnet show 0
onevm show 0
Any help would be appreciated!!
You cannot ping the VM because it is in network 10.0.0.0/24, and I believe virbr0 by default is 192.168.122.0/24.
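One way to align the two, as a sketch (the network name, address range, and gateway below are assumptions, not taken from the original template), is to define the ONE virtual network on the subnet that virbr0 already serves:

```shell
# Hypothetical template: put the virtual network on libvirt's default
# virbr0 subnet (192.168.122.0/24). Names and ranges are examples only.
cat > private.net <<'EOF'
NAME   = "private"
BRIDGE = "virbr0"
AR = [ TYPE = "IP4", IP = "192.168.122.100", SIZE = "100" ]
GATEWAY = "192.168.122.1"
DNS     = "192.168.122.1"
EOF

onevnet create private.net   # or update the existing network instead
```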
Hi jfontan and thanks for your help.
I’ve changed the network settings accordingly:
onevnet show 0
then I create the VM:
onevm create --name vm-test --cpu 2 --vcpu 2 --arch x86_64 --memory 2048 --disk 'Ubuntu 16.04' --nic 'private' --ssh ~/.ssh/id_rsa.pub --context NETWORK=YES
then I check that the VM is running and the config is all OK:
onevm show 4
Unfortunately I’m still not able to ping it or to access it via SSH.
Any tips guys?
From where are you trying to ping the VMs?
With this setup you should be able to ping and connect to them from the HV node on which they are running.
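A quick check from the HV node could look like this (the VM ID and the leased IP are assumptions based on this thread, not confirmed values):

```shell
# On the hypervisor node: find the VM's leased IP, then test reachability.
# VM ID 4 and 192.168.122.100 are examples only.
onevm show 4 | grep -i 'IP'                 # note the leased address
ping -c 3 192.168.122.100                   # ICMP from the HV node
ssh -i ~/.ssh/id_rsa root@192.168.122.100   # key injected via contextualization
```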
Accessing the instance over VNC usually helps to diagnose problems like unable to ping or unable to boot.
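If the node sits behind a bastion, one way to reach the VNC console is an SSH tunnel (host names are placeholders; by default OpenNebula assigns VNC port 5900 + VM ID, so VM 4 would listen on 5904):

```shell
# Forward local port 5904 through the bastion to the HV node's VNC port.
# 'bastion' and 'hv-node' are placeholder host names.
ssh -L 5904:hv-node:5904 user@bastion
# then point a VNC client at localhost:5904
```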
As @marcindulak says, the best approach will be accessing the VM with VNC. Also check the IP of virbr0 and try to ping/connect from the same node, as @atodorov_storpool suggests. The bridge created by libvirt is not accessible from other nodes.
Hi @atodorov_storpool, @marcindulak and @jfontan.
Thanks for your help guys.
I’ll reply inline.
— With this setup you should be able to ping and connect to them from the HV node on which they are running.
Yes, you are right: I can ping the newly created VM from within the HV node.
— Accessing the instance over VNC …
I’ve provisioned the OpenNebula stack (front-end and node) within a private cluster. This means I have to SSH into a bastion in order to reach those resources. At this point it would be a time-consuming task to set up a tunnel, NAT, or port forwarding to be able to use VNC from outside the private cluster network. So the VNC option is discarded … sorry.
— …The bridge created by libvirt is not accessible from other nodes.
This image describes the topology layout and the network setup
Now that I can ping/SSH from the HV node to the VM, the next step is to access the newly created VM from the proxy host. How can I make the bridge created by libvirt accessible from the proxy?
Many thanks guys!
I finally solved this issue by:
- Changing the topology of my network to:
- Adding a route from the proxy to the node:
ip route add 192.168.122.0/24 via 192.168.0.2 dev eth01
- Enabling port forwarding (IP forwarding) on the node:
sysctl -w net.ipv4.ip_forward=1
- Refreshing iptables, as the installation adds REJECT rules to iptables.
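The steps above can be sketched as one sequence (the interface name and the exact iptables rules are assumptions; libvirt installs REJECT rules on the FORWARD chain for virbr0, so traffic routed in from outside must be accepted ahead of them):

```shell
# On the proxy: route the libvirt subnet via the node (192.168.0.2).
ip route add 192.168.122.0/24 via 192.168.0.2 dev eth01

# On the node: enable IP forwarding, and persist it across reboots.
sysctl -w net.ipv4.ip_forward=1
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf

# On the node: allow forwarded traffic to/from virbr0 ahead of the
# REJECT rules that libvirt adds to the FORWARD chain.
iptables -I FORWARD 1 -o virbr0 -d 192.168.122.0/24 -j ACCEPT
iptables -I FORWARD 2 -i virbr0 -s 192.168.122.0/24 -j ACCEPT
```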