VM not connecting or pinging in VLAN setup

Hi all! I’m new to OpenNebula in practical terms, but I’ve been looking to test it since 2020.

As a newbie on the subject, I’m facing some issues with my networking setup.

The #1 problem is that when I create a VM, I can’t ping it even from the node running it. I put this VM in VLAN 10, which is the one I’m using on the node and is connected to the Cisco switch.

So, here is my whole setup:

Cisco router 2901
Cisco switch 2960-S with VLAN’s configured
1x KVM node acting as a single-node front-end and host.
1x interface (enp0s25.10) used by OpenNebula as my tagged interface for the VLAN
1x virbr0 bridge (I created another one, virbr1, just for standby use)
Using default security group for everything down the road
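
(For context, a tagged sub-interface like enp0s25.10 can be created with something along these lines; this is only a sketch, not my exact commands:)

  # 802.1Q sub-interface for VLAN 10 on top of the physical NIC
  ip link add link enp0s25 name enp0s25.10 type vlan id 10
  ip link set enp0s25.10 up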

I did a bunch of research on the possible causes related to my setup. I would like to systematically document all the troubleshooting steps and, once this is solved, give back to the community by writing a quick guide on networking setup and troubleshooting for VMs in VLANs, since in the future I plan to use netboot.xyz to provide PXE and HTTPS booting for other nodes, with Firecracker and Kubernetes in place.

So, finally, what points should I check? (A rough sketch of the commands I have in mind follows the list below.)

  1. Bridges
  2. Switches and router (Cisco) configuration
  3. tcpdump from interfaces and other components
  4. Virtual router from OpenNebula
  5. Firewall and iptables rules
  6. ARP and ICMP traffic in the node
  7. Outputs from OpenNebula components
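For reference, here is the sketch of the commands I have in mind for those checks (interface and bridge names are from my setup and may differ; this is only an outline, not a verified procedure):

  # 1. bridges and the tagged VLAN interface on the node
  ip -d link show enp0s25.10
  ip link show type bridge

  # 3 / 6. ARP and ICMP traffic on the tagged interface and the bridge
  sudo tcpdump -ni enp0s25.10 arp or icmp
  sudo tcpdump -ni virbr0 arp or icmp

  # 5. firewall rules that could be dropping the traffic
  sudo iptables -L -n -v
  sudo nft list ruleset

  # 7. OpenNebula side: host, network and VM state
  onehost list
  onevnet list
  onevm show VM_ID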

Versions of the related components and OS (frontend, hypervisors, VMs):
KVM single-node: Rocky Linux 8.7
OpenNebula 6.6

Steps to reproduce:
Create a VM with a VLAN associated to it, then try to ping it from any other host on the network.

Current results:
Nothing reaches the VM.

Expected results:
Ping and connect via SSH into VM.
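
(Concretely, the check from another host on VLAN 10 is just something like the following, using the address reported by onevm show; VM_IP is a placeholder and the login user depends on the image:)

  ping -c 4 VM_IP
  ssh root@VM_IP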

Any other points?

Cheers and thanks for any help.

Hello

In my opinion, you’d better delegate the creation of the vnet bridge to OpenNebula, so you should only configure the Virtual Network to use the enp0s25 physical device.
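
As a rough example (the names and the address range are just guesses to illustrate, adjust them to your environment), the Virtual Network could be defined like this:

  cat > vlan10.net <<'EOF'
  NAME    = "vlan10"
  VN_MAD  = "802.1Q"
  PHYDEV  = "enp0s25"
  VLAN_ID = "10"
  AR      = [ TYPE = "IP4", IP = "192.168.10.100", SIZE = "50" ]
  EOF
  onevnet create vlan10.net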

OpenNebula should automatically create the VLAN 10 bridge on the necessary hosts (check it with ip link and ip addr), attaching the NIC of the VM directly there. After that you can see if there is any traffic on that host with sudo tcpdump -i BRIDGE_IFACE, and from there you can also check whether the packets arrive correctly at the router.
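
Something along these lines (BRIDGE_IFACE is a placeholder; the actual bridge name depends on how OpenNebula names it for that network, so check what appears on the host):

  ip link show type bridge            # list the bridges OpenNebula created
  ip addr show                        # check addresses and the tagged interface
  bridge link show                    # see which ports (VM NIC, enp0s25.10) are attached
  sudo tcpdump -ni BRIDGE_IFACE arp or icmp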

Thanks for the reply @brunorro

It’s working now, but when trying to start VMs to activate SSH, I get the following:

Sun Feb 11 15:28:27 2024: Cannot dispatch VM: No system datastore meets capacity and SCHED_DS_REQUIREMENTS: ("CLUSTERS/ID" @> 0) & (TM_MAD = "ssh")

Is this related to the VM template or something like that? Should I check the templates via CLI to see how they’re configured, to prevent this from happening in the future with other VMs?

Well, I was able to solve the previous error by creating a system datastore for SSH.
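
For anyone hitting the same error, this is roughly what I did (the datastore name is mine, and the cluster assignment may differ in your setup):

  cat > system-ssh.ds <<'EOF'
  NAME   = "system_ssh"
  TYPE   = SYSTEM_DS
  TM_MAD = "ssh"
  EOF
  onedatastore create system-ssh.ds
  onedatastore list                      # confirm it shows up
  # if needed, attach it to the cluster the scheduler expects:
  # onecluster adddatastore 0 system_ssh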

But now, the next big thing is to configure the templates to carry some SSH keys, so I can log into the VMs.

Hi again,

You can configure the SSH keys in the context section of the templates.
By default, if you have contextualization and the image supports it, you can add an SSH public key to a user, and it will be added by default to all the templates instantiated by that user. If you want to add more SSH public keys to the authorized keys, you can add all of them in the “SSH public key” field of the template context.
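
As a minimal sketch (assuming a contextualization-enabled image; the literal keys below are placeholders), the relevant part of a VM template would be something like:

  CONTEXT = [
    NETWORK        = "YES",
    SSH_PUBLIC_KEY = "$USER[SSH_PUBLIC_KEY]"
  ]

  # or with explicit keys, one per line inside the quotes:
  # CONTEXT = [ SSH_PUBLIC_KEY = "ssh-ed25519 AAAA... user1
  # ssh-rsa AAAA... user2" ]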

Thanks again @brunorro !

In this case, as usual, if any user wants to connect to the VM, they must have the public key, correct? In the case of a jump/bastion host, this would not be necessary, since that very host already has the key.

Hello, @pedroalvesbatista

In this case, the user should have the private key that is paired with that public key. If it is not their default key, they can also log in with the private key using ssh -i privkey user@vm_ip, or they can set it up in their ~/.ssh/config file (check examples on the internet and the man 5 ssh_config page).
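
For example, a minimal ~/.ssh/config entry could look like this (the host alias, IP, user and key path are placeholders; the ProxyJump line is only needed if you go through the jump/bastion host mentioned above):

  Host one-vm
      HostName 192.168.10.101
      User root
      IdentityFile ~/.ssh/one_vm_key
      # ProxyJump bastion-user@bastion.example.com

  # afterwards, connecting is just: ssh one-vm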

Thank you!