Can't ping/ssh VM

Hi everyone! I’m new to OpenNebula and still struggling to run my first VM.

I’m able to run a VM and attach a private network to it. The VM has status “running”, the NIC is attached, an IP is assigned, and there are no errors in the logs. However, I can’t ping or SSH into the VM. Please advise.

Environment:
Host: Ubuntu 24.04
Hypervisor: KVM
OpenNebula: 6.10.0-1
Guest image: Alpine 3.20 or Ubuntu 22.04 Minimal

// Available bridges (brctl show)

bridge name     bridge id               STP enabled     interfaces
br0             8000.e283e4cf86e7       no              enp1s0f0
br1             8000.aeba22226b3c       no              enp1s0f1
                                                        one-7-0
virbr0          8000.52540014962e       yes             vnet5

// Private bridge config (ip a s br1)

25: br1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ae:ba:22:22:6b:3c brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.1/24 brd 192.168.1.255 scope global noprefixroute br1
       valid_lft forever preferred_lft forever
    inet6 fe80::acba:22ff:fe22:6b3c/64 scope link 
       valid_lft forever preferred_lft forever

// Virtual network (VNET) template

NAME        = "private"
DESCRIPTION = "Private network"

# Use the existing bridge
BRIDGE = br1
VN_MAD = "bridge"

# Context attributes
NETWORK_ADDRESS = "192.168.1.0"
GATEWAY         = "192.168.1.1"
NETWORK_MASK    = "255.255.255.0"
DNS             = "8.8.8.8"
SEARCH_DOMAIN   = "my.local"

# Address ranges: only these addresses will be assigned to the VMs
AR=[TYPE = "IP4", IP = "192.168.1.100", SIZE = "100" ]
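
For completeness, this is roughly how I created and checked the network (a sketch; assuming the template above is saved as private.net):

// Creating the virtual network from the template above
onevnet create private.net
onevnet show private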

Any help is greatly appreciated!

I’m able to run a native KVM VM, which gets an IP from two bridges: the default KVM bridge virbr0 and my private bridge br1.

I can ping/SSH the native KVM VM, but I can’t access the OpenNebula VM on the same network.

One thing I noticed is that OpenNebula spawns a KVM VM with a different network setup:

// Native KVM VM network interface

    <interface type='network'>
      <mac address='52:54:00:00:00:32'/>
      <source network='default'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>

// OpenNebula VM network interface

    <interface type='bridge'>
      <mac address='02:00:c0:a8:01:64'/>
      <source bridge='br1'/>
      <target dev='one-7-0'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>

So the OpenNebula VM gets an interface of type “bridge” attached directly to br1, and that doesn’t work, while the native KVM VM gets an interface of type “network” on the default libvirt network, and that works.
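
In case it helps, the XML snippets above were taken from the libvirt domain definitions, along these lines (the domain name one-7 is an assumption based on the one-7-0 tap device):

// Dumping the interface section of the domain (one-7 assumed from the tap device name)
virsh -c qemu:///system dumpxml one-7 | grep -A 7 '<interface'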

Any clues on how to debug this? :roll_eyes: Thank you in advance!

Hello,

here is an example that works in our environment:

BRIDGE = "pub-br"
BRIDGE_TYPE = "linux"
DESCRIPTION = "Public IPv4 network"
DNS = "8.8.8.8 8.8.8.8 77.88.8.8"
FILTER_IP_SPOOFING = "YES"
FILTER_MAC_SPOOFING = "YES"
INBOUND_AVG_BW = "12500"
METHOD = "static"
OUTBOUND_AVG_BW = "25000"
OUTER_VLAN_ID = ""
PHYDEV = "pub-bond"
SECURITY_GROUPS = "0"
VLAN_ID = ""
VN_MAD = "fw"

Add an IP range to the address pool and the VM should get an address.
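
For example, from the CLI (a sketch; substitute your VNET ID):

// Adding an address range to an existing VNET
onevnet addar <VNET_ID> --ip 192.168.1.100 --size 100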

Thanks for sharing this!

However, I’m still getting the same problem :roll_eyes:
Could you please :pray: share your networking settings (output of ip a s and ip r)?

Thanks a lot for your help!

Here are the node’s network-scripts settings. Bonding is used.

DEVICE=ens3f0
ONBOOT=yes
MASTER=public-bond
SLAVE=yes
USERCTL=no
TYPE=Ethernet
NAME="ens3f0"
UUID=f72a9423-6976-4ad7-8ac4-cb0b03406c38

DEVICE=enp129s0f0
ONBOOT=yes
MASTER=public-bond
SLAVE=yes
USERCTL=no
TYPE=Ethernet
NAME="enp129s0f0"
UUID=f33816ca-305c-45d3-8dce-eb109b507fba

DEVICE=pub-bond
ONBOOT=yes
TYPE=Bond
BRIDGE=public-bridge
BONDING_OPTS="mode=802.3ad miimon=100"
BONDING_MASTER=yes
HWADDR=
NAME="Bond"
UUID=5e0d7b92-d8f5-405e-a522-9e9201c5764f

STP=yes
BRIDGING_OPTS=priority=32768
TYPE=Bridge
HWADDR=
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=no
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=default
NAME=bridge-br1
UUID=1b5b5d59-e933-4a5b-9d9d-f08aeab9d83d
DEVICE=public-bridge
ONBOOT=yes
IPADDR=$IP
PREFIX=24
GATEWAY=$GATEWAY
DNS1=8.8.8.8
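
To verify that the bond and bridge came up, you can check something like this (a sketch; the device names pub-bond and public-bridge are taken from the configs above):

// Checking bond and bridge status on the node
cat /proc/net/bonding/pub-bond
ip a s public-bridge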

Hello,

Let’s summarise the setup:

  • A single KVM host with an OS-configured bridge br1 on the enp1s0f1 interface, and IP 192.168.1.1/24 set on br1.
  • A VNET configured in OpenNebula with VN_MAD=bridge, BRIDGE=br1, and PHYDEV not defined, with a single AR starting at IP 192.168.1.100 with 100 addresses.
  • A VM started on the KVM host with an assigned NIC (assuming the given IP is 192.168.1.100).

Here are some ideas on what to check (the commands are collected in the sketch after the list):

  1. Is the NIC in the VM configured? Does the VM have the IP 192.168.1.100 configured? What is the output of ip -4 addr list inside the VM?
  2. Does the bridge br1 still have the IP 192.168.1.1 on the host? What is the output of ip -4 addr list on the host?
  3. Is the VM interface one-{VM_ID}-{NIC_ID} associated with br1 on the host? What is the output of bridge link show and ip link list? (Here br1 should have two interfaces assigned: one-VM_ID-NIC_ID and enp1s0f1.)
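
A command sketch for the checks above, assuming VM_ID=7 and NIC_ID=0 (matching the one-7-0 device seen earlier):

// 1 and 2: list IPv4 addresses (run inside the VM and on the host)
ip -4 addr list

// 3: list bridge ports and links on the host; one-7-0 and enp1s0f1 should both show master br1
bridge link show
ip link list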

If the NIC in the VM is configured with IP 192.168.1.100/24, br1 on the host is configured with 192.168.1.1/24, and br1 has one-VM_ID-NIC_ID associated, then a ping from the host to 192.168.1.100 should succeed…
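
And a quick check from the host once the above holds (a sketch, assuming the VM got 192.168.1.100):

// Connectivity check from the host to the VM
ping -c 3 192.168.1.100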

Best Regards,
Anton Todorov