Question about Bridged Networks

Hello, I have a cluster with two nodes: a master node running all the OpenNebula services, which is also configured as a KVM node, and another machine running only the KVM node services. I'm unsure how to configure the networking on both nodes. On the master node I have two network interfaces; on one of them I set up a bridge called nebula0, as shown here:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master nebula0 state UP group default qlen 1000
    link/ether f4:8e:38:e0:41:79 brd ff:ff:ff:ff:ff:ff
3: eno2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether f4:8e:38:e0:41:7a brd ff:ff:ff:ff:ff:ff
    inet 10.2.250.14/24 brd 10.2.250.255 scope global noprefixroute eno2
       valid_lft forever preferred_lft forever
    inet6 fe80::f68e:38ff:fee0:417a/64 scope link
       valid_lft forever preferred_lft forever
4: nebula0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether f4:8e:38:e0:41:79 brd ff:ff:ff:ff:ff:ff
    inet 10.2.250.13/24 brd 10.2.250.255 scope global noprefixroute nebula0
       valid_lft forever preferred_lft forever
    inet6 fe80::f68e:38ff:fee0:4179/64 scope link
       valid_lft forever preferred_lft forever
5: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:d0:cf:c6 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever

On the other node, called NODE1, I haven't made any network configuration, so its interfaces are:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 84:7b:eb:e4:f4:32 brd ff:ff:ff:ff:ff:ff
    inet 10.2.250.22/24 brd 10.2.250.255 scope global dynamic eno1
       valid_lft 69466sec preferred_lft 69466sec
    inet6 fe80::867b:ebff:fee4:f432/64 scope link
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:0a:81:63 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:0a:81:63 brd ff:ff:ff:ff:ff:ff

I've created one VNET using Sunstone, which is configured as follows:

root@NEBULACLUSTER-MASTER-MACAE-ED-COMPANY:/etc/netplan# onevnet list
  ID USER     GROUP    NAME          CLUSTERS BRIDGE   LEASES
   9 alessand oneadmin Private VNET  100      nebula0       2
root@NEBULACLUSTER-MASTER-MACAE-ED-COMPANY:/etc/netplan# onevnet show 9
VIRTUAL NETWORK 9 INFORMATION
ID                       : 9
NAME                     : Private VNET
USER                     : alessandro.caetano
GROUP                    : oneadmin
LOCK                     : None
CLUSTERS                 : 100
BRIDGE                   : nebula0
VN_MAD                   : bridge
PHYSICAL DEVICE          : eno1
AUTOMATIC VLAN ID        : NO
AUTOMATIC OUTER VLAN ID  : NO
USED LEASES              : 2

PERMISSIONS
OWNER                    : um-
GROUP                    : ---
OTHER                    : ---

VIRTUAL NETWORK TEMPLATE
BRIDGE="nebula0"
BRIDGE_TYPE="linux"
DNS="10.2.250.1"
GATEWAY="10.2.250.1"
GUEST_MTU="1500"
NETWORK_ADDRESS="10.2.250.13"
NETWORK_MASK="255.255.255.0"
PHYDEV="eno1"
SECURITY_GROUPS="0"
VN_MAD="bridge"

ADDRESS RANGE POOL
AR 0
SIZE           : 100
LEASES         : 2

RANGE                                   FIRST                               LAST
MAC                         02:00:0a:02:fa:64                  02:00:0a:02:fa:c7
IP                               10.2.250.100                       10.2.250.199


LEASES
AR  OWNER                         MAC              IP                        IP6
0   V:38            02:00:0a:02:fa:64    10.2.250.100                          -
0   V:39            02:00:0a:02:fa:65    10.2.250.101                          -

VIRTUAL ROUTERS

Whenever I create a VM on the master node, the VM has internet connectivity; however, when the VM runs on NODE1, it doesn't. Do I need to create the same bridge on every node that I add to the OpenNebula cluster? If so, what's the point of selecting a bridged network instead of the dummy VN_MAD?


Versions of the related components and OS (frontend, hypervisors, VMs):
OpenNebula

ii  opennebula                             6.0.0.1-1.ce                          amd64        OpenNebula Server and Scheduler (Community Edition)
ii  opennebula-common                      6.0.0.1-1.ce                          all          Common OpenNebula package shared by various components (Community Edition)
ii  opennebula-common-onecfg               6.0.0.1-1.ce                          all          Helpers for OpenNebula onecfg (Community Edition)
ii  opennebula-fireedge                    6.0.0.1-1.ce                          amd64        OpenNebula web interface FireEdge (Community Edition)
ii  opennebula-flow                        6.0.0.1-1.ce                          all          OpenNebula Flow server (Community Edition)
ii  opennebula-gate                        6.0.0.1-1.ce                          all          OpenNebula Gate server (Community Edition)
ii  opennebula-guacd                       6.0.0.1-1.ce                          amd64        Provides Guacamole server for Fireedge to be used in Sunstone (Community Edition)
ii  opennebula-libs                        6.0.0.1-1.ce                          all          OpenNebula libraries (Community Edition)
ii  opennebula-node-kvm                    6.0.0.1-1.ce                          all          Services for OpenNebula KVM node (Community Edition)
ii  opennebula-provision                   6.0.0.1-1.ce                          all          OpenNebula infrastructure provisioning (Community Edition)
ii  opennebula-provision-data              6.0.0.1-1.ce                          all          OpenNebula infrastructure provisioning data (Community Edition)
ii  opennebula-rubygems                    6.0.0.1-1.ce                          all          Ruby dependencies for OpenNebula (Community Edition)
ii  opennebula-sunstone                    6.0.0.1-1.ce                          all          OpenNebula web interface Sunstone (Community Edition)
ii  opennebula-tools                       6.0.0.1-1.ce                          all          OpenNebula command line tools (Community Edition)

Master Node Services

  opennebula-fireedge.service                                                              loaded active running   OpenNebula FireEdge Server
  opennebula-flow.service                                                                  loaded active running   OpenNebula Flow Service
  opennebula-gate.service                                                                  loaded active running   OpenNebula Gate Service
  opennebula-guacd.service                                                                 loaded active running   OpenNebula Guacamole Server
  opennebula-hem.service                                                                   loaded active running   OpenNebula Hook Execution Service
  opennebula-novnc.service                                                                 loaded active running   OpenNebula noVNC Server
  opennebula-scheduler.service                                                             loaded active running   OpenNebula Cloud Scheduler Daemon
  opennebula-ssh-agent.service                                                             loaded active running   OpenNebula SSH agent
  opennebula-sunstone.service                                                              loaded active running   OpenNebula Web UI Server
  opennebula.service                                                                       loaded active running   OpenNebula Cloud Controller Daemon
  opennebula-showback.timer                                                                loaded active waiting   OpenNebula's periodic showback calculation
  opennebula-ssh-socks-cleaner.timer                                                       loaded active waiting   OpenNebula SSH persistent connection cleaner

Master Node

No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 20.04.2 LTS
Release:        20.04
Codename:       focal

Node 1

No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 20.04.2 LTS
Release:        20.04
Codename:       focal

Hi @alessandro.caetano,

The bridge driver takes care of creating the Linux bridge on a host (if it is not already present) when a VM is instantiated there. However, you are in charge of configuring the routing if you want the VMs to have Internet access.
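
To make that more concrete, what the driver does on the node at deployment time is roughly equivalent to the commands below. This is only a sketch of the idea, not the actual driver code; nebula0 and eno1 are the names from this thread, and one-38-0 is a made-up tap name following OpenNebula's one-<vmid>-<nicid> pattern:

ip link add name nebula0 type bridge   # create the bridge if it does not exist yet
ip link set nebula0 up
ip link set eno1 master nebula0        # attach the PHYDEV defined in the VNET
ip link set one-38-0 master nebula0    # plug the VM's tap interface into the bridge

Note that this only wires the bridge together; IP addressing and routing on the host are left as they are, which is the part you have to handle yourself.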

Cheers.

Hello @rdiaz,

Are there any good tutorials on how to do that?

Hello, it's hard to help you if we don't know your network topology. It looks like you have a router/gateway doing NAT, and you need to set up the network so the VMs can reach that gateway to get Internet access.
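
A few things worth checking on NODE1 (and from inside a VM running there) to see where the traffic stops. These are plain iproute2/ping commands, with the names and addresses taken from the output earlier in this thread:

bridge link show                # is eno1 (the PHYDEV) actually enslaved into nebula0?
ip addr show dev nebula0        # does the bridge exist and is it up on this node?
ip route show default           # does the host still have a default route after bridging?
ping -c 3 10.2.250.1            # can the host / the VM reach the gateway from the VNET template?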

It was easier for me to configure the bridges manually on the hosts. I still don't see the usefulness of the bridge driver if I have to set up the routing manually anyway.
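
For anyone hitting the same thing: the manual configuration on an Ubuntu 20.04 node boils down to a netplan file along these lines. This is only a sketch; the file name is up to you, the interface name is the one from NODE1 above, and the address/gateway are assumptions based on the VNET template, so adjust them to your own environment:

cat > /etc/netplan/01-nebula0.yaml <<'EOF'
network:
  version: 2
  renderer: networkd
  ethernets:
    eno1:
      dhcp4: no
  bridges:
    nebula0:
      interfaces: [eno1]
      addresses: [10.2.250.22/24]   # pick a free, static address for the node itself
      gateway4: 10.2.250.1
      nameservers:
        addresses: [10.2.250.1]
      dhcp4: no
EOF
netplan apply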