Network setup walkthrough in OpenNebula 5.0.2

I have just finished setting up the front-end and two KVM hosts in OpenNebula 5.0.2: one machine (I call it eagle) is both the front-end and a host, and the other (I call it orion) is a host only. I can manage images, create templates, and create VMs, but I cannot ping or SSH into the VMs.

I want a bridged setup in which the bridge gets a dynamic IP via DHCP.
I believe the problem is in my bridge configuration, so here is what I have:

On the host-only computer (orion) I created /etc/sysconfig/network-scripts/ifcfg-br0:

DEVICE=br0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=dhcp
NM_CONTROLLED=no

I also created /etc/sysconfig/network-scripts/ifcfg-enp0s25:

DEVICE=enp0s25
BOOTPROTO=none
NM_CONTROLLED=no
ONBOOT=yes
TYPE=Ethernet
BRIDGE=br0

Another noticeable thing: the configuration takes effect after service network restart but not after a reboot, so I always have to run service network restart after booting.
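My guess (assuming a CentOS/RHEL 7 system, judging by the /etc/sysconfig paths) is that NetworkManager is still bringing the interfaces up at boot, since NM_CONTROLLED=no only tells it to skip those particular files. A sketch of handing networking over to the legacy network service instead:

```shell
# Assumption: CentOS/RHEL 7 with the initscripts-based network service
# installed. Stop NetworkManager from managing the interfaces at boot
# so the bridge configuration in /etc/sysconfig takes effect:
systemctl disable NetworkManager
systemctl stop NetworkManager
systemctl enable network        # legacy initscripts-based networking
systemctl restart network
```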

Here is what ifconfig shows on the host-only computer (orion):

br0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.60.236  netmask 255.255.255.0  broadcast 192.168.60.255
        inet6 fe80::be30:5bff:feb1:b9e5  prefixlen 64  scopeid 0x20<link>
        ether bc:30:5b:b1:b9:e5  txqueuelen 0  (Ethernet)
        RX packets 13668  bytes 13612356 (12.9 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8572  bytes 1043728 (1019.2 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp0s25: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::be30:5bff:feb1:b9e5  prefixlen 64  scopeid 0x20<link>
        ether bc:30:5b:b1:b9:e5  txqueuelen 1000  (Ethernet)
        RX packets 16212  bytes 14038320 (13.3 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8651  bytes 1088559 (1.0 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device interrupt 21  memory 0xf7ae0000-f7b00000  

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 48  bytes 4656 (4.5 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 48  bytes 4656 (4.5 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:3f:ca:d7  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

On the front-end+host machine (eagle) I have not defined a bridge yet, because when I did, I lost the ability to log in to the Sunstone UI from other computers: I could still reach it from the front-end machine itself at localhost:9869, but no longer from other computers at 192.168.60.248:9869, which worked before the bridge setup.
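For reference, if I were to mirror orion's setup, eagle's files would look like this (not applied yet; the interface name comes from the ifconfig output below):

```shell
# /etc/sysconfig/network-scripts/ifcfg-br0 on eagle (sketch)
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=dhcp        # the bridge, not the NIC, requests the IP
NM_CONTROLLED=no

# /etc/sysconfig/network-scripts/ifcfg-enp0s25 on eagle (sketch)
DEVICE=enp0s25
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none        # the enslaved NIC must not request its own IP
NM_CONTROLLED=no
BRIDGE=br0
```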

ifconfig on my front-end+host machine (eagle) shows:

enp0s25: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.60.248  netmask 255.255.255.0  broadcast 192.168.60.255
        inet6 fe80::be30:5bff:feb2:7e29  prefixlen 64  scopeid 0x20<link>
        ether bc:30:5b:b2:7e:29  txqueuelen 1000  (Ethernet)
        RX packets 58592  bytes 40704760 (38.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 36596  bytes 6955757 (6.6 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device interrupt 21  memory 0xf7ae0000-f7b00000  

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 60976  bytes 373238888 (355.9 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 60976  bytes 373238888 (355.9 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:a1:c9:a8  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Can you please guide me through this? I have spent two days trying to fix it, reading many forums and tutorials, without finding a solution. How should I set up the bridge with this configuration?

I assume the VMs are attached to an OpenNebula virtual network that uses br0, and that if you start two VMs in that network they cannot ping each other, right? (You need to test this by logging in to the VMs through VNC.) If that is your problem, you need to review any filtering rules on the hypervisors. It may be useful to capture tcpdump traces from the VM originating the ping and see where the communication is being cut. You may also want to double-check the IP configuration of the VMs, just in case.
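Something along these lines (the bridge name comes from your setup; adjust as needed):

```shell
# On the hypervisor: watch ICMP on the bridge while pinging from a VM
tcpdump -ni br0 icmp

# List filtering rules that could be dropping bridged traffic
iptables -L -n -v
ebtables -L                      # if ebtables is installed

# Bridged frames may be passed through iptables; check the toggle
sysctl net.bridge.bridge-nf-call-iptables
```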

This sounds to me like your /etc/init.d/networking script might not be in the default runlevel. I remember running into this problem on Alpine Linux.

Try:
# rc-status
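and, if networking is indeed missing from a runlevel, something like this (OpenRC syntax; on systemd-based distros the equivalent is systemctl enable network):

```shell
# Show services per runlevel; look for networking
rc-status --all

# If networking is not listed in the default runlevel, add it
rc-update add networking default
```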