miniONE: Cannot ping any host from VM

I’ve installed miniONE on a single Hetzner host for testing and evaluation. The problem is that any VM I create can’t reach anything except its own local IP address.

I’ve tried the CentOS template that comes with miniONE, but still no luck. Here is the VM network configuration and ping status:

miniONE creates a single virtual network (vnet) with default gateway 172.16.100.1, but the VM can’t even reach that address.

This is the output of ifconfig on the host:

root@minione ~ # ifconfig
enp0s31f6: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 94.130.18.47  netmask 255.255.255.192  broadcast 94.130.18.63
        inet6 2a01:4f8:10b:16d0::2  prefixlen 64  scopeid 0x0<global>
        inet6 fe80::921b:eff:fecd:7932  prefixlen 64  scopeid 0x20<link>
        ether 90:1b:0e:cd:79:32  txqueuelen 1000  (Ethernet)
        RX packets 8038453  bytes 11851765923 (11.0 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1629857  bytes 355940273 (339.4 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device interrupt 16  memory 0xef200000-ef220000

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 1312757  bytes 859450512 (819.6 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1312757  bytes 859450512 (819.6 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

one-10-0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::fc00:acff:fe10:6405  prefixlen 64  scopeid 0x20<link>
        ether fe:00:ac:10:64:05  txqueuelen 1000  (Ethernet)
        RX packets 256  bytes 11072 (10.8 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 4  bytes 360 (360.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

tap0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        ether 9e:8d:0b:b8:5b:49  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

From this, I would say that one-10-0 is the VM’s NIC, but no IP address is assigned to it. I’m not sure if that is OK.
Now, this is what /etc/network/interfaces looks like:

### Hetzner Online GmbH installimage

source /etc/network/interfaces.d/*

auto lo
iface lo inet loopback
iface lo inet6 loopback

auto enp0s31f6
iface enp0s31f6 inet static
  address 94.130.18.47
  netmask 255.255.255.192
  gateway 94.130.18.1
  # route 94.130.18.0/26 via 94.130.18.1
  up route add -net 94.130.18.0 netmask 255.255.255.192 gw 94.130.18.1 dev enp0s31f6

iface enp0s31f6 inet6 static
  address 2a01:4f8:10b:16d0::2
  netmask 64
  gateway fe80::1
source /etc/network/interfaces.d/*.cfg

There are also two .cfg files referenced here, created by miniONE.
/etc/network/interfaces.d/minionebr.cfg:

auto minionebr
iface minionebr inet static
  address 172.16.100.1
  network 172.16.100.0
  netmask 255.255.255.0
  bridge_stp off
  bridge_fd 0
  bridge_maxwait 0
  bridge_ports tap0

and /etc/network/interfaces.d/tap.cfg:

iface tap0 inet manual
    pre-up ip tuntap add tap0 mode tap user root

Can somebody help me shed some light on this?

Just for further reference, this is the VM log:

Thu Jul 23 10:36:40 2020 [Z0][VM][I]: New state is ACTIVE
Thu Jul 23 10:36:40 2020 [Z0][VM][I]: New LCM state is PROLOG
Thu Jul 23 10:36:41 2020 [Z0][VM][I]: New LCM state is BOOT
Thu Jul 23 10:36:41 2020 [Z0][VMM][I]: Generating deployment file: /var/lib/one/vms/10/deployment.0
Thu Jul 23 10:36:41 2020 [Z0][VMM][I]: Successfully execute transfer manager driver operation: tm_context.
Thu Jul 23 10:36:41 2020 [Z0][VMM][I]: ExitCode: 0
Thu Jul 23 10:36:41 2020 [Z0][VMM][I]: Successfully execute network driver operation: pre.
Thu Jul 23 10:36:41 2020 [Z0][VMM][I]: ExitCode: 0
Thu Jul 23 10:36:41 2020 [Z0][VMM][I]: Successfully execute virtualization driver operation: deploy.
Thu Jul 23 10:36:42 2020 [Z0][VMM][I]: # Warning: iptables-legacy tables present, use iptables-legacy to see them
Thu Jul 23 10:36:42 2020 [Z0][VMM][I]: # Warning: ip6tables-legacy tables present, use ip6tables-legacy to see them
Thu Jul 23 10:36:42 2020 [Z0][VMM][I]: # Warning: iptables-legacy tables present, use iptables-legacy to see them
Thu Jul 23 10:36:42 2020 [Z0][VMM][I]: # Warning: ip6tables-legacy tables present, use ip6tables-legacy to see them
Thu Jul 23 10:36:42 2020 [Z0][VMM][I]: # Warning: iptables-legacy tables present, use iptables-legacy to see them
Thu Jul 23 10:36:42 2020 [Z0][VMM][I]: # Warning: ip6tables-legacy tables present, use ip6tables-legacy to see them
Thu Jul 23 10:36:42 2020 [Z0][VMM][I]: # Warning: iptables-legacy tables present, use iptables-legacy to see them
Thu Jul 23 10:36:42 2020 [Z0][VMM][I]: # Warning: ip6tables-legacy tables present, use ip6tables-legacy to see them
Thu Jul 23 10:36:42 2020 [Z0][VMM][I]: ExitCode: 0
Thu Jul 23 10:36:42 2020 [Z0][VMM][I]: Successfully execute network driver operation: post.
Thu Jul 23 10:36:42 2020 [Z0][VM][I]: New LCM state is RUNNING

Well, I managed to solve it. Posting the solution as a reference in case anyone runs into the same problem.

It turns out the problem was in the /etc/network/interfaces file. It contained these two conflicting source lines:

source /etc/network/interfaces.d/*
source /etc/network/interfaces.d/*.cfg

Once I removed one of them, everything worked. (The interfaces.d/* glob already matches the .cfg files, so they were being parsed twice.)
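For anyone checking their own setup, a quick way to spot the overlap is to count the source stanzas (a sketch; the sample file below just reproduces the conflicting lines rather than touching the real /etc/network/interfaces):

```shell
#!/bin/sh
# Reproduce the two conflicting stanzas in a throwaway sample file
cat > /tmp/interfaces.sample <<'EOF'
source /etc/network/interfaces.d/*
auto lo
iface lo inet loopback
source /etc/network/interfaces.d/*.cfg
EOF

# Count 'source' stanzas pointing into interfaces.d; more than one is
# suspect, because 'interfaces.d/*' already matches everything that
# 'interfaces.d/*.cfg' does, so every .cfg file gets parsed twice.
count=$(grep -c '^source /etc/network/interfaces\.d/' /tmp/interfaces.sample)
echo "source stanzas: $count"   # prints "source stanzas: 2"
```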

Good catch with the duplicate source.

MiniONE added the second line; it tries to avoid creating a duplicate by grepping for the exact string ‘source /etc/network/interfaces.d/*.cfg’, which misses the broader pre-existing stanza. This could be improved.
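One possible improvement (a sketch only; I haven't checked how the minione script structures this internally): instead of grepping for the exact literal line, match any source stanza whose glob already covers interfaces.d, which would also catch Hetzner's pre-existing `source /etc/network/interfaces.d/*`. Demonstrated on a sample file so the sketch stays self-contained:

```shell
#!/bin/sh
# Sample file containing only the broader pre-existing stanza
sample=/tmp/interfaces.demo
printf 'source /etc/network/interfaces.d/*\nauto lo\n' > "$sample"

# Match any 'source' stanza whose glob already covers interfaces.d,
# not just the exact '*.cfg' literal, and only append when none exists
if grep -q '^source /etc/network/interfaces\.d/\*' "$sample"; then
    echo "interfaces.d already sourced; skipping"
else
    echo 'source /etc/network/interfaces.d/*.cfg' >> "$sample"
fi
```

With the sample above, the check fires and no second stanza is appended.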

What was the OS and version of the host?

Host is Debian 10.

I must say I had a lot more trouble after this as well. The network would stop working intermittently. What I ended up doing was the following:

  1. installed bridge-utils, otherwise I was getting the error “can not find minionebr” interface
  2. changed the tap0 interface setting from manual to auto, otherwise I was getting the error “interface tap0 does not exist”
  3. persisted the IPv4 forwarding setting, since it was getting disabled after each reboot
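For reference, the three fixes above roughly correspond to the following on Debian 10 (a sketch; the sysctl.d drop-in filename is my own choice, adjust as needed):

```shell
# 1. Install bridge-utils so ifupdown can create the minionebr bridge
apt-get install -y bridge-utils

# 2. Bring tap0 up automatically at boot, i.e. change
#    /etc/network/interfaces.d/tap.cfg to:
#      auto tap0
#      iface tap0 inet manual
#          pre-up ip tuntap add tap0 mode tap user root

# 3. Persist IPv4 forwarding across reboots via a sysctl drop-in
echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/99-minione.conf
sysctl -p /etc/sysctl.d/99-minione.conf
```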

Since this is miniONE (intended for evaluation/testing), I guess IPv4 forwarding was not persisted on purpose? It might be a good idea to have the minione script persist it by default, since this causes confusion and makes people believe OpenNebula is unstable.

I tried to fix the issues (in the master branch, https://github.com/OpenNebula/minione/blob/master/minione).

Great. Thanks for the update!