VMs can't route out to the Internet but can talk to the frontend fine

Sorry about how long this is. I'm probably doing something wrong, but I've searched the OpenNebula docs and don't have any clues on how to approach it. I have a VM up on a virtual network and assigned it the IP address 192.168.122.10. From the VM I can ping 192.168.122.1, which I guess is the KVM node it's running on. My config:

[oneadmin@frontend ~]$ onevnet show 1
VIRTUAL NETWORK 1 INFORMATION
ID                       : 1
NAME                     : vmnet0
USER                     : oneadmin
GROUP                    : oneadmin
LOCK                     : None
CLUSTERS                 : 0
BRIDGE                   : virbr0
VN_MAD                   : 802.1Q
PHYSICAL DEVICE          : eth0
VLAN ID                  : 40
AUTOMATIC VLAN ID        : NO
AUTOMATIC OUTER VLAN ID  : NO
USED LEASES              : 1

PERMISSIONS
OWNER                    : um-
GROUP                    : ---
OTHER                    : ---

VIRTUAL NETWORK TEMPLATE
BRIDGE="virbr0"
BRIDGE_TYPE="linux"
GATEWAY="192.168.122.2"
NETWORK_MASK="255.255.255.0"
OUTER_VLAN_ID=""
PHYDEV="eth0"
SECURITY_GROUPS="0"
VLAN_ID="40"
VN_MAD="802.1Q"

ADDRESS RANGE POOL
AR 0
SIZE           : 20
LEASES         : 1

RANGE                                   FIRST                               LAST
MAC                         02:00:c0:a8:7a:0a                  02:00:c0:a8:7a:1d
IP                             192.168.122.10                     192.168.122.29


LEASES
AR  OWNER                         MAC              IP                        IP6
0   V:34            02:00:c0:a8:7a:0a  192.168.122.10                          -

VIRTUAL ROUTERS
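
If I understand the 802.1Q driver right, it creates a tagged sub-interface on the physical device and enslaves it to the bridge, so the eth0.40 name below is just my assumption based on PHYDEV and VLAN_ID. On the KVM node that can be checked with:

# list bridge members - the tagged interface should show up as a port
brctl show virbr0
# inspect the VLAN sub-interface the driver should have created
ip -d link show eth0.40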

On my frontend node I configured an interface with the address 192.168.122.2, and I can ping it from the VM.
The frontend has a NAT rule to get out to the Internet, which works - I can ping 8.8.8.8 from the frontend.
I also added a NAT rule on the frontend for my virtual network to get out:

Chain POSTROUTING (policy ACCEPT 4 packets, 256 bytes)
 pkts bytes target     prot opt in     out     source               destination
   21  1509 MASQUERADE  all  --  any    eth1    10.xxx.xx.xxx/24      anywhere
    6   504 MASQUERADE  all  --  any    eth1    192.168.122.0/24     anywhere
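
For completeness, rules like those can be created along these lines (eth1 is my Internet-facing interface; eth2 is just a placeholder name for whatever interface carries the 192.168.122.2 address):

# address on the frontend that the VMs use as their gateway
ip addr add 192.168.122.2/24 dev eth2
# masquerade VM traffic out the Internet-facing interface
iptables -t nat -A POSTROUTING -s 192.168.122.0/24 -o eth1 -j MASQUERADE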

From my VM I can ping the 192.168.122.2 address on the frontend, but I can't get out.
On the VM I set the default gateway to 192.168.122.2. On the KVM node I can see the VM's echo requests to 8.8.8.8 (shown reverse-resolved as dns.google) on virbr0, along with some ARP traffic:

[root@hpc-ctest1 ~]# tcpdump -i virbr0 -v "icmp or arp"
tcpdump: listening on virbr0, link-type EN10MB (Ethernet), capture size 262144 bytes
20:34:38.706377 IP (tos 0x0, ttl 64, id 29579, offset 0, flags [DF], proto ICMP (1), length 84)
    192.168.122.10 > dns.google: ICMP echo request, id 1550, seq 1, length 64
20:34:39.707206 IP (tos 0x0, ttl 64, id 29882, offset 0, flags [DF], proto ICMP (1), length 84)
    192.168.122.10 > dns.google: ICMP echo request, id 1550, seq 2, length 64
20:34:43.718741 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has hpc-ctest1.hpc tell 192.168.122.2, length 46
20:34:43.718753 ARP, Ethernet (len 6), IPv4 (len 4), Reply hpc-ctest1.hpc is-at 00:21:28:84:02:08 (oui Unknown), length 28
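
For the record, the default route on the VM was set along these lines (the exact method depends on the guest's network scripts):

# point the VM's default route at the frontend address
ip route replace default via 192.168.122.2
# confirm it took
ip route show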

The bridge on the KVM node is learning MACs:

[root@hpc-kvm1 ~]# brctl showmacs virbr0
port no mac addr                is local?       ageing timer
  2     00:21:28:84:02:08       yes                0.00
  2     00:25:03:1b:f9:22       no                 0.79
  1     52:54:00:5f:2a:07       yes                0.00
  ...
  ...
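
(In that output, "is local? yes" marks the bridge's own port addresses and "no" entries are learned from traffic. The iproute2 equivalent, if you prefer it, is:)

# dump the bridge's forwarding database
bridge fdb show br virbr0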

On the frontend, if I tcpdump on the interface with the 192.168.122.2 address, I see ICMP packets
coming in from the VM - but only when I'm pinging the frontend itself. When I ping 8.8.8.8 I don't see anything from the VM.
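
In hindsight, a useful check at this point would have been to watch the physical device for tagged frames, to see whether the pings ever left the KVM node on VLAN 40 at all - something like:

# on the KVM node: look for the VM's ICMP inside 802.1Q-tagged frames
tcpdump -i eth0 -e -n vlan 40 and icmp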

My KVM node has the default iptables rules - I haven't made any security group customizations in OpenNebula:

[root@hpc-kvm1 ~]# iptables -t nat -L -n -v
Chain PREROUTING (policy ACCEPT 176 packets, 12184 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain INPUT (policy ACCEPT 150 packets, 10096 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 693 packets, 429K bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain POSTROUTING (policy ACCEPT 703 packets, 430K bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 RETURN     all  --  *      *       192.168.122.0/24     224.0.0.0/24
    0     0 RETURN     all  --  *      *       192.168.122.0/24     255.255.255.255
    0     0 MASQUERADE  tcp  --  *      *       192.168.122.0/24    !192.168.122.0/24     masq ports: 1024-65535
    0     0 MASQUERADE  udp  --  *      *       192.168.122.0/24    !192.168.122.0/24     masq ports: 1024-65535
   16  1344 MASQUERADE  all  --  *      *       192.168.122.0/24    !192.168.122.0/24
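
(Those port-range MASQUERADE rules on 192.168.122.0/24 look like the ones libvirt installs for its default NAT network on virbr0, which overlaps with the addressing I picked. What that network defines can be checked with:)

# show libvirt's default network definition (bridge, gateway, NAT mode)
virsh net-dumpxml default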

My thinking is that with the 192.168.122.2 address on the frontend set as the VM's default gateway, and a NAT rule allowing traffic from the virtual network, I should be able to route out. On both the frontend and the KVM host I have:

net.ipv4.ip_forward = 1
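
That was checked and set with the usual sysctl invocations, roughly:

# check the current value
sysctl net.ipv4.ip_forward
# enable forwarding for the running kernel
sysctl -w net.ipv4.ip_forward=1
# persist the setting across reboots
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf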

Maybe I'm just going about this all wrong and my configuration is silly - can anyone spot a mistake in my routing or iptables, or suggest a better way to have my VMs route out in OpenNebula? Thanks a lot.

Never mind, I got it to work. I had to set the VM's default gateway to the virtual network's 192.168.122.1 address, and get rid of the VLAN settings and the 192.168.122.x interface on the frontend. The working template:

VIRTUAL NETWORK TEMPLATE
BRIDGE="virbr0"
BRIDGE_TYPE="linux"
DNS="128.113.222.111"
GATEWAY="192.168.122.1"
NETWORK_ADDRESS="192.168.122.0"
NETWORK_MASK="255.255.255.0"
OUTER_VLAN_ID=""
PHYDEV=""
SECURITY_GROUPS="0"
VLAN_ID=""
VN_MAD="bridge"

Is this documented anywhere?