Hello,
Previously I had bridging working without a problem, but after setting up an 802.1Q VLAN network following the deployment guide, the VMs attached to that vNet are not reachable from any host inside or outside our OpenNebula environment (even though they get their network configuration set up without any trouble).
I looked for existing posts about this problem but could not find any (someone had similar issues, but theirs turned out to be CentOS not handling their LACP bond setup well…).
Our environment:
- OpenNebula 5.4.1
- Ubuntu 16.04 LTS
- 802.1Q module is loaded on the KVM host:
root@xxxxxx:~# lsmod | grep 802
8021q 32768 0
garp 16384 1 8021q
mrp 20480 1 8021q
- vNet configuration:
BRIDGE = "onebr10"
DNS = "XX.XX.XX.XX"
FILTER_IP_SPOOFING = "YES"
FILTER_MAC_SPOOFING = "YES"
GATEWAY = "XX.XX.XX.XX"
GUEST_MTU = "1500"
MTU = "1500"
NETWORK_ADDRESS = "XX.XX.XX.XX"
NETWORK_MASK = "XX.XX.XX.XX"
PHYDEV = "bond0"
SECURITY_GROUPS = "0"
VLAN_ID = "481"
VN_MAD = "802.1Q"
"bond0" is an LACP aggregation that has been tested separately and works without problems (a further connectivity check I can run on top of it is sketched after this list).
- VM creation log:
Wed Jan 31 12:17:38 2018 [Z0][VM][I]: New state is ACTIVE
Wed Jan 31 12:17:38 2018 [Z0][VM][I]: New LCM state is PROLOG
Wed Jan 31 12:17:43 2018 [Z0][VM][I]: New LCM state is BOOT
Wed Jan 31 12:17:43 2018 [Z0][VMM][I]: Generating deployment file: /var/lib/one/vms/89/deployment.0
Wed Jan 31 12:17:43 2018 [Z0][VMM][I]: Successfully execute transfer manager driver operation: tm_context.
Wed Jan 31 12:17:44 2018 [Z0][VMM][I]: pre: Executed "sudo brctl addbr onebr10".
Wed Jan 31 12:17:44 2018 [Z0][VMM][I]: pre: Executed "sudo ip link set onebr10 up".
Wed Jan 31 12:17:44 2018 [Z0][VMM][I]: pre: Executed "sudo ip link add link bond0 name bond0.481 mtu 1500 type vlan id 481 ".
Wed Jan 31 12:17:44 2018 [Z0][VMM][I]: pre: Executed "sudo ip link set bond0.481 up".
Wed Jan 31 12:17:44 2018 [Z0][VMM][I]: pre: Executed "sudo brctl addif onebr10 bond0.481".
Wed Jan 31 12:17:44 2018 [Z0][VMM][I]: ExitCode: 0
Wed Jan 31 12:17:44 2018 [Z0][VMM][I]: Successfully execute network driver operation: pre.
Wed Jan 31 12:17:46 2018 [Z0][VMM][I]: ExitCode: 0
Wed Jan 31 12:17:46 2018 [Z0][VMM][I]: Successfully execute virtualization driver operation: deploy.
Wed Jan 31 12:17:46 2018 [Z0][VMM][I]: ExitCode: 0
Wed Jan 31 12:17:46 2018 [Z0][VMM][I]: Successfully execute network driver operation: post.
Wed Jan 31 12:17:46 2018 [Z0][VM][I]: New LCM state is RUNNING
- Dynamic bridge created by OpenNebula on the KVM host (see the verification commands after this list):
root@xxxxxx:~# brctl show
bridge name     bridge id           STP enabled     interfaces
onebr10         8000.6cae8b1edf1a   no              bond0.481
                                                    one-89-0
virbr0          8000.52540079c46e   yes             virbr0-nic
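For reference, here is roughly how the host-side plumbing can be double-checked (plain iproute2/tcpdump commands; the interface names and VLAN ID are the ones from my setup above):

ip -d link show bond0.481
# should report: vlan protocol 802.1Q id 481
tcpdump -e -n -i bond0 vlan 481 and icmp
# run while pinging the gateway from a VM on the vNet, to see whether
# tagged frames actually leave and come back on the physical bond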
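And to isolate whether the problem sits on the switch/trunk side or on the OpenNebula side, a quick test that can be run directly on the host (the /24 prefix and the spare address XX.XX.XX.YY are placeholders; the gateway is the one from the vNet template):

ip addr add XX.XX.XX.YY/24 dev onebr10
# temporarily give the bridge an address in the VLAN subnet
ping -c 3 XX.XX.XX.XX
# ping the vNet gateway from the host; this exercises onebr10 -> bond0.481 -> bond0 -> switch trunk
ip addr del XX.XX.XX.YY/24 dev onebr10
# clean up afterwards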
Could anybody point out if I am missing anything, please?
Although the documentation says that I don't have to do any extra configuration on the KVM host, should I actually do something else, like enabling 802.1Q on the "bond0" interface?
Thanks a lot,
Alex