802.1q setup to bridge interface

Hi Guys,

First post on here but looking for some help really.
Currently running OpenNebula, freshly installed on Ubuntu 16.04.
Everything works fine using Linux Bridges.

I have been trying to set up 802.1q virtual networks but can’t seem to get tagged traffic across the trunk port.

Setup is as follows:

OpenNebula host -> Fortigate firewall.

dot1q tag terminated on the Fortigate as a sub-interface on the internal port.

Format: 10.100.VLAN.254/24

Looking to get the following

VM -> VLAN -> Host -> Fortigate

So the tagged traffic is carried from the VM up to the firewall, where its gateway will be; we have set the gateway etc. in the contextualisation part of the network.
This is early days and there will be switch infrastructure in place eventually, but to get it working we have labbed this up simply. We want the gateway to sit on the firewall so we can firewall access between networks and shared services.

We know that 802.1q tagging works from the host to the firewall: if we create the VLAN interface manually in Linux, we can then ping from the host to the firewall.
The issue we are seeing is that when setting up an 802.1q network in OpenNebula, the traffic does not reach the firewall and the VM cannot get out.
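For reference, the manual host-side check looks roughly like this. It is only a sketch: VLAN 12 and the host address are assumptions following the 10.100.VLAN.254/24 scheme above, and the commands are printed rather than executed so they can be reviewed and then run as root.

```shell
# Manual 802.1q check from the host to the Fortigate (sketch).
# VLAN 12 and the host IP are assumptions; the gateway follows the
# 10.100.VLAN.254/24 scheme described above.
VLAN=12
VIF="br0.${VLAN}"                 # tagged interface on top of br0
HOST_IP="10.100.${VLAN}.1/24"     # any free address in the subnet
GATEWAY="10.100.${VLAN}.254"      # Fortigate sub-interface for this VLAN

# Print the commands so they can be reviewed, then run them as root.
echo "ip link add link br0 name ${VIF} type vlan id ${VLAN}"
echo "ip link set ${VIF} up"
echo "ip addr add ${HOST_IP} dev ${VIF}"
echo "ping -c 3 ${GATEWAY}"
```

If the ping to the sub-interface address succeeds, tagging between the host and the firewall is known good and the problem is on the OpenNebula side.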

Interface config

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

auto lo
iface lo inet loopback

auto br0
iface br0 inet dhcp
bridge_ports eth0
bridge_fd 9
bridge_hello 2
bridge_maxage 12
bridge_stp off

auto br0.12
iface br0.12 inet dhcp
vlan-raw-device br0

** The br0.12 is a manually created VLAN interface set to DHCP for testing; it receives an IP address and connectivity is fine **

I have tried the following, which does boot the VM and creates the br0.vlan:

When creating the virtual networking, leaving the bridge blank and setting the physical device as br0.

Any help would be appreciated

Hello David,

I also spent “some” time getting this working for us and ended up setting up our KVM servers as listed below.
I can’t tell for sure if this is the best solution (performance-wise), as I still think I end up with too many layers of network abstraction, but so far it is going well…

In your case, you should replace “bond0” with “eth0”, as we’re using LACP port aggregation for high availability. (This is Ubuntu 16.04 LTS, by the way.)


# eno2 configuration

auto eno2
iface eno2 inet manual
bond-master bond0

# eno3 configuration

auto eno3
iface eno3 inet manual
bond-master bond0

# bond0 configuration
# It uses the standard IEEE 802.3ad LACP bonding protocol

auto bond0
iface bond0 inet manual
bond-slaves eno2 eno3
bond-mode 4
bond-miimon 100
bond-lacp-rate fast
bond-downdelay 0
bond-updelay 0
bond-xmit_hash_policy 1

# local interface for vlan 481

auto bond0.481
iface bond0.481 inet manual
vlan-raw-device bond0

# bridge for vlan 481

auto br481
iface br481 inet static
bridge_ports bond0.481
address xxx.yyy.zzz.ccc
network xxx.yyy.zzz.0
broadcast xxx.yyy.zzz.255
gateway xxx.yyy.zzz.254
dns-nameservers xxx.yyy.zzz.bbb
dns-search example.com
bridge_hello 2
bridge_maxage 12
bridge_stp off
bridge_fd 9

Bottom line, I created a tagged virtual interface on top of the physical interface and then associated the tagged virtual interface to the bridge.

Then, on OpenNebula, I created an 802.1q vNet where I set the bridge to “br481” and did not specify a physical device for it.
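For illustration, that vNet could be sketched as a template like the one below (the network name is hypothetical; the bridge and VLAN id follow the config above):

```
NAME    = "vlan481"      # hypothetical name
VN_MAD  = "802.1Q"
BRIDGE  = "br481"        # the pre-created bridge from the config above
VLAN_ID = "481"
```

Since the bridge and the tagged interface already exist on the host, OpenNebula only has to attach the VM’s interface to br481.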

Hope this helps,


Hi Alex

Thanks for the response. I ended up using this approach initially as the 802.1q method wasn’t working; it works perfectly, but the network interfaces config file ends up huge.

I managed to get the 802.1q method working; the trick was not creating the VLANs first on the host.

Then, when creating a virtual network, create it as 802.1q and just add the physical interface as br0; leaving the bridge blank meant it was named one-something.vlan.

It created a bridge as br0.vlan and the VM’s virtual interface was bridged to this.

I will put the config in here once I get back to the office tomorrow, as I also managed to script it and bulk-create batches of VLANs.
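Such bulk creation could look roughly like the sketch below. The VLAN range and names here are hypothetical, the addressing follows the 10.100.VLAN.254/24 scheme from the first post, and `onevnet create` is only invoked when the CLI is actually present:

```shell
# Bulk-create 802.1q vNets (sketch; VLAN range and names are hypothetical).
OUTDIR="$(mktemp -d)"
for VLAN in 10 11 12; do
  TPL="${OUTDIR}/vlan${VLAN}.tpl"
  # Write one template per VLAN, following the 10.100.VLAN.254/24 scheme.
  cat > "${TPL}" <<EOF
NAME    = "vlan${VLAN}"
VN_MAD  = "802.1Q"
PHYDEV  = "br0"
VLAN_ID = "${VLAN}"
GATEWAY = "10.100.${VLAN}.254"
AR      = [ TYPE = "IP4", IP = "10.100.${VLAN}.10", SIZE = "200" ]
EOF
  # Only call the OpenNebula CLI if it is installed on this machine.
  if command -v onevnet >/dev/null 2>&1; then
    onevnet create "${TPL}"
  fi
done
```

The bridge is left out of the templates on purpose, matching the working setup above: OpenNebula creates the br0.vlan bridge itself at deployment time.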

Hello, you can just create bond0 (or use just eth0), and in the OpenNebula network config select 802.1q and set the physical interface to bond0 or eth0… OpenNebula automatically executes these commands:

pre: Executed "sudo brctl addbr onebr.103".
pre: Executed "sudo ip link set onebr.103 up".
pre: Executed "sudo ip link add link team0 name team0.103 mtu 1500 type vlan id 103 ".
pre: Executed "sudo ip link set team0.103 up".
pre: Executed "sudo brctl addif onebr.103 team0.103".

In my case I use team0.

You can also configure custom ip link options.
On some networks I use IP_LINK_CONF = "gvrp=on" to configure the new VLAN automatically on the switch.
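As a sketch, that attribute sits in the virtual network template alongside the usual 802.1q settings (the network name is hypothetical; team0 and VLAN 103 are taken from the log above):

```
NAME         = "vlan103"    # hypothetical name
VN_MAD       = "802.1Q"
PHYDEV       = "team0"
VLAN_ID      = "103"
IP_LINK_CONF = "gvrp=on"    # extra options passed to "ip link" when the VLAN interface is created
```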