First post on here, but I'm looking for some help really.
I'm currently running OpenNebula, freshly installed on Ubuntu 16.04.
Everything works fine using Linux Bridges.
I have been trying to set up 802.1Q virtual networks but can't seem to get traffic across the tagged port.
Setup is as follows:
OpenNebula host -> Fortigate firewall.
The dot1q tag is terminated on the Fortigate as a sub-interface on the internal interface.
Looking to get the following:
VM -> VLAN -> Host -> Fortigate
So the tagged traffic is carried from the VM up to the firewall, where its gateway will be; we have set the gateway etc. in the contextualisation part of the network.
This is early days and there will be switch infrastructure in place eventually; however, to get it working we have labbed this up simply. We want the gateway to sit on the firewall so we can firewall access between networks and shared services.
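For reference, the gateway/DNS we hand to the VMs via contextualisation are just attributes on the virtual network, something like this (the addresses are placeholders, not our real ones):

```
GATEWAY      = "10.0.12.1"
DNS          = "10.0.12.1"
NETWORK_MASK = "255.255.255.0"
```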
We know that 802.1Q tagging works from the host to the firewall: if we create this manually in Linux, we can then ping from the host to the firewall etc.
The issue we are seeing is that when setting up an 802.1Q network in OpenNebula, the traffic does not reach the firewall and the VM cannot get out.
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
iface lo inet loopback
iface br0 inet dhcp
iface br0.12 inet dhcp
** The br0.12 is a manually created VLAN interface set to DHCP for testing; it receives an IP address and connectivity is fine **
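(The manual test amounts to something like this, run as root; br0 and VLAN 12 as above, and the Fortigate address is a placeholder:)

```
ip link add link br0 name br0.12 type vlan id 12
ip link set br0.12 up
dhclient br0.12          # pulls an address on the tagged interface
ping <fortigate-vlan12-address>
```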
I have tried the following, which does boot the VM and creates the br0.vlan:
when creating the virtual network, leave the bridge blank and set the physical device as br0.
Any help would be appreciated
I also spent “some” time getting this working for us and ended up setting up our KVM servers as listed below.
I can't tell for sure if this is the best solution (performance-wise), as I still think I end up with too many layers of network abstraction, but so far it's going well…
In your case, you should replace “bond0” with “eth0”, as we're using LACP port aggregation for high availability. (This is Ubuntu 16.04 LTS, by the way.)
iface eno2 inet manual
iface eno3 inet manual

# Uses the standard IEEE 802.3ad LACP bonding protocol
iface bond0 inet manual
    bond-slaves eno2 eno3
    bond-mode 802.3ad

# local interface for vlan 481
iface bond0.481 inet static

# bridge for vlan 481
iface br481 inet static
    bridge_ports bond0.481
Bottom line, I created a tagged virtual interface on top of the physical interface and then associated the tagged virtual interface to the bridge.
Then, on OpenNebula, I created a 802.1q vNet where I set the bridge to “br481” and did not specify a physical device for it.
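The matching vNet template would look something like this (the name is made up; the key point is that BRIDGE is set and PHYDEV is not, so OpenNebula just attaches VMs to the pre-built bridge):

```
NAME    = "vlan481"
VN_MAD  = "802.1Q"
BRIDGE  = "br481"
VLAN_ID = 481
```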
Hope this helps,
Thanks for the response. I ended up using this approach initially, as the 802.1Q method wasn't working; it works perfectly, however the network interfaces config file ends up huge.
I managed to get the 802.1Q method working: the trick was not to create the VLANs on the host first.
Then, when creating a virtual network, create it as 802.1Q and just set the physical interface to br0; leaving the bridge blank meant it was called one-something.vlan.
It created a bridge as br0.vlan and bridged the VM's virtual interface to this.
I will put the config on here once I get back in the office tomorrow, as I also managed to script it and bulk-create batches of VLANs.
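It boils down to generating one small template per VLAN and feeding each one to onevnet. A minimal sketch of the idea (not our exact script; the VLAN range, paths and names are just examples, and the onevnet call is left commented out so you can review the templates first):

```shell
#!/bin/sh
# Bulk-generate OpenNebula 802.1Q vNet templates for a range of VLAN IDs.
OUTDIR="${OUTDIR:-./vnet-templates}"
PHYDEV="${PHYDEV:-br0}"
mkdir -p "$OUTDIR"

for VLAN in $(seq 10 12); do
    # one template file per VLAN
    cat > "$OUTDIR/vlan${VLAN}.tmpl" <<EOF
NAME    = "vlan${VLAN}"
VN_MAD  = "802.1Q"
PHYDEV  = "${PHYDEV}"
VLAN_ID = ${VLAN}
EOF
    # then, on the frontend:
    # onevnet create "$OUTDIR/vlan${VLAN}.tmpl"
done
```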
Hello, you can just create bond0 (or use just eth0), and in the OpenNebula network config select 802.1Q and set the physical interface to bond0 or eth0… OpenNebula automatically executes these commands:
pre: Executed "sudo brctl addbr onebr.103".
pre: Executed "sudo ip link set onebr.103 up".
pre: Executed "sudo ip link add link team0 name team0.103 mtu 1500 type vlan id 103 ".
pre: Executed "sudo ip link set team0.103 up".
pre: Executed "sudo brctl addif onebr.103 team0.103".
In my case I use team0.
You can also configure custom ip link options.
On some networks I use
IP_LINK_CONF = "gvrp=on" to have new VLANs configured automatically on the switch.
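In the vNet template that's just one extra attribute, e.g. (team0 and the VLAN ID are examples from my setup, not required values):

```
VN_MAD       = "802.1Q"
PHYDEV       = "team0"
VLAN_ID      = 103
IP_LINK_CONF = "gvrp=on"
```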