I’m wondering how many of the deployments in the survey (http://opennebula.systems/resources/survey/) use Open vSwitch as their virtual switching / bridging solution. It’s great that OpenNebula has VXLAN support, but it forces the use of the 802.1Q network drivers. Are those still in common use? Would VXLAN support for the Open vSwitch driver be difficult to implement? Is this something that might be addressed in the BEACON project? As far as I can see, Open vSwitch is, or will become, the dominant virtual switch provider. Speaking of network virtualization encapsulation solutions: are there any plans to support GENEVE (http://tools.ietf.org/html/draft-gross-geneve-01 / http://blogs.vmware.com/cto/geneve-vxlan-network-virtualization-encapsulations/)?
There is no dependency between the VXLAN and 802.1Q drivers, apart from both using the kernel network stack and the iproute2 tools.
VXLAN support could fairly easily be implemented in the Open vSwitch driver, but the Linux kernel implementation provides two important features: iptables support, and with it security groups and advanced filtering; and it obviously avoids adding another dependency.
I’m not really sure about Open vSwitch replacing the bridge functionality in the kernel, or the VLAN/encapsulation technologies (802.1Q, VXLAN or GRE-tap). AFAIK there are development efforts in other cloud platforms to use the kernel VXLAN implementation (and the bridge FDB table features…)
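For reference, the kernel VXLAN implementation mentioned above is driven entirely with iproute2; a minimal sketch (the interface names, VNI and remote VTEP address are made up for illustration):

```shell
# Create a VXLAN device with VNI 100 on top of eth0,
# using the IANA-assigned UDP port 4789
ip link add vxlan100 type vxlan id 100 dev eth0 dstport 4789

# Attach it to a regular Linux bridge, next to the VM tap devices
ip link add br100 type bridge
ip link set vxlan100 master br100
ip link set vxlan100 up
ip link set br100 up

# Use the bridge FDB for unicast forwarding to a remote VTEP
# (instead of relying on multicast learning)
bridge fdb append 00:00:00:00:00:00 dev vxlan100 dst 192.0.2.10
```

Because the traffic passes through a normal kernel bridge, the usual iptables/ebtables hooks apply, which is what makes security groups possible with this driver.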
We’ll probably see enhanced support for NFVs coming from BEACON…
There are some opportunities to integrate OpenNebula with Open Virtual Network (from the Open vSwitch team). AFAIK it uses GENEVE as its encapsulation and is now under heavy development.
Do you have any experience with this? Are you interested in this integration to create virtual networks (as opposed to using the current VXLAN or 802.1Q drivers)?
> There are some opportunities to integrate OpenNebula with Open Virtual
> Network (from the Open vSwitch team). AFAIK it uses GENEVE as its
> encapsulation and is now under heavy development.
I read their design document a few months ago, and it looks really promising. Most common use cases will be possible.
> Do you have any experience with this?
OVS can talk most tunnel protocols (VXLAN, GRE, STT, GENEVE): only the
"type" is changed when creating a tunnel. VXLAN hardware support is
already integrated in silicon in most recent network hardware. We will
probably be using VXLAN in the future, to transport customer (V)LANs.
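To illustrate the point that only the "type" changes, here is a sketch of creating tunnel ports with ovs-vsctl (the bridge name and remote IP are invented for the example):

```shell
# A VXLAN tunnel port on bridge br-int to a remote hypervisor
ovs-vsctl add-port br-int vx0 -- \
    set interface vx0 type=vxlan options:remote_ip=192.0.2.20

# Switching encapsulations is just a matter of changing the type
ovs-vsctl add-port br-int gn0 -- \
    set interface gn0 type=geneve options:remote_ip=192.0.2.20
```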
> Are you interested in this integration to create virtual networks (as
> opposed to using the current VXLAN or 802.1Q drivers)?
For sure! We like Open vSwitch a lot. And it will make the life of OpenNebula
a lot easier too; quoting the Open Virtual Network (OVN) Proposed Architecture:
“From another viewpoint, the LN (logical network) is a slave of the cloud
management system running northbound of OVN. That CMS determines the entire
OVN logical configuration and therefore the LN’s content at any given time is a
deterministic function of the CMS’s configuration. From that viewpoint, it
might be necessary only to have a single master (the CMS) provide atomic
changes to the LN. Even durability may not be important, since the CMS can
always provide a replacement snapshot.”
"Cloud Management System
OVN requires integration with the cloud management system in use. We
will write a plugin to integrate OVN into OpenStack. The primary job
of the plugin is to translate the CMS configuration, which forms the
northbound API, into logical datapath flows in the OVN LN database.
The CMS plugin may also update the PN.
A significant amount of the code to translate the CMS configuration
into logical datapath flows may be independent of the CMS in use. It
should be possible to reuse this code from one CMS plugin to another."
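For OpenNebula, that northbound translation would boil down to calls against OVN's northbound database, along these lines (the switch and port names are invented; `ovn-nbctl` is the northbound CLI from the OVN tree, and the details may shift while the project is under heavy development):

```shell
# Create a logical switch for an OpenNebula virtual network
ovn-nbctl ls-add one-vnet-42

# Add a logical port for a VM NIC and bind its MAC/IP
ovn-nbctl lsp-add one-vnet-42 vm7-nic0
ovn-nbctl lsp-set-addresses vm7-nic0 "02:00:0a:00:00:07 10.0.0.7"
```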
Another thing I like about it is the "local controller / distributed"
setup: there is no need for an SDN controller. Although an SDN controller
could still be complementary to the setup later on, as OVN uses OVSDB, just
like an SDN controller would.