CentOS 7 + VXLAN

Does anyone have experience with ONE + CentOS + VXLAN?
I currently use Open vSwitch but have a task to migrate to VXLAN. Whenever I try to add a VXLAN network I get the error "Can't add port to bridge".
Maybe someone can write a small how-to?

Could you paste the log from the VM creation here - the part where the network was created?

Now the VM can start without errors (I forgot to add PHYDEV), but the network is not working. I can't find any docs about VXLAN on CentOS 7 with KVM. On the test servers I can only create a VXLAN network between the physical servers.

What I imagine is that on CentOS 7 you have to disable firewalld, because OpenNebula uses iptables and adds its own forward rules, and it also enables the bridge netfilter. You also have to enable IPv4 forwarding in sysctl.
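A minimal sketch of that setup on a stock CentOS 7 host (whether you can get away with a tailored firewalld policy instead of disabling it entirely depends on your environment):

```shell
# Disable firewalld so OpenNebula can manage iptables rules itself
systemctl disable --now firewalld

# Enable IPv4 forwarding and make it persistent across reboots
echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/90-one-forward.conf
sysctl -p /etc/sysctl.d/90-one-forward.conf

# Make sure the bridge netfilter module is available
modprobe br_netfilter
sysctl net.bridge.bridge-nf-call-iptables
```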

I’m using a modified OVS driver that supports VXLAN for communication between hypervisors. Works fine. It’s a kind of merge between the OVS and VXLAN drivers, so each VNI becomes a VLAN domain. But since it’s built on top of the OVS driver you won’t have security groups (I don’t know whether you have them with the VXLAN driver, though).

Can you provide your ifconfig, brctl show and ovs-vsctl show output?

Hi Anton,

Here is what is important:

Each VNI (VXLAN) has its own OVS bridge (brvx$VNI), with an interface vx$VNI as a port used for connecting to the other members of this VXLAN on other hypervisors:

Bridge "brvx20208"
        Port "one-4024-5"
            tag: 208
            Interface "one-4024-5"
        Port "one-4024-1"
            tag: 208
            Interface "one-4024-1"
        Port "one-4024-4"
            tag: 208
            Interface "one-4024-4"
        Port "brvx20208"
            Interface "brvx20208"
                type: internal
        Port "vx20208"
            Interface "vx20208"
        Port "one-4024-3"
            tag: 208
            Interface "one-4024-3"
        Port "one-4024-2"
            tag: 208
            Interface "one-4024-2"

[root] # ip -d l show dev vx20208
447: vx20208: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8950 qdisc noqueue master ovs-system state UNKNOWN mode DEFAULT qlen 1000
    link/ether 0e:c5:c1:f0:e7:7c brd ff:ff:ff:ff:ff:ff promiscuity 1 
    vxlan id 20208 group dev vlan4060 srcport 0 0 dstport 8472 ttl 16 ageing 300 
    openvswitch_slave addrgenmode eui64

Each vx interface uses a multicast group. The multicast group IP is calculated with a portion of code taken from the ONE VXLAN driver.
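As a rough sketch of that calculation: the OpenNebula VXLAN driver derives the group address by adding the VNI as an offset to a configurable base multicast address (239.0.0.0 is the documented default; treat the base used here as an assumption about your config):

```python
import ipaddress

def vxlan_mc_group(vni: int, base: str = "239.0.0.0") -> str:
    """Derive the multicast group for a VNI by offsetting a base
    multicast address, as the ONE VXLAN driver does. The base is
    configurable in OpenNebula; 239.0.0.0 is the default."""
    return str(ipaddress.IPv4Address(base) + vni)

print(vxlan_mc_group(20208))  # -> 239.0.78.240
```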

Sorry, I don’t have ifconfig or brctl output.

So if you use a ONE VNET with the same VNI and VLAN, the VMs can communicate.
Hope it helps.

Hello Edouard,
What is the vx20208 interface?

vx20208 is the VXLAN interface that is given to the OVS bridge, allowing Ethernet frames to be encapsulated inside the VXLAN. It’s like an uplink on a switch, but this one uses VXLAN to communicate with other switches (i.e. other OVS bridges on other hypervisors).
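For reference, a rough sketch of how such an interface could be created by hand. The names mirror the output above; the multicast group address is a placeholder (in practice the driver computes it from the VNI):

```shell
# Create the VXLAN interface; id/dev/dstport/ttl mirror the `ip -d l`
# output above (the group address here is a placeholder example)
ip link add vx20208 type vxlan id 20208 \
    group 239.0.78.240 dev vlan4060 dstport 8472 ttl 16

# Create the per-VNI OVS bridge and attach the VXLAN interface as a port
ovs-vsctl add-br brvx20208
ovs-vsctl add-port brvx20208 vx20208
ip link set vx20208 up
```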

Thanks, but it did not help me. I probably have too little experience with VXLAN. :slight_smile:

Now I have a working VXLAN network, but only on one node. If I migrate the VM to another node, there is no network.
What’s wrong?

Problem resolved (wrong network settings). Now everything works great.
Thanks to all!
Merry Christmas and Happy New Year!