Open vSwitch + KVM + Single NIC

I am running KVM hosts that use Open vSwitch for networking, and I am having trouble giving VMs public network access through it. My second NIC currently has no cable plugged in, so I cannot simply move my only public network connection onto a bridge without cutting myself off from the host.

Here are the steps that I took to set up the public network:

Create /etc/sysconfig/network-scripts/ifcfg-br0

DEVICE=br0
ONBOOT=yes
TYPE=OVSBridge
DEVICETYPE=ovs
BOOTPROTO=static
HOTPLUG=no
IPADDR=x.x.x.x
NETMASK=255.255.255.0
MTU=1500

Bring up the interface with ifup br0 (OK)

Check OVS with ovs-vsctl show:

Bridge "br0"
    Port "br0"
        Interface "br0"
            type: internal
ovs_version: "2.5.1"

I then create a virtual network in OpenNebula using the Open vSwitch (ovswitch) driver; a sketch of the template is below.
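
For reference, the vnet template looked roughly like this (a sketch; the AR addresses are placeholders):

BRIDGE = "br0"
VN_MAD = "ovswitch"
AR     = [ TYPE = "IP4", IP = "x.x.x.10", SIZE = "10" ]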

After the VM is created, I checked the output of ip a (screenshot not reproduced here).

The VM cannot ping the gateway. However, the host and the controller node can. Neither the host nor the controller node can ping the VM IP.

P.S. Repeating the same process for a private IP (I used 10.17.0.1) allows the hosts to SSH to the VM. I followed the answer from this post.

It looks like you need to add the physical interface that is wired up as a bridge slave. I am not sure of the syntax for the ifcfg file, but you want something like this in the end:

ovs-vsctl show
ebc2ef44-4a77-4c77-a4fc-a170c2769d65
    Bridge "ovsbr0"
        Port "enp3s0f0"
            Interface "enp3s0f0"
        Port "ovsbr0"
            Interface "ovsbr0"
                type: internal
    ovs_version: "2.5.0"

 ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp3s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::225:90ff:fe13:8ce6/64 scope link
       valid_lft forever preferred_lft forever
...
4: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 7e:dc:40:3c:12:ad brd ff:ff:ff:ff:ff:ff
5: ovsbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
    inet 10.1.1.2/24 brd 10.1.1.255 scope global dynamic ovsbr0
       valid_lft 57848sec preferred_lft 57848sec
    inet6 fe80::225:90ff:fe13:8ce6/64 scope link
       valid_lft forever preferred_lft forever
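
For a quick test you can also add the slave by hand (enp3s0f0 is the NIC from my output above):

ovs-vsctl add-port ovsbr0 enp3s0f0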

Two things about this.

  1. Since I have a single NIC for use here, I would lose connectivity if I were to use my public interface as the bridge slave. Is there any way around this that you know of?

  2. I will certainly try this suggestion again. However, I have tried it before. I lose network connectivity on the host, as is expected, but I do not gain external network connectivity on the VM. The only thing that changes is that the hosts are able to ping the VM IP. I expect that this means that using the public interface as the bridge slave is the correct configuration, but it still doesn’t completely solve the problem.

If, as in my example, you put the IP address on the bridge interface, then the host still gets IP connectivity to the network. I didn’t look at your screen cap that closely, but I suspect that your IP is still on ens3, because you didn’t specify any bridge slaves in the ifcfg file.

I use Open vSwitch to do exactly what you want, but I don’t use those ifcfg files, so I’m not exactly sure what the syntax needs to be. As a test, though: flush the IP off of ens3, add ens3 to the bridge, and then add the IP to br0. That should work for you.
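
Something like this, run from the console rather than over SSH, since the host will drop off the network partway through (x.x.x.2/24 and x.x.x.1 stand in for your address and gateway):

ip addr flush dev ens3
ovs-vsctl add-port br0 ens3
ip link set br0 up
ip addr add x.x.x.2/24 dev br0
ip route add default via x.x.x.1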

I now have the following configuration:

/etc/sysconfig/network-scripts/ifcfg-br0

DEVICE=br0
ONBOOT=yes
TYPE=OVSBridge
DEVICETYPE=ovs
BOOTPROTO=static
HOTPLUG=no
IPADDR=x.x.x.2
GATEWAY=x.x.x.1
NETMASK=255.255.255.0
DNS1=8.8.8.8
DNS2=8.8.4.4

/etc/sysconfig/network-scripts/ifcfg-enp6s0

TYPE=OVSPort
DEVICETYPE=ovs
BOOTPROTO=static
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
NAME=enp6s0
DEVICE=enp6s0
ONBOOT=yes
HWID=00:25:90:88:1E:EA
OVS_BRIDGE=br0

ovs-vsctl show

68f02c9b-8a56-4bbc-923f-c45a51b215eb
Bridge "br0"
    Port "br0"
        Interface "br0"
            type: internal
    Port "enp6s0"
        Interface "enp6s0"
ovs_version: "2.5.1"

Still no network connectivity. :confused:
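
(For the record, I was testing with something like this, where x.x.x.1 stands in for my gateway:)

ping -c 3 x.x.x.1
tcpdump -ni enp6s0 icmp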

Hi,

I have a KVM host with a single NIC connected to the network, and 3 OVS bridges; only one is “connected” to the Internet.
ovs-vsctl show:

Bridge "br0"
      Port "br0"
            Interface "br0"
                type: internal
        Port "one-24-0"
            Interface "one-24-0"
        Port "one-12-1"
            Interface "one-12-1"
        Port "one-22-1"
            Interface "one-22-1"
        Port "eth0"
            Interface "eth0"
        Port "one-26-0"
            Interface "one-26-0"
        Port "one-29-0"
            Interface "one-29-0"

As you can see, eth0 is present.

And my network configuration is done like:

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP qlen 1000
    link/ether 14:02:ec:43:33:98 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::1602:ecff:fe43:3398/64 scope link
       valid_lft forever preferred_lft forever
[...]
7: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 14:02:ec:43:33:98 brd ff:ff:ff:ff:ff:ff
    inet 172.16.0.1/16 brd 172.16.255.255 scope global br0
       valid_lft forever preferred_lft forever
    inet6 fe80::1602:ecff:fe43:3398/64 scope link
       valid_lft forever preferred_lft forever

And it is working like a charm :slight_smile:
I think your problem now is that your VM’s NIC is not set into the bridge, so network connectivity is lost, as (un)expected…

Try adding the VM’s port to the bridge manually (it should not be ens3, no? Something like one-xx-0?):
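
Something like this (one-24-0 is only an example name from my output above; normally OpenNebula adds the port for you):

ovs-vsctl add-port br0 one-24-0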

For reference, my ifcfg-eth0:

 DEVICE="eth0"
 ONBOOT=yes
 UUID="7bb324da-72c8-49c2-be3e-d7e4969c3bcc"
 TYPE=Ethernet
 IPV6INIT=no
 BOOTPROTO=static
 TYPE=Ethernet
 NM_CONTROLLED=no
 TYPE="OVSPort"
 DEVICETYPE="ovs"
 OVS_BRIDGE="br0"
 NOZEROCONF="yes"

and ifcfg-br0

DEVICE="br0"
BOOTPROTO="none"
IPADDR=172.16.0.1
NETMASK=255.255.0.0
DNS1=127.0.0.1
DNS2=172.16.1.254
GATEWAY=172.16.1.254
DEFROUTE="yes"
IPV6INIT=no
ONBOOT="yes"
TYPE="OVSBridge"
DEVICETYPE="ovs"
NOZEROCONF="yes"

Does your host have internet connectivity (e.g. ping google.com)?

I have pretty much copied your exact configuration, changing IP addresses of course, and I still have no external connectivity on my host. Haven’t tried creating a VM yet.

I’m beginning to think there is something else that is wrong, not just the network/ovs config…

Here is my routing table (screenshot not reproduced here). To my knowledge, this should be fine.

I understand that the scope of this question has possibly grown beyond OpenNebula, but I appreciate the help you guys are still providing!

Hi

Does eth0 (or the real physical NIC name) belong to br0?
Can you post ovs-vsctl show br0?
Can you reach your GW using arping from ANY of the present interfaces?
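
For example, something like this (assuming br0 carries the public IP; x.x.x.1 stands in for your gateway):

arping -I br0 x.x.x.1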

Nicolas

It does, yes. enp6s0 is the device on the host.

And your IP is set on enp6s0, or on br0?
Can you dump your iptables (or equivalent) settings on the local host?

Nicolas.

I have all firewalls disabled as well as SELinux disabled.

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp6s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br0 state UP qlen 1000
    link/ether 00:25:90:88:1e:ea brd ff:ff:ff:ff:ff:ff
3: enp2s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 90:e2:ba:19:f7:28 brd ff:ff:ff:ff:ff:ff
    inet 10.100.11.3/24 brd 10.100.11.255 scope global enp2s0f0
       valid_lft forever preferred_lft forever
    inet6 fe80::92e2:baff:fe19:f728/64 scope link 
       valid_lft forever preferred_lft forever
4: enp7s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 00:25:90:88:1e:eb brd ff:ff:ff:ff:ff:ff
5: enp2s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 90:e2:ba:19:f7:29 brd ff:ff:ff:ff:ff:ff
    inet 10.100.10.6/24 brd 10.100.10.255 scope global enp2s0f1
       valid_lft forever preferred_lft forever
    inet6 fe80::92e2:baff:fe19:f729/64 scope link 
       valid_lft forever preferred_lft forever
7: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN qlen 1000
    link/ether 52:54:00:1f:a9:95 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
8: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000
    link/ether 52:54:00:1f:a9:95 brd ff:ff:ff:ff:ff:ff
9: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 5e:60:5a:a6:29:7c brd ff:ff:ff:ff:ff:ff
10: ovsbr: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 56:b8:7a:45:6c:4a brd ff:ff:ff:ff:ff:ff
11: veth1@veth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovs-system state UP qlen 1000
    link/ether 9e:70:07:69:3d:a7 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::9c70:7ff:fe69:3da7/64 scope link 
       valid_lft forever preferred_lft forever
12: veth0@veth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP qlen 1000
    link/ether 26:ca:66:4a:ad:09 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::24ca:66ff:fe4a:ad09/64 scope link 
       valid_lft forever preferred_lft forever
14: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether 00:25:90:88:1e:ea brd ff:ff:ff:ff:ff:ff
    inet 104.245.107.2/24 brd 104.245.107.255 scope global br0
       valid_lft forever preferred_lft forever
    inet6 2602:ff97:0:1:225:90ff:fe88:1eea/64 scope global mngtmpaddr dynamic 
       valid_lft 2591831sec preferred_lft 604631sec
    inet6 fe80::225:90ff:fe88:1eea/64 scope link 
       valid_lft forever preferred_lft forever
25: one-86-0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovs-system state UNKNOWN qlen 1000
    link/ether fe:00:68:f5:6b:03 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc00:68ff:fef5:6b03/64 scope link 
       valid_lft forever preferred_lft forever

As you can see, there is a VM set up now.
Edit: I followed this guide: https://kashyapc.fedorapeople.org/virt/openvswitch-and-libvirt-kvm.txt
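
From that guide, VMs attach through a libvirt network that points at the OVS bridge, defined roughly like this (ovs-network is an arbitrary name I’m using here):

cat > ovs-network.xml <<'EOF'
<network>
  <name>ovs-network</name>
  <forward mode='bridge'/>
  <bridge name='br0'/>
  <virtualport type='openvswitch'/>
</network>
EOF
virsh net-define ovs-network.xml
virsh net-start ovs-network
virsh net-autostart ovs-network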

Hi nwilging

Did you ever get this resolved?

""Cheers
G

Hi

I have almost the same issue; could you help me? My ticket is OpenNebula + KVM + Open VSwitch, where I include more details…

Regards

Hi Horacio

I’ll post my config in my next reply as I’m away from my notes atm.

What worked for me was a simpler approach to the config. Do you have only 1 NIC, or multiple?

Part of the solution, after compiling OVS, was to disable firewalld and reboot the host; together with the simpler config, this got OVS running for me.
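
That is, something like:

systemctl stop firewalld
systemctl disable firewalld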

I’m using a 2-NIC solution:

1 NIC - management
1 NIC - public

The physical NIC 2 has no public access.
The management NIC 1 allows public ingress via our internal GW only.
Keeping the 2 networks separate helps isolate traffic, and in the event of NIC issues we still have management access.

Will post the config shortly.

Cheers
G

Open vSwitch Bridge

# cat ifcfg-br0

DEVICE=br0
TYPE=OVSBridge
DEVICETYPE=ovs
ONBOOT=yes

Public

# cat ifcfg-enp4s0f0

TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br0
ONBOOT=yes
NM_CONTROLLED=no
NAME=enp4s0f0
DEVICE=enp4s0f0
  • Save the 2 configs.
  • Set the OVS service to load on boot (see the sketch below).
  • Reboot.
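
A sketch of those last two steps (openvswitch is the service name on my CentOS hosts; adjust if yours differs):

systemctl enable openvswitch
systemctl reboot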

You should end up with the following after rebooting and adding a VM via Sunstone to an OVS-backed network set to manual or automatic VLAN:

# ovs-vsctl show
cc64ecf1-33db-423a-a474-ea4367473fed
    Bridge "br0"
        Port "enp4s0f0"
            Interface "enp4s0f0"
        Port "br0"
            Interface "br0"
                type: internal
        Port "one-6-0"
            tag: 10
            Interface "one-6-0"
        Port "one-5-0"
            tag: 10
            Interface "one-5-0"
    ovs_version: "2.5.5"

Something to keep in mind: if you want the VMs to reach the public internet via the same VLAN, the physical switch ports must be tagged with the appropriate VLANs and must be able to reach the public internet.
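
For reference, the tags above are set by OpenNebula from the vnet’s VLAN_ID, but they can also be managed by hand (one-6-0 is an example port from my output above):

ovs-vsctl set port one-6-0 tag=10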

Hope the above helps.

""Cheers
G

Thanks for your answer… I did the same as you, but the VMs are still not able to reach the Internet; when I ping out from a VM, it cannot even reach the GW. Could you take a look at my post, OpenNebula + KVM + Open VSwitch? I included more details…

Thanks for the advice…

Regards!
HB

Hi Horacio

I’m not an OVS expert by any stretch of the imagination and can only advise on what is working for me.

Looking at your configs for br0 and eno1, they have a lot more content in them than my posted solution.

Again, not an expert here, but if it’s currently not working, just copy and paste my config and make small changes to reflect eno1.

Reboot the host.

In Sunstone > Network > network name xxx, make sure the network is set to Open vSwitch and that there is a GW defined in the IP delegation section.

Add this to a VM and test.

Once you have it working on the clean config from the previously posted example, making changes incrementally will show you what works and what doesn’t.

Is your internet on a dedicated VLAN, or is it open and untagged?

If your internet is untagged, then adding a VLAN tag to a VM means it is isolated and won’t be able to reach the public internet.
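
As a quick test, you can clear the tag on a VM’s port (one-5-0 is just an example name from my earlier output):

ovs-vsctl remove port one-5-0 tag 10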

Maybe someone with more knowledge on the subject can chime in.

Good luck.

""Cheers
G

Hello,

Sorry for my late answer. I made the changes that you suggested. I also want to mention that I’m using just one NIC on the physical server; the OVS bridge configuration works fine, and the physical machine can reach the internet through the OVS bridge.

About your comments… this part is not completely clear to me, because from my VM I can’t reach the GW:

In Sunstone > Network > network name xxx, make sure the network is set to Open vSwitch and that there is a GW defined in the IP delegation section.

This is my VNET template:

BRIDGE = "br0"
CLUSTERS = "default"
DESCRIPTION = "network for testing purposes"
DNS = "130.10.10.1"
GATEWAY = "130.10.10.1"
NETWORK_ADDRESS = "130.10.10.0"
NETWORK_MASK = "255.255.255.0"
PHYDEV = ""
SECURITY_GROUPS = "0"
VLAN_ID = "10"
VN_MAD = "ovswitch"

As far as I understand, the GW defined there will be used in the VM context, right? But how should I set up the VNET GW?
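
For example, should the template also carry an address range, something like the following (addresses taken from my template above)?

AR = [ TYPE = "IP4", IP = "130.10.10.10", SIZE = "20" ]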

Could you show me your VNET template?

Regards!

Hi Horacio

To clarify:

Your VMs can hit the public internet, so they are traversing the GW.

If I’m not mistaken, 130.10.10.0/24 is a public IP range; please correct me if I’m wrong.

The VNET is the virtual network for your VMs; if they are hitting the internet, then I’m not understanding the problem.

Are you saying that you would like your single NIC on the host to also have internet access?

What is the actual goal you’re trying to achieve here?

1 NIC on public
VM on public via 1 NIC
Are you looking to use VLANs?
Is your physical switch configured with VLANs?

Sorry, I may not understand your question/requirements properly.

""Cheers
G