Huge security issue with host bridge interface and promiscuous mode on guest NIC?

On an 802.1Q network, I can see network packets destined for other IPs when I use Wireshark in promiscuous mode on a Linux guest. The problem may also be affecting the ebtables network driver.

My 802.1Q network uses a bond interface in balance-alb mode as PHYDEV on each host.
The NIC on the VM is using the virtio driver.
Hosts are on CentOS 7.2 with qemu-kvm-ev-2.3.0-31.el7_2.4.1.

I am trying to figure out where the problem may be: the bond interface, the VLAN interface, qemu/kvm, the iptables filter, OpenNebula!?! I am a little bit lost, and I am not very comfortable letting users use the platform in this state. :frowning:

Hi,

What kind of traffic are you seeing (unicast, multicast)? Is the traffic from different VLANs? Do you see the tag header in the ethernet frame in the guest? Have you tried to capture the traffic at the host level (what traffic is reaching the bridge? what is coming out of the VM port?)

Maybe you could save and send the Wireshark capture…
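For the host-level captures, something like the following might work (a sketch; vnet0 is an assumed tap-device name, and the commands need root):

```shell
# Capture on the bridge, keeping link-level/802.1Q info (-e) and
# matching only tagged frames.
tcpdump -i onebr.10 -e -n vlan -w bridge-capture.pcap

# Capture on the VM's tap port (vnet0 is an assumption; check
# "ip link" or "virsh domiflist <vm>" for the real name).
tcpdump -i vnet0 -e -n -w vmport-capture.pcap
```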

Cheers

Hi Ruben,

I’m seeing unicast, broadcast and multicast packets from different VLANs and from the other guests in the same VLAN.

From my guest with address 10.1.193.0 in VLAN 10 (10.1.0.0/16):
2768 585.438013697 192.168.255.12 -> 192.168.255.11 TCP 75 [TCP Previous segment not captured] 60200 > 6808 [PSH, ACK] Seq=25799 Ack=25808 Win=1539 Len=9 TSval=1817992589 TSecr=1814224157

The 192.168.255.0/24 network is my other balance-alb bond interface, for the management network (VLAN 255) between the frontends and the hosts. It is very strange to see this traffic in the guest.

I am doing a lot of testing, and switching the bridge interface from balance-alb to 802.3ad+LACP seems to make a difference: I receive far fewer stray unicast packets, but I still receive some of them.
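For reference, on CentOS 7 that mode switch is a one-line change in the bond's ifcfg file (device and file names here are assumptions, and 802.3ad requires matching LACP configuration on the switch side):

```
# /etc/sysconfig/network-scripts/ifcfg-bond0  (assumed name)
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
ONBOOT=yes
BOOTPROTO=none
# mode=802.3ad (LACP) instead of mode=balance-alb
BONDING_OPTS="mode=802.3ad lacp_rate=fast miimon=100"
```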

Should I try to use iptables to block traffic from VLANs other than 10 to the bridge interface on the host? Are ebtables rules needed?
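As an aside, iptables normally does not see bridged frames at all (only when the br_netfilter module is loaded), so bridge-level filtering would be an ebtables job. A heavily hedged sketch, using the bridge name from this thread; note that frames on onebr.10 should already have been untagged by trunk.10, so this would only catch tagged frames that leak through:

```shell
# Accept frames tagged with VLAN 10, drop any other tagged frame
# heading out of a port of the onebr.10 bridge (sketch; needs root).
ebtables -A FORWARD --logical-out onebr.10 -p 802_1Q --vlan-id 10 -j ACCEPT
ebtables -A FORWARD --logical-out onebr.10 -p 802_1Q -j DROP
```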

Can you share with us the output of brctl show on the host, and of onevnet show -x for a couple of the problematic virtual networks?

brctl show on the host: http://pastebin.com/y6nm73Wx

onevnet show for the network (10.1.0.0/16) on which I can see the 192.168.255.0 traffic: http://pastebin.com/gGNureru

Also, is it normal that the ebtables filter lists are empty?

Yes, tagging happens in kernel space. Please check that the interfaces are actually tagged (e.g. with ip link) and that the traffic leaving the host includes the 802.1Q header with the corresponding VLAN ID.

I can see the VLAN ID on the virtual interface created by ONE:
21: trunk.10@trunk: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master onebr.10 state UP
link/ether 0c:c4:7a:33:fb:30 brd ff:ff:ff:ff:ff:ff promiscuity 1
vlan protocol 802.1Q id 10 <REORDER_HDR>
inet6 fe80::ec4:7aff:fe33:fb30/64 scope link
valid_lft forever preferred_lft forever

And when leaving the host, the packet is tagged:
802.1Q Virtual LAN, PRI: 0, CFI: 0, ID: 10
000. .... .... .... = Priority: Best Effort (default) (0)
...0 .... .... .... = CFI: Canonical (0)
.... 0000 0000 1010 = ID: 10
Type: IP (0x0800)
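As a side note, that tag decodes mechanically: the 802.1Q header is a 2-byte TPID (0x8100) followed by a 2-byte TCI that packs the priority (3 bits), CFI (1 bit) and VLAN ID (12 bits). A minimal sketch of the decoding (the function name is mine), matching the values in the capture above:

```python
import struct

def parse_dot1q(tag: bytes):
    """Decode a 4-byte 802.1Q tag (TPID + TCI) into (priority, cfi, vlan_id)."""
    tpid, tci = struct.unpack("!HH", tag)
    assert tpid == 0x8100, "not an 802.1Q tag"
    priority = tci >> 13       # top 3 bits
    cfi = (tci >> 12) & 0x1    # 1 bit
    vlan_id = tci & 0x0FFF     # low 12 bits
    return priority, cfi, vlan_id

# The frame above: priority 0, CFI 0, VLAN ID 10 -> TCI = 0x000A
print(parse_dot1q(bytes.fromhex("8100000a")))  # (0, 0, 10)
```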

It seems the OpenNebula part is OK: the bridges are created, and the interfaces are tagged and attached to the proper links. I'd review the host part: is the trunk interface attached to any other bridge, or to any of the bonded interfaces? Are the VMs also doing tagging? To debug your host configuration, it is probably better to take a step back and find out why you are seeing tagged traffic on other tagged interfaces, i.e. why you are seeing traffic from VLAN 255 on trunk.10 (you can do this without VMs).
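Checking for that leak without any VMs involved could look like this (a sketch; needs root, with the interface names and addresses taken from this thread):

```shell
# Does management traffic (VLAN 255, 192.168.255.0/24) show up on the
# VLAN-10 interface? It should not.
tcpdump -i trunk.10 -e -n net 192.168.255.0/24

# On the untagged trunk itself, VLAN-255 frames are expected, but each
# should carry "vlan 255" in its 802.1Q header (-e prints it).
tcpdump -i trunk -e -n vlan 255
```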

Ruben, Jaime, thank you very much for your help.

I will try to debug it in the next few days; your recent comments have helped me a lot.