Virtual machine network does not work

Hi forum,

I have created a VM but the network does not work. I cannot get a successful ping.

Node network config:

auto lo
iface lo inet loopback
auto eth2
iface eth2 inet manual
bond-master bond0
auto eth3
iface eth3 inet manual
bond-master bond0
auto bond0
iface bond0 inet manual
bond-slaves none
bond-miimon 100
bond-mode 802.3ad
bond-lacp-rate 1
auto bond0.11
iface bond0.11 inet static
address 192.168.11.74
netmask 255.255.255.0
network 192.168.11.0
broadcast 192.168.11.255
gateway 192.168.11.1
vlan-raw-device bond0
# dns-* options are implemented by the resolvconf package, if installed
dns-nameservers 8.8.8.8
dns-search vozelia.com
auto bond0.528
iface bond0.528 inet manual
vlan-raw-device bond0
auto br528
iface br528 inet manual
bridge_ports bond0.528
bridge_stp off
bridge_fd 0

$:/var/log# brctl show
bridge name     bridge id           STP enabled     interfaces
br528           8000.002481a7f467   no              bond0.528
                                                    vnet0 (virtual machine interface)

$:/var/log# virsh list
 Id    Name      State
----------------------
 4     one-27    running

Virtual network template:

VIRTUAL NETWORK TEMPLATE
BRIDGE="br528"
DESCRIPTION="X.X.X.X/27"
DNS="8.8.8.8"
FILTER_MAC_SPOOFING="YES"
GATEWAY="X.X.X.X"
NETWORK_ADDRESS="X.X.X.X"
NETWORK_MASK="255.255.255.224"
PHYDEV=""
SECURITY_GROUPS="0"
VLAN="NO"
VLAN_ID=""

If I set up an alias interface on the node, I get a successful ping. On the other hand, I can see packets on vnet0 while I am pinging (see the RX packets):

vnet0 Link encap:Ethernet HWaddr fe:00:25:8b:79:83
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:139 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:500
RX bytes:6378 (6.3 KB) TX bytes:0 (0.0 B)

I think something is wrong between the VM and the node. The VM is a template I got from the marketplace:

http://marketplace.opennebula.systems/appliance/521655558fb81d2be7000003

Regards.

it’s kinda hard to understand your network config from just an interfaces file… some background about it comes in handy when other people want to understand what’s going on :slight_smile:

Anyway, if you can’t ping outside the IP block, maybe the gateway can’t be reached.
You specified a /27 with subnet mask 255.255.255.224, but you masked out GATEWAY and NETWORK_ADDRESS; maybe those are outside your IP block?
Can you reach the gateway from the VM ?
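A few commands run inside the VM usually narrow down where the path breaks (a sketch only; the interface name eth0 and the gateway address are assumptions, adjust them to your VM):

```
# Run inside the VM:
ip addr show eth0        # is the expected /27 address actually configured?
ip route                 # is the default route via the gateway?
ping -c 3 10.0.10.129    # can the gateway be reached at all?
ip neigh                 # FAILED/INCOMPLETE entries mean no L2 path to the gateway
```

If the neighbor (ARP) entry for the gateway never resolves, the problem is at layer 2 (bridge, VLAN, or switch) rather than in the IP configuration.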

Hi Roland.

I cannot reach the gateway IP.

As I said, the network configuration is OK if I set up an IP on the node.

What additional configuration files do you need to understand my topology?

what do you want to achieve by bonding eth2+eth3 and then dividing the bond into the subinterfaces bond0.528 and bond0.11? Your life would be a lot easier if you used eth2 for the 192.168.11.x IPs and eth3 for the bridge to the VM network. If that works OK, you can do the bonding afterwards. Then you know your OpenNebula virtual network is usable.
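If you want to try that split first, a minimal sketch of the /etc/network/interfaces stanzas could look like this (assuming eth3 is the NIC carrying the VM network, untagged in this test setup; keep a separate stanza for eth2 with the 192.168.11.x address):

```
auto eth3
iface eth3 inet manual

auto br528
iface br528 inet manual
    bridge_ports eth3
    bridge_stp off
    bridge_fd 0
```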

For the virtual network, you specify a /27, which means a netmask of 255.255.255.224 and gives you 32 addresses (30 usable hosts).
If your defined gateway is outside that IP block, you can’t reach it. Could you specify what you used in the X.X.X.X for:

DESCRIPTION="X.X.X.X/27"
GATEWAY="X.X.X.X"
NETWORK_ADDRESS="X.X.X.X"

If I use bonding (802.3ad mode) I get more redundancy, load balancing, and bandwidth… Possibly in the future I will need more networks with new VLANs.

DESCRIPTION="10.0.10.128/27"
GATEWAY="10.0.10.129"
NETWORK_ADDRESS="10.0.10.128"

Obviously, I am trying to reach an IP of my network.
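For what it’s worth, those numbers are internally consistent; a quick shell arithmetic check (octet values taken from the template above) shows the gateway is the first usable host of 10.0.10.128/27:

```shell
# 10.0.10.128/27: compute the usable range from the prefix length
prefix=27
network=128                          # last octet of NETWORK_ADDRESS
size=$(( 1 << (32 - prefix) ))       # 32 addresses in a /27
first_host=$(( network + 1 ))        # 10.0.10.129 -> matches GATEWAY
broadcast=$(( network + size - 1 ))  # 10.0.10.159
last_host=$(( broadcast - 1 ))       # 10.0.10.158
echo "hosts 10.0.10.$first_host-10.0.10.$last_host, broadcast 10.0.10.$broadcast"
```

Since the gateway sits inside the block, the addressing is fine and the suspicion moves to layer 2 (bridge, VLAN, or switch).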

I can see the packet counters growing on the bonds and bridges, but nothing more. I tried both the “default” and VXLAN drivers.

This might not be it, but worth a check:
shouldn’t it say “bond-slaves eth2 eth3” instead of “bond-slaves none”?

Bonding is working fine. Using “bond-slaves none” together with “vlan-raw-device bond0” is fine, because each slave interface declares “bond-master bond0” in its own stanza. Check:

$ cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: fast
Min links: 0
Aggregator selection policy (ad_select): stable
Active Aggregator Info:
Aggregator ID: 1
Number of ports: 1
Actor Key: 17
Partner Key: 5
Partner Mac Address: 00:18:b1:e4:d0:00

Slave Interface: eth3
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:24:81:a7:f4:67
Aggregator ID: 1
Slave queue ID: 0

Slave Interface: eth2
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 1
Permanent HW addr: 00:24:81:a7:f4:66
Aggregator ID: 2
Slave queue ID: 0

I think the problem is in the driver or virtual network configuration…

The problem is solved: a switch did not have the VLAN added.
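In case it helps someone else, one way to confirm this kind of missing-VLAN problem is to watch for tagged frames arriving on the node (a sketch; the interface and VLAN ID are taken from this thread, adjust to your setup):

```
# On the node: do frames tagged with VLAN 528 arrive on the physical bond?
tcpdump -e -n -i bond0 vlan 528
# And do they make it onto the bridge the VM is attached to?
tcpdump -e -n -i br528
```

If the first capture stays silent while the VM is pinging, the switch is not delivering the VLAN to this port.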

On the other hand, when I create a new VM and it gets a free IP, the VM can change that IP and manually set up another one. Is this right?