Using Virtual Networks imported from VMware

I have two VMs:

  • A VM deployed in OpenNebula
  • A VM deployed in VMware and imported into OpenNebula (so not a wild VM, an imported VM).

I've added some VMware vCenter networks (for example, the Distributed Port Group 'DPG_SHARED') to OpenNebula through the command "onevcenter networks".

Both VMs work fine without NICs attached.

The problem is that when I add the NIC 'DPG_SHARED' to the VM deployed in OpenNebula, I get the following error:

Mon Oct 31 17:56:27 2016 [Z0][ReM][D]: Req:4896 UID:0 VirtualMachineInfo invoked , 4
Mon Oct 31 17:56:27 2016 [Z0][ReM][D]: Req:4896 UID:0 VirtualMachineInfo result SUCCESS, "40…"
Mon Oct 31 17:56:28 2016 [Z0][VMM][D]: Message received: LOG I 44 Successfully execute transfer manager driver operation: tm_context.
Mon Oct 31 17:56:28 2016 [Z0][VMM][D]: Message received: LOG I 44 ExitCode: 0
Mon Oct 31 17:56:28 2016 [Z0][VMM][D]: Message received: LOG I 44 Successfully execute network driver operation: pre.
Mon Oct 31 17:56:29 2016 [Z0][VMM][D]: Message received: LOG I 44 Command execution fail: cat << EOT | /var/tmp/one/vmm/kvm/deploy '/var/lib/one//datastores/105/44/deployment.16' '192.168.1.116' 44 192.168.1.116
Mon Oct 31 17:56:29 2016 [Z0][VMM][D]: Message received: LOG I 44 error: Failed to create domain from /var/lib/one//datastores/105/44/deployment.16
Mon Oct 31 17:56:29 2016 [Z0][VMM][D]: Message received: LOG I 44 error: Cannot get interface MTU on 'DPG_SHARED': No such device
Mon Oct 31 17:56:29 2016 [Z0][VMM][D]: Message received: LOG E 44 Could not create domain from /var/lib/one//datastores/105/44/deployment.16
Mon Oct 31 17:56:29 2016 [Z0][VMM][D]: Message received: LOG I 44 ExitCode: 255
Mon Oct 31 17:56:29 2016 [Z0][VMM][D]: Message received: LOG I 44 Failed to execute virtualization driver operation: deploy.
Mon Oct 31 17:56:29 2016 [Z0][VMM][D]: Message received: DEPLOY FAILURE 44 Could not create domain from /var/lib/one//datastores/105/44/deployment.16

But I can attach this same NIC (DPG_SHARED) to the other VM (the VMware VM imported into OpenNebula) and it works just fine.

So, my question is: can networks imported from VMware not also be used with OpenNebula VMs? Do I need to create different Virtual Networks for the KVM and VMware environments?

Thanks!

Hi Mistral,
when you say "a VM deployed in OpenNebula", I understand that you mean a VM deployed on a KVM hypervisor.

vCenter resources like virtual networks, virtual machine templates, datastores… can be imported into OpenNebula, as you've mentioned, using the onevcenter command or from Sunstone using the vCenter view (click on the user name at the UI's top right and select Views -> admin_vcenter) and the Import buttons. When you import vCenter resources, a reference to each resource is added to the corresponding OpenNebula resource so OpenNebula knows how to use it.

When you deploy the VM on a KVM node and try to attach a vCenter NIC, I'm afraid it fails: vCenter networks cannot be used by VMs deployed on KVM nodes, just as KVM networks using Linux bridges could not be used from VMware. It's not an OpenNebula issue, it's a compatibility problem.
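To illustrate where the error in your log comes from: libvirt takes the NIC's bridge name from the deployment file and looks for a matching network device on the KVM node. A quick check on the node (just a sketch, assuming a Linux host with iproute2) reproduces it, because 'DPG_SHARED' only exists inside vCenter:

```shell
# libvirt resolves the NIC to a host bridge named after the virtual network.
# DPG_SHARED is a vCenter Distributed Port Group, so no such Linux device
# exists on the KVM node and the lookup fails, exactly as in the deploy log.
ip link show DPG_SHARED 2>/dev/null || echo "DPG_SHARED: no such device"
```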

So, they are not compatible due to hypervisor differences, and you should therefore create different Virtual Networks for the KVM and VMware environments.
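For example, a minimal bridge-based virtual network that KVM VMs can use might look like the template below (a sketch only; the network name, the br0 bridge and the address range are assumptions you'd adapt to your setup). You would save it to a file and create it with `onevnet create <file>`:

```
NAME   = "shared_kvm"
VN_MAD = "dummy"        # use a pre-existing Linux bridge, no VLAN handling
BRIDGE = "br0"          # bridge that must already exist on the KVM nodes
AR = [
  TYPE = "IP4",
  IP   = "192.168.1.200",
  SIZE = "50"
]
```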

Cheers!

Yes, sorry, I meant the KVM hypervisor.

Okay, I understand. Maybe it was my fault, or maybe this is not clear enough in the documentation. Anyway, thanks for the clarification.

So, if my infrastructure is divided between VMware and KVM, and I want to have hosts in every VLAN for both hypervisors, I will need double the number of networks (double the number of switches, in my case): 5 networks for VMware VMs and 5 networks for KVM VMs if my infrastructure has 5 VLANs. Is that right?

Thanks.

Also, does this mean that every time I make a change to my VMware topology from vCenter or ESXi, I need to re-import these networks into OpenNebula?

Is there no way to just "update" the old network config with the new one?

Thanks.

Don't be sorry, I just wanted to be sure that you were using KVM and that I understood the problem.

You're right: if you want to use both hypervisors, you need a virtual network usable with KVM (bridge, Open vSwitch, 802.1Q…) and a virtual network usable by vCenter. If you have 5 VLANs, you would create 5 virtual networks for each hypervisor. The good part is that with KVM you can use 802.1Q, Open vSwitch and so on, so thanks to trunking you don't need 5 different physical NICs if you don't want them.
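As a sketch of the trunking idea: with the 802.1Q driver, each OpenNebula virtual network carries a VLAN_ID, and all of them can share a single physical NIC on the KVM nodes. Something along these lines, one template per VLAN (the network name, the eth0 interface, the VLAN ID and the addresses are all assumptions):

```
# One such template per VLAN (here VLAN 10). OpenNebula creates the tagged
# interface and bridge on the KVM node, so one trunked NIC can carry all 5.
NAME    = "vlan10_kvm"
VN_MAD  = "802.1Q"
PHYDEV  = "eth0"        # physical NIC attached to the trunk port
VLAN_ID = "10"
AR = [
  TYPE = "IP4",
  IP   = "10.0.10.100",
  SIZE = "100"
]
```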

If you need more info don’t hesitate, we’re here to help.

See ya!

Hi!
if you add a new VMware network in vCenter, you have to import it; it is not picked up automatically. However, if you update a previously imported virtual network such as a port group, OpenNebula keeps a reference to that object through its UUID, so I think you shouldn't have to re-import it: OpenNebula will use that reference.

I think networking for vCenter will be redesigned in OpenNebula in future releases.

Cheers!

And these are the same networks, actually. If there are 5 VLANs where your VMs will reside, you need 10 Virtual Network instances in ONE (5 for KVM and 5 for VMware) because of how the hypervisors work, but on your physical network there are still only 5 VLANs, not 10.
In other words, if you have VLAN 10, for instance, your VMs on the KVM-based VLAN 10 and on the VMware-based VLAN 10 will talk to each other just fine.
You just have to have separate vNETs because ONE controls both hypervisors, and these guys are completely different beasts, hence the need for a "double amount" of logical constructs (vNETs).