OpenNebula 5.0 and the new RPC API methods

Hi all

I was testing the new RPC calls and it seems that some of them have changed, so we have found some backward incompatibilities between the 4.x and 5.x versions. In our case the most important change is related to host management.

It seems that one.host.allocate has changed since the last version:
http://docs.opennebula.org/5.0/integration/system_interfaces/api.html
http://docs.opennebula.org/4.12/integration/system_interfaces/api.html

Using the 4.x RPC call we get this error:

[ERROR] _rpc failed to make request faultCode -501 faultString Parameter that is supposed to be integer is not method one.host.allocate args [string, hyp101.altaria.os], [string, kvm], [string, kvm], [string, ovswitch], [int, -1]

The reason is that vn_mad is no longer used by the new onehost command, and the VNET template must also be changed. Was this done on purpose? It is not mentioned in the compatibility guide and it breaks compatibility with some RPC clients.
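To make the difference concrete, here is a minimal sketch of the two argument lists using Python's standard xmlrpc.client. The endpoint and credentials are placeholders (assumptions, not taken from this thread); the hostname and drivers match the error message above. In 5.0 the vnm_mad parameter is simply gone, which is why a 4.x-style call fails with "Parameter that is supposed to be integer is not":

```python
import xmlrpc.client

ONE_AUTH = "oneadmin:opennebula"         # placeholder session string (user:password)
ENDPOINT = "http://localhost:2633/RPC2"  # default oned XML-RPC endpoint

def allocate_args_4x(host, im_mad, vmm_mad, vnm_mad, cluster_id=-1):
    # 4.x signature: session, hostname, im_mad, vmm_mad, vnm_mad, cluster_id
    return [ONE_AUTH, host, im_mad, vmm_mad, vnm_mad, cluster_id]

def allocate_args_50(host, im_mad, vmm_mad, cluster_id=-1):
    # 5.0 signature: the vnm_mad parameter was removed; networking
    # is now configured per Virtual Network via VN_MAD
    return [ONE_AUTH, host, im_mad, vmm_mad, cluster_id]

# Creating the proxy does not contact the server; the actual call
# (commented out) needs a running oned:
server = xmlrpc.client.ServerProxy(ENDPOINT)
# ok, result, err_code = server.one.host.allocate(
#     *allocate_args_50("hyp101.altaria.os", "kvm", "kvm"))
```

Sending the 4.x list to a 5.0 daemon puts the string "ovswitch" where 5.0 expects the integer cluster_id, producing exactly the -501 fault quoted above.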

This change also raises another question: how is it now possible to set up a heterogeneous hypervisor cluster with both OVS and Linux bridge hosts? In OpenNebula 4.x we were able to set the vn_mad for each host, so we could mix Linux and OVS bridges on different hypervisors thanks to the OVS_BRIDGE variable. For example, we had this for a specific VNET:

NAME = "vscnetwork"
BRIDGE="br101"
OVS_BRIDGE="ovsbr"
VLAN="YES"
VLAN_ID="295"

and we had hosts with the dummy driver and also hypervisors with the ovswitch network driver, so OVS_BRIDGE was used for the OVS hypervisors. How can we keep this heterogeneous configuration now? Is it still supported?

Thanks in advance!
Alvaro

Hi Alvaro

About the XML-RPC API, you are totally right, we missed the compatibility
notes about that. Sorry for the inconvenience.

http://dev.opennebula.org/issues/4588

and this

http://dev.opennebula.org/issues/4600

About the OVS_BRIDGE parameter: the VN_MAD drivers are now attached to the
network. This way a host can implement multiple network types (as opposed
to the OVS_BRIDGE parameter, which implemented the same network with
different drivers, but only to some extent).

So you can have Open vSwitch networks and Linux bridge networks, and each
host can implement any of them. If a host only has support for a given
network type, you can create a cluster to be sure that only VMs of that
type are allocated to that host (in 5.0 you can add a Virtual Network to
multiple clusters).
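Following that explanation, the 4.x template from above would become something like this in 5.0. This is only a sketch based on the values in this thread: the driver moves into the network definition as VN_MAD, so the per-host OVS_BRIDGE workaround is no longer needed and the bridge name goes directly in BRIDGE:

```
NAME    = "vscnetwork"
VN_MAD  = "ovswitch"   # driver is now defined per network, not per host
BRIDGE  = "ovsbr"
VLAN_ID = "295"
```

A second network for the Linux bridge hosts would use a different VN_MAD (e.g. dummy), and the standard onecluster addhost / onecluster addvnet commands can then group hosts with the networks they actually support.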

Cheers

Ruben

Hi @ruben

Thanks a lot for your feedback! We will update our VNET templates to include the new parameter.

About the RPC changes: does that mean that the OpenNebula clients must be updated to support ONE 5.0?

Cheers
Alvaro

About the RPC changes that means that the opennebula clients must be
updated to support ONE 5.0 right?

Yep, right

Ruben S. Montero, PhD
Project co-Lead and Chief Architect
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | rsmontero@opennebula.org | @OpenNebula

Hi,

We have added the XML-RPC changes to the compatibility guide:

http://docs.opennebula.org/5.0/intro_release_notes/release_notes/compatibility.html

Cheers.

Ok, thanks a lot