Addon-kvm-sr-iov and 5.0

Quick question: should I expect the sr-iov addon to work in 5.0, or will it need to be made compatible with 5.0?

Hi Jeff,

The plan is to integrate this functionality upstream in the OpenNebula 5.0
branch; this will happen in the short term.

Cheers

Ruben

Hi @ruben!

Did this integration into 5.x already happen?
If I google for OpenNebula SR-IOV, I only find info about the addon, not news or changelog posts.

I just randomly noticed that I have upgraded far enough to be able to use it, and it would be quite cool from a performance point of view. (Even more so if someone buys me a new brain that can understand how to do vNIC-based switching with Intel DPDK.)

More seriously though: what's the best practice right now? Add the addon, or just set it up in ONE because it's integrated?

Hi,

It looks like the branch was started a long time ago, but there is still no SR-IOV support, and it is not in the current version, OpenNebula 5.6.x.
I also did not see it mentioned in the release notes for the new version. What are the actual plans for supporting this technology?
While waiting for official SR-IOV support, I tried playing with this driver: https://github.com/OpenNebula/addon-kvm-sr-iov
with OpenNebula v5.4.13. The driver itself works, but I had to tinker with it a bit to get it started (see the host-side sketch below). The network adapter (VF) showed up in the virtual machine (VM).
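For anyone trying the same: one common way to create the VFs on the KVM host is via sysfs. The interface name and VF count below are examples for illustration, and the addon's README may use a different mechanism (e.g. a driver module parameter):

# Create 4 virtual functions on the physical 10G port (name and count are examples)
echo 4 > /sys/class/net/enp1s0f0/device/sriov_numvfs

# Verify that the VFs appeared on the host
lspci | grep -i "virtual function"

Here are some results: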

VM configuration: 5 CPU, 5 GB RAM, CentOS 7 64-bit

[root@localhost ~]# lshw -c network -businfo
Bus info          Device      Class      Description
====================================================
pci@0000:00:03.0              network    Virtio network device
virtio@0          eth0        network    Ethernet interface
pci@0000:00:05.0  ens5        network    NetXtreme II BCM57810 10 Gigabit Ethernet Virtual Function

[root@localhost ~]# ethtool ens5
Settings for ens5:
    Supported ports: [ ]
    Supported link modes:   Not reported
    Supported pause frame use: No
    Supports auto-negotiation: No
    Advertised link modes:  Not reported
    Advertised pause frame use: No
    Advertised auto-negotiation: No
    Speed: 10000Mb/s
    Duplex: Full
    Port: Other
    PHYAD: 0
    Transceiver: internal
    Auto-negotiation: off
    Current message level: 0x00000000 (0)

    Link detected: yes 

To test the network interface ens5 (VF), I used the iperf3 utility.
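For reference, the test was along these lines (the address and duration are placeholders, not my exact values):

# On the receiving VM: start the iperf3 server
iperf3 -s

# On the sending side: run a 30-second test against the server VM
iperf3 -c 10.0.0.1 -t 30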

Results:
https://snapshot.raintank.io/dashboard/snapshot/HhBmjRy048waWfpoUfUBAA6kKvKtVcRt?orgId=2

The 10 Gbit/s line rate is reached, but the CPU load inside the VM is 100%, i.e. one core is fully occupied. I expected that using SR-IOV would cut the CPU load by up to 15%. Perhaps someone on the forum has experience with this technology; please share your results on CPU load. Could it be high because of incorrect VF settings?
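One thing I still need to rule out: a single iperf3 stream is itself CPU-bound on one core, so part of that 100% may come from the benchmark tool rather than the VF interrupt path. A sketch of what I plan to try (option values are examples):

# Several parallel streams instead of one
iperf3 -c 10.0.0.1 -P 4 -t 30

# Per-core utilisation inside the VM while the test runs (sysstat package)
mpstat -P ALL 1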

Thank you.

It has been supported for a couple of releases now:

http://docs.opennebula.org/5.6/deployment/open_cloud_host_setup/pci_passthrough.html#usage-as-network-interfaces
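In short, the VF is attached through the PCI attribute of the VM template with TYPE = NIC. A minimal sketch, where the vendor/device/class IDs are placeholders and the real values for your VF should be taken from onehost show:

# Placeholder IDs for a BCM57810 VF; use the values reported for your host
PCI = [
  TYPE   = "NIC",
  VENDOR = "14e4",
  DEVICE = "16af",
  CLASS  = "0200" ]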

Hi @ruben,

Looks like I wasn't searching in the right place. Thanks for the link; it will be interesting to compare the results.

Great @telecast, keep us updated with your testing!

PCI passthrough (OpenNebula 5.6.2):
The network was tested with the iperf3 utility.
VM parameters: 5 CPU, 5 GB RAM, CentOS 7 64-bit

[root@localhost ~]# lshw -c network -businfo
Bus info          Device      Class      Description
====================================================
pci@0000:01:01.0  enp1s1      network    NetXtreme II BCM57810 10 Gigabit Ethernet Virtual Function

[root@localhost ~]# ethtool enp1s1
Settings for enp1s1:
        Supported ports: [ ]
        Supported link modes:   Not reported
        Supported pause frame use: No
        Supports auto-negotiation: No
        Advertised link modes:  Not reported
        Advertised pause frame use: No
        Advertised auto-negotiation: No
        Speed: 10000Mb/s
        Duplex: Full
        Port: Other
        PHYAD: 0
        Transceiver: internal
        Auto-negotiation: off
        Current message level: 0x00000000 (0)

        Link detected: yes

The following results were obtained:

  1. The 10 Gbit/s line rate was reached.
  2. The CPU load is 100%, i.e. one core is fully loaded.
     https://snapshot.raintank.io/dashboard/snapshot/PyGjMFBTJut6lk150AGVajVi4lcbaf8J?orgId=2

I expected SR-IOV to reduce the CPU load.
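Next I want to check whether the VF exposes more than one queue, since a single-queue VF would pin all interrupt processing to one core no matter what. Generic commands for that, not yet verified on this card:

# How many RX/TX channels (queues) the VF exposes
ethtool -l enp1s1

# Per-queue counters, to see whether traffic is spread across queues
ethtool -S enp1s1 | grep -i queue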