VM NIC model type defaults to virtio but switches to rtl8139

Dear all,

Version: OpenNebula 6.6.0 CE.

For a few days I have been seeing surprising behavior with my networks.
I have two networks on my cluster: one public and one private (ovs-bridge VXLAN).
Since the beginning of the week, the NIC model chosen for the interfaces in my KVM VMs has been wrong.
The default is virtio in all the config files, but when I create a VM, the interface model chosen by ONE is rtl8139.

The NIC section of my vmm_exec_kvm.conf:

NIC = [
    MODEL = "virtio"
    # FILTER = "clean-traffic"
]

XML of the VM:

 <interface type='bridge'>
      <mac address='02:00:ac:10:10:0a'/>
      <source bridge='ovs-br2'/>
      <virtualport type='openvswitch'>
        <parameters interfaceid='0dbb55b4-57e0-4dd2-8250-828547000a9e'/>
      </virtualport>
      <target dev='one-4-0'/>
      <model type='rtl8139'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <interface type='bridge'>
      <mac address='02:00:c0:a8:70:cf'/>
      <source bridge='br2'/>
      <target dev='one-4-1'/>
      <model type='rtl8139'/>
      <alias name='net1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </interface>

Last week, virtio was used instead of rtl8139. The only thing I did differently was to use Terraform to create my last VM.

Do you have any idea why rtl8139 is used?
Is it possible to check in the database whether something overrides the VMM config?

Best regards


Same behavior for me on same version of ONE.

You can observe the same problem when attaching a new NIC to an existing VM; the configuration in /var/lib/one/remotes/etc/vmm/kvm/kvmrc is ignored too.

IMHO this is a ONE bug. I'm not sure whether it will be fixed in 6.8, but I suppose we can't expect a fix for 6.6 CE :frowning:

Hi @kepi!

Could you share your VM template indicating the NIC you have connected? It would also be helpful if you could share your VNET configuration with the onevnet show -x <vnet_id> command.

Best,
Victor.

Hi @vpalma

I don’t think it will help, as there is really nothing in them related to this behavior. I created exactly the same VM before and after the upgrade, and the interface model changed. It also happens when attaching a NIC to an existing VM. On that same VM I attached multiple NICs from the same VNETs on 6.4, and all were correctly virtio.

VIRTUAL NETWORK 75 INFORMATION
ID                       : 75
NAME                     : something-mgmt
USER                     : oneadmin
GROUP                    : something
LOCK                     : None
CLUSTERS                 : 0,100
BRIDGE                   : ovs-pub
STATE                    : READY
VN_MAD                   : ovswitch
VLAN ID                  : 3002
AUTOMATIC VLAN ID        : NO
AUTOMATIC OUTER VLAN ID  : NO
USED LEASES              : 1

PERMISSIONS
OWNER                    : um-
GROUP                    : um-
OTHER                    : ---

VIRTUAL NETWORK TEMPLATE
BRIDGE="ovs-pub"
BRIDGE_TYPE="openvswitch"
GUEST_MTU="1500"
NETWORK_MASK="16"
OUTER_VLAN_ID=""
PHYDEV=""
SECURITY_GROUPS="0"
VLAN_ID="3002"
VN_MAD="ovswitch"

ADDRESS RANGE POOL
AR 0
SIZE           : 100
LEASES         : 1

RANGE                                   FIRST                               LAST
MAC                         02:00:0a:81:02:64                  02:00:0a:81:02:c7
IP                               10.129.2.100                       10.129.2.199


LEASES
AR  OWNER   MAC                IP            PORT_FORWARD  IP6
0   V:466   02:00:0a:81:02:67  10.129.2.103  -             -

VIRTUAL ROUTERS

VIRTUAL MACHINES
UPDATED        : 466
OUTDATED       :
ERROR          :

And an example of a VM template:

TEMPLATE 330 INFORMATION
ID             : 330
NAME           : debian-11
USER           : oneadmin
GROUP          : something
LOCK           : None
REGISTER TIME  : 03/12 18:55:14

PERMISSIONS
OWNER          : um-
GROUP          : u--
OTHER          : ---

TEMPLATE CONTENTS
CONTEXT=[
  NETWORK="YES",
  SET_HOSTNAME="$name.mgmt.some.address",
  SSH_PUBLIC_KEY="$USER[SSH_PUBLIC_KEY]" ]
CPU="0.5"
DISK=[
  IMAGE_ID="767" ]
GRAPHICS=[
  KEYMAP="en-us",
  LISTEN="0.0.0.0",
  TYPE="VNC" ]
MEMORY="2048"
OS=[
  ARCH="x86_64",
  BOOT="disk0" ]
SCHED_DS_REQUIREMENTS="ID=\"112\""
VCPU="1"

I’m attaching NIC after instantiating the VM.

This is more dangerous than I originally thought. The same problem happens with other settings from vmm_exec_kvm.conf, at least with FEATURES.

I’m not yet sure about the exact conditions, but at this moment it seems that some VMs are started without the default FEATURES, so in our environment they are missing ACPI, GUEST_AGENT, etc., and use rtl8139 as the NIC model.

The only working solution I’ve found so far is to use onedb update-body vm --id .... to add the NIC models, and onevm updateconf to manually set MACHINE, ACPI, etc…
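To make the workaround concrete (the VM ID is just an example, and the exact onedb invocations are from memory, so please double-check them against your version; I’d also stop the opennebula service before touching the DB):

```
# inspect the body stored in the DB for the VM (ID 466 as an example)
onedb show-body vm --id 466

# edit the stored body in $EDITOR; add MODEL="virtio" to each NIC element
onedb update-body vm --id 466

# with the VM powered off, fix OS/FEATURES (MACHINE, ACPI, ...) interactively
onevm updateconf 466
```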

If more information is needed, let me know. I would like to help resolve this, but I don’t know the ONE code well enough to be of much use.

I’m not sure whether this could be some misconfiguration on our part, as I can’t believe this wouldn’t hit many more people… But I have checked everything I could, many times.

And yet, the 1001st attempt at checking the configs, going through all the diffs once again, yielded the result… At least I’m happy that I’m the only one to blame :confused:

There was a syntax error in vmm_exec/vmm_exec_kvm.conf, which probably caused the whole file to be ignored. I’m 99 % sure there was still a problem with the rtl8139 NIC on some VMs, but on the ones I tested now, everything seems fine.
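For illustration, here is the kind of mistake I mean (a made-up fragment, not my actual diff): a single missing comma between two attributes is enough for the parser to reject the block, and apparently the rest of the file with it.

```
# broken: no comma after MODEL, parsing fails
NIC = [
    MODEL = "virtio"
    FILTER = "clean-traffic"
]

# fixed
NIC = [
    MODEL  = "virtio",
    FILTER = "clean-traffic"
]
```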

Is there a tool to check all the config files for syntax errors? I didn’t find anything mentioned in the documentation. The config files look almost like Ruby, but are not 100 % compatible IMHO.
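Lacking such a tool, I sketched a crude shell check of my own (purely a heuristic, not an OpenNebula utility): it flags attribute lines inside a `NAME = [ ... ]` block that do not end with a comma. Note that the last attribute of each block legitimately has no comma, so that one is always an expected false positive.

```shell
#!/bin/sh
# Heuristic check for OpenNebula-style template blocks:
# flag attribute lines inside "NAME = [ ... ]" that do not end with a comma.
# The last attribute of each block has no comma by design, so it is reported
# as a false positive -- this is a crude aid, not a real parser.
check_commas() {
  awk '
    /\[[[:space:]]*$/ { inblock = 1; next }   # a line ending in "[" opens a block
    /^[[:space:]]*\]/ { inblock = 0; next }   # a "]" line closes it
    inblock && /=/ && !/,[[:space:]]*$/ {
      printf "possible missing comma: %s\n", $0
    }
  ' "$1"
}

# Demo: the MODEL line below is missing its comma (the bug in my case).
tmp=$(mktemp)
printf 'NIC = [\n  MODEL = "virtio"\n  FILTER = "clean-traffic"\n]\n' > "$tmp"
check_commas "$tmp"
# prints a "possible missing comma" line for MODEL (real error)
# and for FILTER (last-attribute false positive)
rm -f "$tmp"
```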

@o.mbarek try checking your file for any possible error, like the forgotten comma in my case. If you find anything, just fix it, restart opennebula, and try again.

You should check /var/lib/one/remotes/etc/vmm/kvm/kvmrc too; the default NIC model for hot attaches is defined there.
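If I remember correctly, it is a variable along these lines (the name is from memory, so please verify it against your own kvmrc):

```
# in /var/lib/one/remotes/etc/vmm/kvm/kvmrc
DEFAULT_ATTACH_NIC_MODEL=virtio
```

Keep in mind that changes under /var/lib/one/remotes only take effect on the hypervisors after syncing the remotes (onehost sync --force).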

I hope this helps.

Best Regards,
Anton todorov

Thanks @atodorov_storpool, but both my problem and the OP’s also occurred when creating the VM, not only when attaching a NIC while it is running.

I’ll post an update if I find another VM that causes trouble after yesterday’s config fix. Before the bug I introduced, only a single VM out of about 100 in the cluster had this problem.