Resource tracking for Wild VMs that are NOT imported

Hello,

We have a use case where another service outside of OpenNebula will add KVM VMs to shared hosts that are also under OpenNebula's control.

I like the Wild VM concept, and I can see that the OpenNebula-monitored host detects the VM as Wild. As best I can tell, though, while the OpenNebula host monitor finds the VM and its resource allocations, it does NOT use those values when reporting the KVM host's total allocated memory and CPU?

Is that correct? I even checked the debug scheduler logs while allocating a new OpenNebula-created VM. The host/resource statistics the scheduler uses to select a host don't appear to include the Wild VM when evaluating host resources.

Additional questions below with details.

clip from ‘onehost show 20’ (BEFORE IMPORT)

ID                    : 20                  
NAME                  : s002-m5-40g-kvm27
CLUSTER               : onprem-cluster-laas-1a
STATE                 : MONITORED           
IM_MAD                : kvm                 
VM_MAD                : kvm                 
LAST MONITORING TIME  : 12/05 19:31:00      

HOST SHARES                                                                     
RUNNING VMS           : 4                   
MEMORY                                                                          
  TOTAL               : 503.5G              
  TOTAL +/- RESERVED  : 503.5G              
  USED (REAL)         : 7G                  
  USED (ALLOCATED)    : 12.8G               
CPU                                                                             
  TOTAL               : 9600                
  TOTAL +/- RESERVED  : 9600                
  USED (REAL)         : 0                   
  USED (ALLOCATED)    : 1550                

LOCAL SYSTEM DATASTORE #133 CAPACITY                                            
TOTAL:                : 97.9G               
USED:                 : 18.9G               
FREE:                 : 74G        

WILD VIRTUAL MACHINES

NAME                                                      IMPORT_ID  CPU     MEMORY
Deployer_1.29_2024.03.1.i bf3ec87a-1a6e-4cdc-a36d-e86a070fbeba   30      24576

VIRTUAL MACHINES

  ID USER     GROUP    NAME                                                  STAT  CPU     MEM HOST                                      TIME
 257 oneadmin oneadmin kentest2                                              runn  0.5    768M s002-m5-40g-kvm27.mitg-bxb300.cisco   0d 00h50
 254 oneadmin oneadmin ToolsHosts_2_(service_90)                             runn    5      4G s002-m5-40g-kvm27.mitg-bxb300.cisco   1d 06h45
 252 oneadmin oneadmin ToolsHosts_0_(service_90)                             runn    5      4G s002-m5-40g-kvm27.mitg-bxb300.cisco   1d 06h45
 250 oneadmin oneadmin ToolsHosts_1_(service_89)                             runn    5      4G s002-m5-40g-kvm27.mitg-bxb300.cisco   1d 06h54

scheduler log (I forced placement of a new VM to see if the scheduler's totals would take the Wild VM into account):

ID          : 20
CLUSTER_ID  : 117
PUBLIC      : 0
MEM_USAGE   : 13369344   <<< the same value (12.8G) as shown in the onehost output above, so the Wild VM is not accounted for.
CPU_USAGE   : 1550
MAX_MEM     : 527984052
MAX_CPU     : 9600
FREE_DISK   : 75729
RUNNING_VMS : 4

Is there a configuration option to have the Wild VM resources accounted for in the host totals reported by OpenNebula?

Once I “import” the wild VM then, yes, it's accounted for. That's an extra step and a nice option. I could live with that, but I noticed this in the v6.10.0 documentation:

This command is deprecated and will be removed in future release. Imported VMs will be removed from OpenNebula management and will appear again as wild VMs on the host.

Once it's deprecated and removed, will the Wild VM resources be accounted for?

After importing the wild VM, the totals are now correct:

clip from ‘onehost show 20’

MEMORY                                                                          
  TOTAL               : 503.5G              
  TOTAL +/- RESERVED  : 503.5G              
  USED (REAL)         : 9.7G                
  USED (ALLOCATED)    : 38.8G               
CPU                                                                             
  TOTAL               : 9600                
  TOTAL +/- RESERVED  : 9600                
  USED (REAL)         : 0                   
  USED (ALLOCATED)    : 4600  

NAME                                                      IMPORT_ID  CPU     MEMORY
Deployer_1.29_2024.03.1.i bf3ec87a-1a6e-4cdc-a36d-e86a070fbeba   30      24576

VIRTUAL MACHINES

  ID USER     GROUP    NAME                                                  STAT  CPU     MEM HOST                                      TIME
 260 oneadmin oneadmin Deployer_1.29_2024.03.1.i12-K8Only               runn   30     24G s002-m5-40g-kvm27.mitg-bxb300.cisco   0d 00h02
 259 oneadmin oneadmin new VM                                                runn  0.5      2G s002-m5-40g-kvm27.mitg-bxb300.cisco   0d 00h08
 257 oneadmin oneadmin kentest2                                              runn  0.5    768M s002-m5-40g-kvm27.mitg-bxb300.cisco   0d 01h03
 254 oneadmin oneadmin ToolsHosts_2_(service_90)                             runn    5      4G s002-m5-40g-kvm27.mitg-bxb300.cisco   1d 06h58
 252 oneadmin oneadmin ToolsHosts_0_(service_90)                             runn    5      4G s002-m5-40g-kvm27.mitg-bxb300.cisco   1d 06h58
 250 oneadmin oneadmin ToolsHosts_1_(service_89)                             runn    5      4G s002-m5-40g-kvm27.mitg-bxb300.cisco   1d 07h07


I had assumed that since the CLI currently has “onehost importvm”, the APIs would as well on v6.10.0.

Checking the documentation, I don't see it listed the way I normally do for matching CLI commands.

https://docs.opennebula.io/6.10/integration_and_development/system_interfaces/api.html

I don’t see the “import” keyword anywhere in the document.
Please advise on the proper API call as well.

Thanks!!!

–Ken

The CLI command onehost importvm doesn't exist in the XML-RPC API.

It's just a Ruby method, which extracts the info about the wild VM from the host, creates a new VM based on the wild VM's template, and calls a 'fake' deploy action.
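
For reference, here is a rough sketch of what that amounts to through the generic XML-RPC API (Python). This is not the actual importvm helper code: one.host.info, one.vm.allocate and one.vm.deploy are documented calls, but the VM_NAME/IMPORT_TEMPLATE element names, endpoint and credentials below are assumptions for illustration.

import xmlrpc.client
import xml.etree.ElementTree as ET

ONE_AUTH  = "oneadmin:password"            # placeholder credentials
ENDPOINT  = "http://localhost:2633/RPC2"   # default oned XML-RPC endpoint
HOST_ID   = 20
WILD_NAME = "Deployer_1.29_2024.03.1.i12-K8Only"

srv = xmlrpc.client.ServerProxy(ENDPOINT)

# 1. Read the host info; wild VMs are listed in the host template together
#    with an import template string (element names assumed here).
_, host_xml, _ = srv.one.host.info(ONE_AUTH, HOST_ID)
root = ET.fromstring(host_xml)

import_template = None
for vm in root.findall("./TEMPLATE/VM"):
    if vm.findtext("VM_NAME") == WILD_NAME:
        import_template = vm.findtext("IMPORT_TEMPLATE")

# 2. Create a VM object from that template, on hold so the scheduler skips it.
_, vm_id, _ = srv.one.vm.allocate(ONE_AUTH, import_template, True)

# 3. "Fake" deploy it onto the host it is already running on
#    (-1 lets OpenNebula pick the system datastore).
srv.one.vm.deploy(ONE_AUTH, vm_id, HOST_ID, False, -1)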

Hi Pavel, thanks for confirming why I couldn't find the importvm API call.
I'm OK with the VMs running in the “wild”, but I would need the scheduler to take those resources into account when OpenNebula allocates new VMs. Is there a configuration option for that?

As best I can tell, those resources aren't accounted for until the VMs are brought in from the Wild.

Thanks for your help!

–Ken

Sorry I missed the first post.

The host's allocated memory and CPU cover only resources allocated by OpenNebula. There is no configuration option to include wild VMs.

There is no plan to include wild VM resources in the host's allocated CPU and memory.

In the 7.0 release the importvm feature will be removed, because the management of imported VMs is limited: we do not track wild VM resources such as disks and NICs.

Hi Pavel,

Thank you for the response. We'll stay away from importing Wild VMs and look to account for the host resources used by “Wild” VMs in another way, outside of OpenNebula.

–Ken

You are not out of options, but it needs some development.

An alternative is to play with the host overcommitment - RESERVED_CPU and RESERVED_MEM.

You could hard-code reserved resources for the other service per host if you don’t need such tight resource management.
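
For example, something like this added to the host template with 'onehost update 20' (illustrative values matching the wild VM above, assuming RESERVED_CPU uses the same units as TOTAL_CPU, 100 per core, and RESERVED_MEM is in KB):

RESERVED_CPU = "3000"
RESERVED_MEM = "25165824"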

Otherwise, you could develop a script that checks the resources of the other VMs allocated on each host and updates RESERVED_CPU and RESERVED_MEM accordingly, so the scheduler will take them into account. The script could be triggered by the other service outside of OpenNebula whenever its VMs change state (started, stopped, etc.), run periodically by a cron/systemd timer, etc.
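
A minimal sketch of such a script (Python; it assumes libvirt-python is available on the KVM node, that OpenNebula-managed domains are the ones named "one-<id>", and placeholder endpoint/credentials; one.host.update with update type 1 merges the attributes into the host template):

import xmlrpc.client
import libvirt

ONE_AUTH = "oneadmin:password"            # placeholder credentials
ENDPOINT = "http://frontend:2633/RPC2"    # oned XML-RPC endpoint (assumed)
HOST_ID  = 20                             # OpenNebula ID of this KVM node

# Sum vCPUs and memory of all running domains that OpenNebula does not manage.
conn = libvirt.open("qemu:///system")
reserved_cpu = 0   # OpenNebula CPU units, 100 per core
reserved_mem = 0   # KB
for dom in conn.listAllDomains(libvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE):
    if dom.name().startswith("one-"):     # skip OpenNebula-managed VMs
        continue
    state, max_mem_kib, mem_kib, vcpus, cpu_time = dom.info()
    reserved_cpu += vcpus * 100
    reserved_mem += max_mem_kib

# Merge the values into the host template so the scheduler subtracts them.
template = 'RESERVED_CPU = "%d"\nRESERVED_MEM = "%d"\n' % (reserved_cpu, reserved_mem)
srv = xmlrpc.client.ServerProxy(ENDPOINT)
srv.one.host.update(ONE_AUTH, HOST_ID, template, 1)   # 1 = merge/append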

Another alternative is to develop your own scheduler that considers your needs.

Hope this helps,

Best Regards,
Anton Todorov