V.5.10 is coming quickly

:fire:OpenNebula 5.10 is almost ready!:fire:

Get a Sneak Peek of a few of the new features. #CPUpinning #NUMA #hugepages #VirtualTopologies

https://opennebula.org/cpu-pinning-numa-virtual-topologies-for-vms/
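
As a quick taste, a VM template combining the new attributes could look roughly like this (a simplified sketch only; the blog post above has the exact syntax and details):

# 1 socket x 2 cores x 2 threads = 4 vCPUs, pinned per host thread,
# backed by 2 MB hugepages
VCPU   = 4
MEMORY = 4096
TOPOLOGY = [
  PIN_POLICY    = "THREAD",
  SOCKETS       = 1,
  CORES         = 2,
  THREADS       = 2,
  HUGEPAGE_SIZE = 2 ]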

The appearance of such functionality is great news!

Hello, great work! I tested it in the lab and it is working well, but I have one problem with this setup.

I have mixed hosts in my infrastructure: some have pinning enabled and some do not. The pinning and topology setup is suitable for some use cases, but not all, because it lacks the ability to overprovision.

To get the best performance in a CPU overprovisioning setup, I use the following defaults in the vmm_exec_kvm.conf file:

RAW      = "
<cpu mode='custom' match='exact'>
    <model fallback='forbid'>Haswell-noTSX-IBRS</model>
    <feature policy='require' name='vmx'/>
    <feature policy='require' name='rdrand'/>
    <feature policy='require' name='pcid'/>
    <feature policy='require' name='pdpe1gb'/>
    <feature policy='require' name='md-clear'/>
</cpu>
<numatune>
    <memory mode='strict' placement='auto'/>
</numatune>
<memoryBacking>
    <hugepages/>
</memoryBacking>
<devices>
    <memballoon model='none'/>
</devices>
"

However, I hit a problem: an incompatibility with the OpenNebula NUMA topology setting, because the <numatune> and <memoryBacking> XML nodes conflict. The only workaround is to define this XML per VM in the VM template, which is not handy.
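
For clarity, the per-VM workaround looks roughly like this in the VM template (a shortened sketch, the same XML as above moved into the RAW attribute):

RAW = [
  TYPE = "kvm",
  DATA = "
<numatune>
    <memory mode='strict' placement='auto'/>
</numatune>
<memoryBacking>
    <hugepages/>
</memoryBacking>" ]

but maintaining this block in every affected template does not scale.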

I am asking for this issue to be extended with the ability to set such defaults when a VM is deployed to non-pinned hosts, perhaps by extending the mentioned vmm_exec_kvm.conf or by adding some per-host configuration?

Thank you in advance for considering this.

Hello @feldsam,
I noticed you've already merged Cluster and Host-wide KVM configuration into your repository, so you are now able to specify the RAW attribute per host. Did it help you with your issue? Is the conflict still there?
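
If I read that change correctly, you should now be able to put those defaults into the host template itself, roughly like this (a rough sketch, assuming the host-level RAW attribute is merged into the deployment file the same way as the one from vmm_exec_kvm.conf; host ID 3 is just an example):

$ onehost update 3

RAW = "
<cpu mode='custom' match='exact'>
    <model fallback='forbid'>Haswell-noTSX-IBRS</model>
</cpu>
<memoryBacking>
    <hugepages/>
</memoryBacking>
"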

As your issue is related to https://github.com/OpenNebula/one/issues/3664: do you still need to pin CPUs to a NUMA node?

Hello @pczerny, thanks for the message. I was playing with the virsh configuration and figured out that

<numatune>
    <memory mode='strict' placement='auto'/>
</numatune>

pins the CPU to a single NUMA node only when the vCPU count is 1. When the VM has more vCPUs, it does not work. So the only and best solution is to pin to the NUMA node programmatically. VMS_THREADS is fine, but it does not pin to all CPUs in a particular NUMA node. When I was testing "manual" pinning (Automatic vCPU pinning using VM Hooks), after a few months in production I found that VM performance was worse.

So could you please extend NUMA pinning with an option to pin to all CPU cores, CPU threads, or cores/threads in a particular NUMA node? Then I can cherry-pick the changes and test again.
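
To make the request concrete: for a 4-vCPU VM placed on NUMA node 0, I would like the generated deployment file to contain something like this (a hand-written sketch; CPUs 0-7 on node 0 are just an example of my hosts' layout):

<vcpu placement='static'>4</vcpu>
<cputune>
    <!-- every vCPU allowed on any core/thread of NUMA node 0 -->
    <vcpupin vcpu='0' cpuset='0-7'/>
    <vcpupin vcpu='1' cpuset='0-7'/>
    <vcpupin vcpu='2' cpuset='0-7'/>
    <vcpupin vcpu='3' cpuset='0-7'/>
</cputune>
<numatune>
    <memory mode='strict' nodeset='0'/>
</numatune>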