Using one of two sockets


I would like to know if OpenNebula can be configured to use only one of two sockets. My server is a dual-socket machine with two 32-core CPUs. I now need to “disable” one socket (reserved for the SLURM scheduler), so I need to configure OpenNebula to show only half of the CPUs.
I have seen in the “Infrastructure → Host” menu that I can share fewer CPUs, but I NEED all cores shared in OpenNebula to belong to the same CPU.


Hi @Daniel_Ruiz_Molina ,

If you are asking about the KVM hypervisor, you could use Linux cgroups to assign processors to the machine.slice where the VMs run, and use OpenNebula’s CPU over-subscription option to subtract the number of reserved CPUs.

Here is an example for CentOS 7. For other OSes you should configure cgroups following the OS documentation.

Unfortunately systemd is not cpuset-aware, so you must install some legacy tools first:

yum install libcgroup libcgroup-tools

Locate the CPUs that you will assign to VMs. In this example I’ll use the second socket (NUMA node 1):

# numactl -H
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 20 21 22 23 24 25 26 27 28 29
node 0 size: 96938 MB
node 0 free: 90652 MB
node 1 cpus: 10 11 12 13 14 15 16 17 18 19 30 31 32 33 34 35 36 37 38 39
node 1 size: 98304 MB
node 1 free: 80030 MB
node distances:
node   0   1 
  0:  10  21 
  1:  21  10 
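As a quick sketch, the node 1 CPU list from that output can be turned into a comma-separated list that cpuset accepts (here using the sample line copied from above; on a real host you would pipe `numactl -H | grep '^node 1 cpus:'` instead):

```shell
# Sample line copied from the numactl -H output above
line='node 1 cpus: 10 11 12 13 14 15 16 17 18 19 30 31 32 33 34 35 36 37 38 39'
# Strip the prefix and join the CPU IDs with commas -- valid cpuset.cpus syntax
cpus=$(echo "$line" | sed 's/^node 1 cpus: //; s/ /,/g')
echo "$cpus"
```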

Then create the cgroup configuration file, restricting CPUs to node 1 but leaving the memory of both sockets available:

cat >/etc/cgconfig.d/machine.slice.conf <<EOF
group machine.slice {
    cpuset {
        cpuset.cpus = "10-19,30-39";
        cpuset.mems = "0-1";
    }
}
EOF

Then enable the service that applies it:

systemctl enable cgconfig.service

I’d recommend rebooting the host to be sure that the configuration is applied correctly on boot.

To update the reservation use the formula RESERVED_CPU = NrReservedCPUs * 100 (OpenNebula counts CPU capacity in units of 100 per thread):

echo "RESERVED_CPU=2000" >./host.template
onehost update $HOST_ID --append ./host.template
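The formula above can be sketched in shell (assuming the 20 threads of socket 0 are being reserved, as in the numactl output):

```shell
# RESERVED_CPU = number of reserved threads * 100
RESERVED_THREADS=20          # threads on socket 0, kept away from VMs
RESERVED_CPU=$((RESERVED_THREADS * 100))
echo "RESERVED_CPU=$RESERVED_CPU"
```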

Now OpenNebula’s scheduler will know that you have only 20 CPUs for VMs.

Hope this helps.

Best Regards,
Anton Todorov


Thanks a lot!!!

But one question… I’m a complete newbie with cgroups. After reading your post, I don’t see where the “connection” is between the cgroups configuration file (for machine.slice) and OpenNebula. In other words, my understanding is that machine.slice.conf applies to all services, not just OpenNebula, right?

Is there a way to configure that slice so that cpuset.cpus="10-19,30-39" limits only the OpenNebula daemons?


OpenNebula is not involved at all, because it is also not cgroups-aware :slight_smile:

The configuration is done at the host level. The libvirtd daemon starts the VMs in machine.slice, so we are defining which CPUs (threads, actually) will be used by the VM processes.

OpenNebula will continue to receive the full count of CPUs via the monitoring probes. With the RESERVED_CPU option we tell OpenNebula to subtract the given number of CPUs from the number received via monitoring.
This way OpenNebula will “know” the maximum number of CPUs (threads) that VMs may use.
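The accounting can be sketched as follows (assuming the 40 threads of the host above, with the 20 threads of socket 0 reserved):

```shell
TOTAL_CPU=4000       # 40 threads * 100, as reported by the monitoring probes
RESERVED_CPU=2000    # 20 threads on socket 0, reserved away from VMs
echo "usable threads for VMs: $(( (TOTAL_CPU - RESERVED_CPU) / 100 ))"
```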

I am curious how you reserve CPUs for the SLURM scheduler, then? :confused:



For the moment, I’m not “reserving” CPUs for the SLURM scheduler. I have simply configured my server in slurm.conf with half of the total number of CPUs (NodeName=myserver CPUs=32 SocketsPerBoard=1 CoresPerSocket=16 ThreadsPerCore=2 RealMemory=515703 TmpDisk=270000).
With this line, SLURM will only take 32 cores, but I am not sure whether those 32 cores will be from the same socket (I suppose so, because I’m specifying “SocketsPerBoard=1”… but which socket?).

Now, if I apply the cgroups configuration you sent me, I will be able to specify which cores (0-2-4-…), because with “numactl -H” I can get the core and socket distribution… So now my “problem” is how to force SLURM to use the first or the second socket.