Scheduler should go through SCHED_REQUIREMENTS before checking CPU capacity

The logic shown in the logs is confusing us. We have two clusters: one with Mac Minis and one with regular PC hardware running KVM. If I want to launch a macOS VM, it is flagged as “parallels”, and if it’s a Linux VM I want to run, it’s flagged “kvm”. However, the scheduler doesn’t look at these requirements first; instead it checks whether a host has sufficient CPU and prints something like:

Host 85 discarded for VM 2230871. Not enough CPU capacity: 400/0

Now, even if the CPU capacity had been enough, it would then have printed something like:

Wed Aug 4 07:53:37 2021 [Z0][SCHED][D]: Host 49 discarded for VM 2230701. It does not fulfill SCHED_REQUIREMENTS: (CLUSTER_ID = 100) & !(PUBLIC_CLOUD = YES) & !(PIN_POLICY = PINNED) & ( (CLUSTER_ID = 100) & (HYPERVISOR = parallels) )
Wed Aug 4 07:27:23 2021 [Z0][SCHED][D]: Host 79 discarded for VM 2230528. It does not fulfill SCHED_REQUIREMENTS: (CLUSTER_ID = 0) & !(PUBLIC_CLOUD = YES) & !(PIN_POLICY = PINNED) & ( (CLUSTER_ID = 0) & (HYPERVISOR = kvm) )
depending on what we wanted.

The scheduler log now takes more time to search through and analyze, because the first and obvious reason for rejection is hidden behind the CPU check.

Would it also be faster to just check the requirements first instead of querying the CPU load?

I’m guessing the part that should be performed later is one/ at one-6.0 · OpenNebula/one · GitHub, and one/ at one-6.0 · OpenNebula/one · GitHub should be performed before it.